How is Deckhouse different from Amazon EKS, GKE, AKS and other Kubernetes services?

Global and regional cloud providers offer various Kubernetes as a service (KaaS) products. Can the Deckhouse Platform be directly compared with KaaS? The answer is no, for many reasons. Let us list the essential ones.

Note: below we compare only those aspects of KaaS platforms and Deckhouse that overlap in functionality.

Deckhouse will soon be able to work “on top of” EKS, AKS, GKE, and other KaaS platforms

More than just a cluster

Common KaaS services are essentially DIY kits that involve plenty of manual activities.

KaaS services are basically “vanilla” Kubernetes running in the provider’s cloud. The KaaS user manages most of the cluster components themselves. Depending on the provider, the user may be responsible for upgrading nodes by recreating virtual machines from the updated images the provider supplies. The user is also responsible for installing and managing the third-party solutions essential for the proper functioning of a production cluster.

Fully automated solution

Deckhouse is a feature-complete platform that includes additional modules for monitoring, traffic balancing, autoscaling, secure access, etc., in addition to “vanilla” Kubernetes. The modules are pre-configured, integrated with each other, and ready to use. All cluster and platform components are managed (and updated) in a fully automated fashion.

Hybrid infrastructure

Deckhouse supports hybrid infrastructure for K8s. You can create clusters in the clouds of different vendors and manage them as a unified infrastructure.

No vendor lock-in

Deckhouse is independent of the cloud infrastructure provider. You can migrate your cluster between various clouds.

Same management API for any cloud

The Deckhouse user manages Kubernetes via the pure Kubernetes API instead of the providers’ custom APIs. To make this possible, Deckhouse relies on the Custom Resources mechanism.
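For illustration, a group of cloud nodes in Deckhouse is described by a regular Kubernetes object. The sketch below only shows the general shape of such a custom resource; the `NodeGroup` kind and its fields may differ between Deckhouse versions, so consult the documentation of your release:

```yaml
# Illustrative NodeGroup custom resource describing a group of
# auto-provisioned cloud worker nodes. It is managed with standard
# Kubernetes tooling, not a provider-specific API.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: CloudEphemeral   # nodes are created and deleted automatically
  cloudInstances:
    minPerZone: 1
    maxPerZone: 5
```

Such an object is applied with the usual tooling, e.g. `kubectl apply -f worker.yaml`, like any other Kubernetes resource.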


Most cloud providers maintain only the Control Plane components as part of their managed offerings. That is, they support and upgrade software related to etcd, controller-manager, API server, etc. Thus, the user manages the K8s node-related software as well as all the auxiliary modules. Such a service model is called “shared responsibility”; EKS, AKS, and GKE use it.

In contrast, Deckhouse manages all the components of Kubernetes and the platform: it configures/upgrades them and keeps their configurations up-to-date.


|  | KaaS: building blocks | KaaS: the operation process | Deckhouse: building blocks and the operation process |
|---|---|---|---|
| Control Plane: etcd, controller-manager, scheduler, API server, cloud-controller-manager | Provider | Provider | Deckhouse |
| CNI (container network interface) | Provider | User | Deckhouse |
| CSI (container storage interface) | Provider | User | Deckhouse |
| Cluster-autoscaler support | Provider | User | Deckhouse |
| Network infrastructure* | Provider | User | Deckhouse |
| Technical support | Provider**** | User | Flant |

* Building blocks: cloud elements, such as VPC, virtual router, network policy, etc. The operation process: installing and configuring all components and their relationships via the API or web interface.
** Building blocks: monitoring platform, system software, recommended settings. The operation process: installing, configuring, and maintaining software.
*** Building blocks: new versions of the system software, new configuration examples. The operation process: software and settings updates.
**** Part of the contract; however, it is not included in the cost of resources and is charged separately


Seamless automatic upgrades

Cloud providers support no-downtime background upgrades only for some Control Plane components. The user is responsible for upgrading all other cluster components, and the provider offers no guarantees that the containers will keep running smoothly.

Deckhouse upgrades Control Plane (and all its parts), K8s components running in containers, and software on nodes on the fly. Deckhouse can automatically upgrade the Linux kernel/runtime environment. However, these upgrades involve downtime that the user can control.

Deckhouse provides five release channels with various stability levels, starting with Alpha and ending with the most stable Rock Solid channel. The user can switch between release channels.
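As a sketch of how a channel might be selected declaratively (the resource kind and field names here are an assumption; check your version’s documentation for the exact schema):

```yaml
# Hypothetical module configuration pinning the platform
# to the Stable release channel.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    releaseChannel: Stable
```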


Supported kernel/runtime environment upgrade modes

  • At any time.
  • During pre-defined periods (coming soon).
  • After confirmation by the user.
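The modes above can be expressed as a declarative setting. A hedged sketch (assumed field names) of switching from fully automatic upgrades to upgrades that require the user’s confirmation:

```yaml
# Hypothetical fragment of the platform's update settings:
# upgrades are prepared automatically but applied only after
# the user confirms them.
settings:
  update:
    mode: Manual
```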

|  | KaaS: responsibility zone | KaaS: downtime | Deckhouse: responsibility zone | Deckhouse: downtime** |
|---|---|---|---|---|
| Control Plane: etcd, controller-manager, scheduler, API server, cloud-controller-manager | Provider | No | Deckhouse | No |
| CNI (container network interface) | User | Possible | Deckhouse | No |
| CSI (container storage interface) | User | Possible | Deckhouse | No |
| Network infrastructure* | User | Possible | Deckhouse | No |

* Nodes get re-created, containers are migrated to new nodes.
** Except for the Linux kernel and incompatible container runtime upgrades.

Simplicity and convenience

KaaS: each provider has its own proprietary API

Each cloud provider has its own tools for managing cloud and K8s resources: an API, CLI utilities, a web interface. The user relies on them to manage the cloud, the cluster, and all the linked infrastructure. kubectl only lets you interact with workloads: create them, delete them, or run them in the cluster. The user is forced to use the provider’s interface to, for example, add nodes to the cluster, remove them, or update the system software.

Deckhouse: a single, well-known API in any cloud

Deckhouse uses the classic Kubernetes API to manage K8s, additional components, and low-level infrastructure. At the same time, you can use any standard tools: kubectl, Helm, GitOps utilities like werf or Argo CD.


Deckhouse offers several features that cloud providers either do not support or support only partially.

At least 5 Kubernetes versions

The user can select either the latest K8s release or any of the four preceding ones.

Flexible Control Plane management

Deckhouse offers several Control Plane placement strategies to ensure high availability. The strategies differ in their reliability:

  • a single virtual machine (VM);
  • a cluster consisting of 3 VMs located in the same or different zones;
  • a cluster consisting of 5 VMs; in this case, 3 VMs are used for the etcd store, and the remaining two run the scheduler, API server, and cloud-controller-manager.

Managing Feature Gates (coming soon)

Using the Feature Gates option, the Deckhouse user can enable and disable feature sets relevant to Kubernetes components, including experimental ones: alpha, beta, etc. Deckhouse supports all the features outlined in the Kubernetes documentation.

Hybrid clusters

You can deploy your cluster both on bare-metal servers and virtual machines (VMs). Note that Deckhouse can automatically scale a bare-metal cluster using VMs. It might come in handy when you need to scale up your computing power quickly, but no spare bare-metal server is available.

External authentication and authorization

With most KaaS providers, access to the Kubernetes cluster relies on the provider’s proprietary authentication mechanisms. Suppose you use an external LDAP service to control access to the cluster. If you migrate to KaaS, you will not be able to pair the provider’s K8s infrastructure with that LDAP service.

Deckhouse avoids this problem by implementing two functions for accessing the API server: OIDC support for authentication and webhook support for authorization.
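As an illustration, pairing the cluster with an external LDAP directory could look like the following custom resource. The `DexProvider` kind and its fields are an assumption for this sketch; the exact schema depends on the Deckhouse version:

```yaml
# Hypothetical provider definition federating a corporate LDAP
# directory through the platform's OIDC-based authentication.
apiVersion: deckhouse.io/v1
kind: DexProvider
metadata:
  name: corporate-ldap
spec:
  type: LDAP
  displayName: Corporate LDAP
  ldap:
    host: ldap.example.com:636          # hypothetical host
    bindDN: cn=admin,dc=example,dc=com  # service account used for lookups
```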

Cilium support

Deckhouse provides official support for the Cilium-based K8s network infrastructure.

Cilium is an open source solution that improves connectivity, observability, and security of the Kubernetes network infrastructure. Features of Cilium:

  • high performance and low latency;
  • highly flexible and scalable (up to 5000 nodes in the cluster);
  • efficient traffic balancing;
  • transparent cluster interaction;
  • advanced network access control;
  • additional cryptographic protection and much more.

Advanced autoscaling capabilities

Standby instances

In addition to the standard pre-configured autoscaling, you can use so-called standby (reserve) instances. A standby instance is a pre-provisioned, running virtual machine that is ready to join the pool of active VMs at any moment. The user sets the share of standby instances. Suppose the percentage of standby instances is 10. In this case, if autoscaling increases the number of worker nodes from 10 to 50, the number of standby instances also grows from 1 to 5.

Standby instances improve cluster stability when the load sharply increases: the reserve VMs are already running and can be added to the cluster instantly if necessary. In contrast, it takes 2-3 minutes to start regular VMs and add them to the cluster (a delay that can be fatal for some services).
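A hedged sketch of such a configuration (assumed field names): a node group that keeps 10% of its capacity as pre-warmed standby instances and spreads nodes across three zones:

```yaml
# Hypothetical node group: up to ~50 workers, 10% of them kept
# as running standby instances, distributed across three zones.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: CloudEphemeral
  cloudInstances:
    minPerZone: 3
    maxPerZone: 17
    standby: 10%
    zones:           # hypothetical zone names
      - eu-west-1a
      - eu-west-1b
      - eu-west-1c
```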

Preemptible VMs

Deckhouse supports spot/preemptible VMs. These are cheaper than regular ones and are allocated based on the results of a bidding (“auction”) process. The trade-off is lower fault tolerance: the provider can reclaim such VMs at any time.

Distributing VMs between zones uniformly

You can distribute nodes of the cluster across several zones without losing performance, thus increasing its fault tolerance.
