The product

NoOps Kubernetes platform

Deckhouse is a Kubernetes platform that allows you to create homogeneous K8s clusters on any infrastructure. It manages clusters comprehensively and “automagically” and provides all necessary modules and add-ons for autoscaling, observability, security, and service mesh implementation. Deckhouse has vanilla Kubernetes under the hood and integrates a balanced set of Open Source tools that have become the industry standard.

Deckhouse is CNCF certified.

Why us

Why do I need Deckhouse Platform?

  • Out-of-the-box secure configuration of the Kubernetes cluster: least privilege for all components, a pre-configured role model, end-to-end object identity in the audit system, and integration with external directory services.
  • Best practice compliance: regular infrastructure checks to ensure CIS Benchmark compliance, scanning images for vulnerabilities.
  • Network security: managing network policies in one place, monitoring of all incoming and outgoing connections, visualizing network activities.
  • Control of running applications: built-in implementation of Pod Security Standards and a ready-to-use, extensible set of recommended policies.
  • Security event auditing and logging: an out-of-the-box, expandable set of rules for identifying security events, flexible filtering of the events sent to the SIEM system, and notifications about security incidents delivered to information security personnel.
  • Focus on application development. Your infrastructure simply becomes a well-known, transparent API that can be comprehensively managed anywhere (any clouds, bare metal servers or hybrid).
  • Build and update apps quickly. Benefit from a true CI/CD enabler.
  • Deploy & test new features without limits: create new environments whenever you need them.
  • Enjoy automated canary deployments and service mesh with Istio.
  • Boost your observability with easy-to-use dashboards.
  • Use custom metrics for out-of-the-box autoscaling.
  • Forget about low-level Kubernetes machinery and focus on real DevOps and SRE, rather than the infrastructure itself.
  • Run ready-to-use Kubernetes clusters in less than 10 minutes.
  • Roll out completely identical clusters anywhere: bare metal servers, private clouds, or public clouds.
  • Leverage integrated, ready-to-use Horizontal Pod Autoscaler (HPA) support for your apps (see the sample manifest after this list).
  • Never miss important updates from upstream Kubernetes, certificate renewals, etc.
  • Get out-of-the-box security features including single sign-on, support for external authentication providers, and role-based access control.
  • Leverage the latest containerization technology to build cutting edge services and products.
  • Accelerate time-to-market with properly rolled out & maintained Kubernetes.
  • Boost team efficiency 300% by equipping them with the best OSS tools for Kubernetes.
  • Save on infrastructure with automatic downscaling. Consume only what you need right now.
  • Get a CNCF-certified platform based on upstream Kubernetes and an Open Source core with no vendor lock-in.
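
Several of the points above, such as autoscaling on custom metrics and ready-to-use HPA support, boil down to standard Kubernetes objects that the platform wires up for you. As a rough illustration only (the Deployment name, labels, metric name, and thresholds below are hypothetical, not anything shipped with Deckhouse), a typical autoscaling/v2 HorizontalPodAutoscaler could look like this:

    # Minimal sketch of an upstream autoscaling/v2 HorizontalPodAutoscaler;
    # all names and numbers are illustrative placeholders.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70
        - type: Pods                      # custom, application-level metric
          pods:
            metric:
              name: requests_per_second   # hypothetical metric name
            target:
              type: AverageValue
              averageValue: "100"
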
Features

What makes Deckhouse Platform special?

NoOps

Deckhouse automates many routine deployment, scaling and infrastructure management operations out of the box. It manages system software on the nodes (kernel, CRI, kubelet), basic Kubernetes components (control plane, etcd, certificates, etc.) and its own modules.

On top of that, Deckhouse automatically updates all cluster components (Kubernetes, Deckhouse modules, external tools like Istio and Grafana) within a month of their upstream upgrades.

All you have to do to upgrade to the next minor Kubernetes version is to edit a line in the configuration file (upgrading to patch versions is automatic).
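
For instance (a minimal sketch, assuming the ClusterConfiguration resource described in the Deckhouse documentation; the version value is just an example), that single line lives in the cluster configuration:

    # Illustrative ClusterConfiguration fragment: changing kubernetesVersion
    # (e.g. from "1.26" to "1.27") is enough to trigger a minor version upgrade.
    apiVersion: deckhouse.io/v1
    kind: ClusterConfiguration
    # ... other fields omitted ...
    kubernetesVersion: "1.27"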

SLA BY DESIGN

Reliability is our cornerstone: the platform embraces the NoOps approach, and everything undergoes careful testing prior to release.

That is why we can provide an SLA of 99.95%* even without direct access to your infrastructure.

* To enable this, the architecture of your clusters must first be approved by our engineers and our guidelines must be followed.
* This applies to Enterprise Edition only.

RUNS ANYWHERE

Clusters are infrastructure-agnostic: they can be deployed on a public cloud of your choice (AWS, GCP, Microsoft Azure, OVH Cloud) or even on bare metal servers.

Self-hosted cloud solutions (OpenStack and vSphere) are supported as well*.

Kubernetes clusters created with Deckhouse are entirely identical no matter which underlying infrastructure is used. All platform features & modules are available everywhere.

* This applies to Enterprise Edition only.

INDUSTRY-TRUSTED SOLUTION

It’s 100% vanilla Kubernetes: we closely follow the upstream version of Kubernetes.

It is based on the shell-operator & addon-operator Open Source projects, which have been around for 2+ years and have been adopted by a variety of vendors.

Avoid vendor lock-in thanks to the Open Source (and free) core of the platform.

The Enterprise Edition development process is also public (on GitHub), and all of its source code is open (though not free).

KUBERNETES, THE EASY WAY

Deploying Deckhouse is as easy as can be: a couple of CLI commands, 8 minutes, and you’ve got production-ready Kubernetes.

Ready-to-use configurations are available for each cloud provider; just choose the one that suits you best.

Even deploying on bare metal is no longer a big deal.
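
To give a feel for what such a ready-to-use configuration looks like, here is a hedged, illustrative fragment of the cluster configuration the installer consumes; the provider, prefix, and subnet values are placeholders, additional provider-specific and init resources are omitted, and the getting-started guide remains the source of truth for the exact fields:

    # Illustrative ClusterConfiguration fragment (all values are placeholders).
    apiVersion: deckhouse.io/v1
    kind: ClusterConfiguration
    clusterType: Cloud                # "Static" is used for bare metal clusters
    cloud:
      provider: AWS                   # assumption: any supported provider can go here
      prefix: demo
    podSubnetCIDR: 10.111.0.0/16
    serviceSubnetCIDR: 10.222.0.0/16
    kubernetesVersion: "Automatic"
    clusterDomain: cluster.local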

Getting started
A FULLY-FEATURED PLATFORM

Get auto-scaling, observability, security, and a service mesh right out of the box!

Kubernetes has a lot to offer, but going that route on your own brings extra configuration, integration, and maintenance, which quickly becomes quite a feat to handle. Do it the easy way with Deckhouse!

Our basic principles are fully-featured Kubernetes, total integrity across all components, and simplicity without sacrificing flexibility. Learn more about the advantages of Deckhouse over EKS, GKE, AKS, and other KaaS platforms.

For a more comprehensive understanding of the modules available in Deckhouse, read the documentation.

Deckhouse vs. KaaS
SECURITY

Deckhouse provides an advanced set of tools to create a genuinely secure production environment.

We use secure software for all platform components. All images get pulled strictly from the Deckhouse repository. A set of strict policies and restrictions controls the interaction of cluster components and the platform.

The features of the platform include: secure access to the cluster and components; flexible management of network policies; auditing K8s events; managing TLS certificates; automatic updating of platform and cluster components; monitoring CVEs for the software used.
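
As one small illustration of the network policy side (a plain upstream networking.k8s.io/v1 NetworkPolicy; the namespace, labels, and port below are hypothetical and not specific to Deckhouse), such a policy might restrict ingress to a backend like this:

    # Sketch of a standard Kubernetes NetworkPolicy; all names are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: demo
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080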

Learn more about security

Testimonials

Success stories from our clients

Distributed infrastructure management

The retailer had a bunch of Kubernetes clusters hosted in various data centers and clouds. The challenge was to find a cost-effective way to manage its infrastructure.

Leroy Merlin Russia selected Deckhouse Platform as the tool to manage all of their clusters across 4 different data centers and clouds (OpenStack, vSphere, Yandex.Cloud). Since the implementation, the customer has benefited from greatly improved observability and single-window control.

High loads are no longer a problem

All of SimpleTexting’s services ran on virtual machines, and the company had to ensure that high loads could be handled without any hiccups.

SimpleTexting selected Deckhouse as the most convenient NoOps Kubernetes platform, in particular enjoying the advantages of autoscaling as traffic continues to spike.

Kubernetes at first sight

Lalafo had never used Kubernetes before, but they were having trouble handling high loads and were in search of a failover solution, so they decided to give it a shot.

The Deckhouse clusters at lalafo have provided the required failover capabilities, utilizing the full power of Istio to organize a service mesh.

Scalability unlocked

3Commas had deployed all of its microservices on VMs but it was held back by a lack of scalability and robustness in the event of rapid traffic influxes.
