Before deploying a cluster running Deckhouse Kubernetes Platform, you have to plan the configuration of the future cluster and decide on the parameters of its nodes (CPU, RAM, disk space, and so on).

Installation Planning

Before deploying a cluster, you need to estimate the resources required to run it. The following questions will help you plan ahead:

  • What is the expected load?
  • Does your cluster need to handle an increased load?
  • Does your cluster require a high availability mode?
  • Which DKP modules do you intend to use?

The answers to these questions can help you estimate the number of nodes recommended for your cluster deployment. See Deployment Scenarios to learn more.

The information below applies to a Deckhouse Kubernetes Platform installation running the Default module set.
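For reference, the module set is selected via the `bundle` parameter of the `deckhouse` module configuration. A minimal sketch, assuming the standard ModuleConfig schema (verify it against the docs for your platform version):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    bundle: Default  # the module set assumed by the sizing guidance below
```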

Deployment Scenarios

This section helps you estimate the resources required for the cluster based on the expected load.

| Cluster configuration | Master nodes | Worker nodes | Frontend nodes | System nodes | Monitoring nodes |
|---|---|---|---|---|---|
| Minimum | 1 | at least 1 | - | - | - |
| Typical | 3 | at least 1 | 2 | 2 | - |
| Increased load | 3 | at least 1 | 2 | 2 | 2 |

Where:

  • master nodes — nodes that manage the cluster
  • worker nodes — nodes that run user applications
  • frontend nodes — nodes that balance incoming traffic; Ingress controllers run on them
  • system nodes — nodes dedicated to running Deckhouse modules
  • monitoring nodes — nodes dedicated to running the monitoring stack (e.g., Prometheus and Grafana)

See Configuration Features of the “Going to Production” section for details on these node types.
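As an illustration of how nodes are dedicated to a role, below is a hedged NodeGroup sketch for system nodes. The `node-role.deckhouse.io/system` label and the `dedicated.deckhouse.io` taint key follow the conventions used in the Deckhouse docs; confirm them for your version:

```yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system
spec:
  nodeType: Static  # bare-metal/VM nodes; cloud clusters typically use CloudEphemeral
  nodeTemplate:
    labels:
      node-role.deckhouse.io/system: ""
    taints:
      - effect: NoExecute
        key: dedicated.deckhouse.io
        value: system
```

With a taint like this in place, only components that tolerate `dedicated.deckhouse.io: system` are scheduled onto these nodes, which keeps user workloads off the dedicated nodes.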

Features of the configurations listed in the table above:

  • Minimum — suitable for small, light-load projects with low reliability requirements. It is up to you to define the characteristics of the worker node based on the expected user load. Note that in this configuration, some of the DKP components will also run on the worker node.

    Such a configuration is risky: if the single master node fails, the entire cluster will be affected.

  • Typical — This is the recommended configuration. With three master nodes, the cluster retains etcd quorum and remains operational if a single master node fails, which greatly improves service availability.
  • Increased load — Unlike the typical configuration, this configuration includes dedicated monitoring nodes, enabling a high level of observability in the cluster even under high loads.

Deciding on the amount of resources needed for nodes

| Requirement level | Node type | CPU (cores) | RAM (GB) | Disk space (GB) |
|---|---|---|---|---|
| Minimum* | Master node | 4 | 8 | 60 |
| | Worker node | 4 | 8 | 60 |
| | Frontend node | 2 | 4 | 50 |
| | Monitoring node | 4 | 8 | 50 |
| | System node | 2 | 4 | 50 |
| | System node (if no dedicated monitoring nodes are running in the cluster) | 4 | 8 | 60 |
| Production | Master node | 8 | 16 | 60 |
| | Worker node | 4 | 12 | 60 |
| | Frontend node | 2 | 4 | 50 |
| | Monitoring node | 4 | 8 | 50 |
| | System node | 6 | 12 | 50 |
| | System node (if no dedicated monitoring nodes are running in the cluster) | 8 | 16 | 60 |
| Single master node cluster | Master node | 6 | 12 | 60 |

* How the cluster runs on minimum-requirement nodes largely depends on which DKP modules are enabled. We recommend increasing node resources if a large number of modules is enabled.
  • The parameters of worker nodes are largely dictated by the nature of the workload running on them; the table lists the minimum requirements.
  • Note that all nodes require high-performance disks (400+ IOPS); a quick way to check this is sketched below.
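A rough way to verify the IOPS requirement on a candidate disk is a short random-write benchmark with `fio`. A sketch; the test file path is hypothetical, and you should adjust the size and runtime to your environment:

```shell
# Random 4K writes with direct I/O; compare the reported IOPS
# against the 400+ requirement.
fio --name=iops-check --filename=/var/lib/iops-test.fio --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=16 --runtime=60 --time_based --group_reporting
rm -f /var/lib/iops-test.fio  # clean up the test file
```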

Single master node cluster

Such clusters lack fault tolerance. We highly advise against using this kind of cluster in production environments.

In some cases, a single-node cluster is enough; the node then takes on all the node roles described above. For example, this may be useful if you just want to familiarize yourself with the technology or run some fairly lightweight workloads.

The Getting Started guide contains instructions for deploying a single master node cluster. Once you un-taint the node (see the sketch below), it will run all cluster components included in the selected module bundle (bundle: Default by default). To successfully run a cluster in this mode, you will need at least 16 CPUs, 32 GB of RAM, and 60 GB of space on a high-performance disk (400+ IOPS). Such a configuration would allow some workloads to be run.
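Un-tainting the master node usually amounts to removing the taint template from the master NodeGroup. A sketch; the authoritative command is in the Getting Started guide for your version:

```shell
# Remove the taints from the master NodeGroup template so regular
# workloads can be scheduled on the single node.
kubectl patch nodegroup master --type json \
  -p '[{"op": "remove", "path": "/spec/nodeTemplate/taints"}]'
```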

With this configuration, a load of 2,500 RPS on a typical web application (e.g., a static Nginx page) running in 30 pods, with 24 Mbps of incoming traffic, will result in approximately the following resource consumption:

  • CPU load will increase to ~60% in total
  • RAM and disk consumption will remain largely unchanged; ultimately, these figures depend on the number of metrics collected and the nature of the workload being run

We recommend load testing your application and adjusting the server capacity accordingly.
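For example, a simple constant-rate test can be run with a tool such as `hey` (a sketch; the URL is hypothetical): 50 concurrent workers at 50 requests per second each approximates the 2,500 RPS figure above.

```shell
# ~2,500 RPS for 5 minutes: 50 workers x 50 req/s each.
# Replace the URL with your own test deployment.
hey -z 300s -c 50 -q 50 https://app.example.com/
```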

Node Hardware Requirements

The machines you intend to turn into nodes of your future cluster must meet the following requirements:

  • CPU architecture — all nodes must use the x86_64 CPU architecture (a quick check is shown below)
  • Identical nodes — all nodes of the same type must have the same hardware configuration: the same make and model with identical CPU, memory, and storage
  • Network interfaces — each node must have at least one network interface for the routed network
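These points can be verified on a candidate machine with standard Linux utilities (output formats vary by distribution):

```shell
uname -m           # must print x86_64
ip -br link        # at least one interface on the routed network
free -h && df -h   # RAM and disk capacity, for comparison against the tables above
```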

Network Requirements

  • Nodes must be able to access each other over the network, and the platform's network policies must be satisfied.
  • There are no MTU requirements.
  • Each node must have a permanent IP address. If you use a DHCP server to assign addresses to nodes, configure it to explicitly assign a fixed address to each node (a static lease). Avoid changing node IP addresses.
  • Master nodes must be able to access time servers external to the cluster via NTP. Cluster nodes use master nodes to synchronize time, but can also synchronize with other time servers (see the ntpServers parameter and the sketch after this list).
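A minimal sketch of pointing the platform at your own time servers, assuming the ntpServers parameter lives in the chrony module's ModuleConfig (confirm the schema for your Deckhouse version; the server names are placeholders):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: chrony
spec:
  version: 1
  settings:
    ntpServers:
      - ntp.example.com  # placeholder; must be reachable from the master nodes
      - pool.ntp.org
```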

Community

Join our Telegram channel to stay up to date.

Join the Deckhouse community for updates on important developments and news. There, you can chat with other users and learn from their experience, which will help you avoid many common mistakes.

The Deckhouse team knows firsthand the dedication it takes to set up and orchestrate a production Kubernetes cluster. We’re thrilled if Deckhouse empowers you to bring your vision to life. Share your journey and ignite others to embark on their own Kubernetes endeavors!