Deckhouse Platform on OpenStack

Configure cluster

This template is used for the domains of system apps within the cluster; e.g., with the %s.example.com template, Grafana will be available at grafana.example.com.
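For illustration, this template corresponds to the publicDomainTemplate parameter in the global module settings. A minimal sketch, assuming a ModuleConfig resource and example.com as a placeholder domain:

```yaml
# Sketch: DNS name template for Deckhouse system apps.
# example.com is a placeholder; substitute your own domain.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      publicDomainTemplate: "%s.example.com"
```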
This prefix is used for names of cluster objects created by Deckhouse (virtual machines, networks, security policies, etc.).
This key is passed to the cloud provider during the virtual machine creation process.
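The prefix and the SSH key land in the bootstrap configuration. A minimal sketch, assuming the cloud.prefix and sshPublicKey parameters (all values are placeholders; required provider settings are omitted):

```yaml
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: OpenStack
  prefix: demo                      # prefix for names of cluster objects
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
clusterDomain: cluster.local
---
apiVersion: deckhouse.io/v1
kind: OpenStackClusterConfiguration
layout: Standard
sshPublicKey: "ssh-rsa AAAA..."     # passed to the cloud provider at VM creation
# ... provider credentials and other required fields omitted
```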

Select layout

Layout is the way resources are located in the cloud. There are several pre-defined layouts.

Standard

[Layout resources diagram]

In this scheme, an internal cluster network is created with a gateway to the public network; the nodes do not have public IP addresses. Note that the floating IP is assigned to the master node.

Caution! If the provider does not support SecurityGroups, all applications running on nodes with assigned floating IPs will be available at a public IP address. For example, kube-apiserver on master nodes will be accessible on port 6443. To avoid this, we recommend using the SimpleWithInternalNetwork placement strategy.
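In configuration terms, the layout is selected via the layout parameter of OpenStackClusterConfiguration. A minimal sketch of the Standard layout settings, assuming an external network named public; all values are examples:

```yaml
apiVersion: deckhouse.io/v1
kind: OpenStackClusterConfiguration
layout: Standard
standard:
  internalNetworkCIDR: 192.168.199.0/24   # internal cluster network
  internalNetworkDNSServers:
    - 8.8.8.8
  internalNetworkSecurity: true           # manage SecurityGroups if the provider supports them
  externalNetworkName: public             # public network for the gateway and the floating IP
```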

SimpleWithInternalNetwork

[Layout resources diagram]

The master node and cluster nodes are connected to the existing network. This placement strategy might come in handy if you need to merge a Kubernetes cluster with existing VMs.

Caution!

This placement strategy does not involve the management of SecurityGroups (it is assumed they were created beforehand). To configure security policies, you must explicitly specify additionalSecurityGroups both in the OpenStackClusterConfiguration (for masterNodeGroup and the other nodeGroups) and when creating OpenStackInstanceClass resources in the cluster.
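For example, the same security groups can be listed in both places. A sketch under the assumption that groups named default and k8s-cluster were created beforehand:

```yaml
apiVersion: deckhouse.io/v1
kind: OpenStackClusterConfiguration
layout: SimpleWithInternalNetwork
simpleWithInternalNetwork:
  internalSubnetName: my-subnet            # hypothetical pre-existing subnet
masterNodeGroup:
  replicas: 1
  instanceClass:
    flavorName: m1.large
    imageName: ubuntu-22-04
    additionalSecurityGroups:              # groups are NOT created by Deckhouse
      - default
      - k8s-cluster
---
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
  name: worker
spec:
  flavorName: m1.large
  imageName: ubuntu-22-04
  additionalSecurityGroups:                # same pre-created groups
    - default
    - k8s-cluster
```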

Select preset

Preset is the structure of nodes in the cluster. There are several pre-defined presets.

Minimal

  • The cluster consists of one master node and one worker node.
  • Kubernetes Control Plane and Deckhouse controller run on the master node.
  • Deckhouse deploys other components (Ingress Controller, Prometheus, cert-manager, etc.) on the worker node.
  • Your applications should run on the worker node.

Multi-master

  • Highly Available Kubernetes Control Plane.
  • The cluster consists of three master nodes and two worker nodes.
  • Kubernetes Control Plane and Deckhouse controller run on the master nodes.
  • Deckhouse deploys other components (Ingress Controller, Prometheus, cert-manager, etc.) on the worker nodes.
  • Your applications should run on the worker nodes.

Recommended for production

  • Highly Available Kubernetes Control Plane.
  • The cluster consists of three master nodes, two system nodes, several frontend nodes, and one worker node.
  • Kubernetes Control Plane and Deckhouse controller run on the master nodes.
  • Deckhouse deploys system components (Prometheus, cert-manager, etc.) on the system nodes.
  • Deckhouse deploys Ingress Controller on the frontend nodes; the number of frontend nodes depends on the number of availability zones of the cloud provider.
  • Your applications should run on the worker node (see the NodeGroup sketch below).
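Each group of nodes in a preset is declared with a NodeGroup resource that references an instance class. A minimal sketch for a worker group, assuming the OpenStackInstanceClass named worker from the example above:

```yaml
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: CloudEphemeral                 # nodes are provisioned in the cloud
  cloudInstances:
    minPerZone: 1
    maxPerZone: 1
    classReference:
      kind: OpenStackInstanceClass
      name: worker
```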