Deckhouse Platform in Microsoft Azure

Caution! Only regions that support Availability Zones can be used.

Configure cluster

Enter a domain name template containing %s, e.g., %s.domain.my or %s-kube.domain.my. Please don't use the example.com domain name. This template is used for the domains of system applications within the cluster, e.g., with the %s.domain.my template Grafana will be available at grafana.domain.my.
This tutorial assumes the use of a public domain pointing to a public cluster address; it is required for obtaining Let's Encrypt certificates for Deckhouse services. If you use existing certificates (including self-signed ones), change the global settings in the modules.https section.
We recommend using the nip.io service (or similar) for testing if wildcard DNS records are unavailable to you for some reason.
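For reference, below is a minimal sketch of how the domain template and HTTPS mode might be expressed in the global module settings, assuming a Deckhouse release that uses ModuleConfig resources; the %s.domain.my template and the issuer name are placeholders for your environment.

```yaml
# Hypothetical "global" ModuleConfig; adjust the template and issuer to your environment.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      # Domain template for system applications (Grafana, Dashboard, etc.).
      publicDomainTemplate: "%s.domain.my"
      https:
        # CertManager obtains Let's Encrypt certificates;
        # use CustomCertificate if you bring your own (including self-signed) certificates.
        mode: CertManager
        certManager:
          clusterIssuerName: letsencrypt
```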
This prefix is used for naming cluster objects created by Deckhouse (virtual machines, networks, security policies, etc.).
This SSH public key is passed to the cloud provider during the virtual machine creation process.
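For illustration, here is a minimal sketch of where the prefix is set, assuming the ClusterConfiguration resource from the Deckhouse installation config; all values are placeholders, and the SSH key is set in the provider-specific AzureClusterConfiguration sketched in the layout section below.

```yaml
# Hypothetical ClusterConfiguration fragment (values are placeholders).
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: Azure
  # Prefix used for naming cluster objects created by Deckhouse.
  prefix: cloud-demo
podSubnetCIDR: 10.111.0.0/16
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
```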

Select layout

Layout is the way resources are located in the cloud. There are several pre-defined layouts; for Azure, the Standard layout described below is used.

  • A separate resource group is created for the cluster.
  • By default, one external IP address is dynamically allocated to each instance (it is used for Internet access only). Each IP has 64000 ports available for SNAT.
  • The NAT Gateway is supported. With it, you can use static public IP addresses for SNAT.
  • Public IP addresses can be assigned to master nodes and nodes created by Terraform.
  • If the master does not have a public IP, then an additional instance with a public IP (aka bastion host) is required for installation tasks and access to the cluster. In this case, you will also need to configure peering between the cluster’s VNet and bastion’s VNet.
  • Peering can also be configured between the cluster VNet and other VNets (see the configuration sketch after this list).
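To show how these options fit together, here is a hedged sketch of an AzureClusterConfiguration for the Standard layout; the machine size, image URN, CIDRs, VNet names, and credentials are placeholders, and the exact set of fields depends on your Deckhouse release.

```yaml
# Hedged AzureClusterConfiguration sketch (all values are placeholders).
apiVersion: deckhouse.io/v1
kind: AzureClusterConfiguration
layout: Standard
# Public SSH key passed to the cloud provider when virtual machines are created.
sshPublicKey: "ssh-rsa AAAA... user@host"
vNetCIDR: 10.50.0.0/16
subnetCIDR: 10.50.0.0/24
standard:
  # Use a static public IP (NAT Gateway) for SNAT instead of per-instance dynamic IPs.
  natGatewayPublicIpCount: 1
masterNodeGroup:
  replicas: 1
  instanceClass:
    machineSize: Standard_F4
    urn: Canonical:UbuntuServer:18.04-LTS:18.04.202010140
    # Assign a public IP to the master; without it, a bastion host is required for installation.
    enableExternalIP: true
peeredVNets:
  # Peering between the cluster VNet and other VNets (e.g., a bastion VNet).
  - resourceGroupName: kube-bastion
    vnetName: kube-bastion-vnet
provider:
  subscriptionId: "..."
  clientId: "..."
  clientSecret: "..."
  tenantId: "..."
  location: westeurope
```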

Select preset

A preset defines the structure of nodes in the cluster. There are several pre-defined presets, described below.

Minimal

  • The cluster consists of one master node and one worker node.
  • Kubernetes Control Plane and the Deckhouse controller run on the master node.
  • Deckhouse deploys other components (Ingress Controller, Prometheus, cert-manager, etc.) on the worker node.
  • Your applications should run on the worker node.

Multi-master

  • Highly Available Kubernetes Control Plane.
  • The cluster consists of three master nodes and two worker nodes.
  • Kubernetes Control Plane and the Deckhouse controller run on the master nodes.
  • Deckhouse deploys other components (Ingress Controller, Prometheus, cert-manager, etc.) on the worker nodes.
  • Your applications should run on the worker nodes.

Recommended for production

  • Highly Available Kubernetes Control Plane.
  • The cluster consists of three master nodes, two system nodes, several frontend nodes, and one worker node.
  • Kubernetes Control Plane and the Deckhouse controller run on the master nodes.
  • Deckhouse deploys system components (Prometheus, cert-manager, etc.) on the system nodes (see the NodeGroup sketch after this list).
  • Deckhouse deploys the Ingress Controller on the frontend nodes. The number of frontend nodes depends on the number of availability zones in the cloud provider.
  • Your applications should run on the worker node.
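To give a feel for how a preset maps onto configuration, below is a hedged sketch of additional node groups declared as Deckhouse NodeGroup and AzureInstanceClass resources; the names, sizes, and per-zone counts are placeholders, and the system label/taint follows the convention Deckhouse uses for dedicated nodes.

```yaml
# Hedged examples of additional node groups (names and sizes are placeholders).
apiVersion: deckhouse.io/v1
kind: AzureInstanceClass
metadata:
  name: worker
spec:
  machineSize: Standard_F4
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: CloudEphemeral
  cloudInstances:
    classReference:
      kind: AzureInstanceClass
      name: worker
    minPerZone: 1
    maxPerZone: 1
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system
spec:
  nodeType: CloudEphemeral
  cloudInstances:
    classReference:
      kind: AzureInstanceClass
      name: worker   # reusing the same instance class for brevity
    minPerZone: 1
    maxPerZone: 1
  nodeTemplate:
    # Dedicate these nodes to Deckhouse system components.
    labels:
      node-role.deckhouse.io/system: ""
    taints:
      - key: dedicated.deckhouse.io
        value: system
        effect: NoExecute
```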