Deckhouse Kubernetes Platform on VMware Cloud Director

The recommended settings for a Deckhouse Kubernetes Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud-provider-related parameters (such as credentials and the instance type), and the initial cluster parameters.
  • resources.yml — a description of the resources to create after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Create the config.yml file.

# General cluster parameters.
# https://deckhouse.io/documentation/v1/installing/configuration.html#clusterconfiguration
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: VCD
  # A prefix of objects that are created in the cloud during the installation.
  # You might consider changing this.
  prefix: cloud-demo
# Address space of the cluster's Pods.
podSubnetCIDR: 10.111.0.0/16
# Address space of the cluster's services.
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
clusterDomain: "cluster.local"
---
# Settings for bootstrapping the Deckhouse cluster
# https://deckhouse.io/documentation/v1/installing/configuration.html#initconfiguration
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  imagesRepo: registry.deckhouse.ru/deckhouse/ee
  # A special string with your token to access the Docker registry (generated automatically from your license key).
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
---
# Deckhouse module settings.
# https://deckhouse.io/documentation/v1/modules/002-deckhouse/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  enabled: true
  settings:
    bundle: Default
    releaseChannel: Stable
    logLevel: Info
---
# Global Deckhouse settings.
# https://deckhouse.ru/documentation/v1/deckhouse-configure-global.html#parameters
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      # Template that will be used for system application domains within the cluster.
      # E.g., Grafana for %s.example.com will be available as 'grafana.example.com'.
      # The domain MUST NOT match the one specified in the clusterDomain parameter of the ClusterConfiguration resource.
      # You can change it to your own or follow the steps in the guide and change it after installation.
      publicDomainTemplate: "%s.example.com"
---
# user-authn module settings.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: user-authn
spec:
  version: 1
  enabled: true
  settings:
    controlPlaneConfigurator:
      dexCAMode: DoNotNeed
    # Enabling access to the API server through Ingress.
    # https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html#parameters-publishapi
    publishAPI:
      enable: true
      https:
        mode: Global
        global:
          kubeconfigGeneratorMasterCA: ""
---
# cni-cilium module settings.
# https://deckhouse.io/documentation/v1/modules/021-cni-cilium/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: cni-cilium
spec:
  enabled: true
---
# metallb module settings.
# https://deckhouse.io/documentation/v1/modules/380-metallb/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: metallb
spec:
  version: 1
  enabled: true
  settings:
    # MetalLB pool settings for incoming traffic.
    # https://deckhouse.io/documentation/v1/modules/380-metallb/configuration.html#parameters-addresspools
    addressPools:
    - addresses:
      - *!CHANGE_SUBNET_METALLB_POOL*.10/32
      name: frontend-pool
      protocol: layer2
    # Speaker placement on frontend hosts.
    # https://deckhouse.io/documentation/v1/modules/380-metallb/configuration.html#parameters-speaker
    speaker:
      nodeSelector:
        node-role.deckhouse.io/frontend: ""
      tolerations:
      - effect: NoExecute
        key: dedicated.deckhouse.io
        operator: Equal
        value: frontend
---
# Cloud provider settings.
# https://deckhouse.io/documentation/v1/modules/030-cloud-provider-vcd/cluster_configuration.html
apiVersion: deckhouse.io/v1
internalNetworkCIDR: 10.15.11.0/24
kind: VCDClusterConfiguration
layout: Standard
masterNodeGroup:
  instanceClass:
    etcdDiskSizeGb: 10
    # List of IP addresses for the control-plane nodes.
    # We recommend using the .2, .3, and .4 addresses, for example: 10.15.11.2.
    mainNetworkIPAddresses:
    - *!CHANGE_SUBNET*.2
    rootDiskSizeGb: 40
    sizingPolicy: *!CHANGE_SIZING_POLICY*
    storageProfile: *!CHANGE_STORAGE_PROFILE*
    # The name of the template, including the vCloud Director catalog path.
    # Example: "catalog/ubuntu-jammy-22.04".
    template: *!CHANGE_TEMPLATE_NAME*
  replicas: 1
nodeGroups:
- instanceClass:
    mainNetworkIPAddresses:
    - *!CHANGE_SUBNET*.11
    rootDiskSizeGb: 40
    sizingPolicy: *!CHANGE_SIZING_POLICY*
    storageProfile: *!CHANGE_STORAGE_PROFILE*
    # The name of the template, including the vCloud Director catalog path.
    # Example: "catalog/ubuntu-jammy-22.04".
    template: *!CHANGE_TEMPLATE_NAME*
  name: frontend
  replicas: 1
  nodeTemplate:
    labels:
      node-role.deckhouse.io/frontend: ""
    taints:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: frontend
# vCloud Director API access parameters
provider:
  server: *!CHANGE_SERVER*
  username: *!CHANGE_USERNAME*
  password: *!CHANGE_PASSWORD*
  # Set to true if vCloud Director uses a self-signed certificate;
  # otherwise, set it to false (or remove the line with the insecure parameter).
  insecure: *!CHANGE_INSECURE*
organization: *!CHANGE_ORG*
virtualApplicationName: *!CHANGE_VAPP*
virtualDataCenter: *!CHANGE_DC*
# Internal network of the nodes.
mainNetwork: *!CHANGE_MAIN_NETWORK*
# Public SSH key for accessing cloud nodes.
# This key will be added to the user on created nodes (the user name depends on the image used).
sshPublicKey: *!CHANGE_SSH_KEY*
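
The registryDockerCfg value is a base64-encoded Docker auths object that grants access to the Deckhouse container registry. The Getting Started page generates it from your license key automatically; below is a sketch of how the same string can be assembled by hand, assuming a Linux shell and substituting your actual license key for <LICENSE_KEY>:

# Encode the "license-token:<license key>" credentials pair.
AUTH="$(echo -n 'license-token:<LICENSE_KEY>' | base64 -w0)"
# Wrap it into a Docker auths object for the registry and base64-encode the result.
echo -n "{\"auths\":{\"registry.deckhouse.ru\":{\"auth\":\"$AUTH\"}}}" | base64 -w0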

Create the resources.yml file.

# Section containing the parameters of instance class for worker nodes.
# https://deckhouse.io/documentation/v1/modules/030-cloud-provider-vcd/cr.html
apiVersion: deckhouse.io/v1
kind: VCDInstanceClass
metadata:
  name: worker
spec:
  rootDiskSizeGb: 50
  sizingPolicy: *!CHANGE_SIZING_POLICY*
  storageProfile: *!CHANGE_STORAGE_PROFILE*
  # The name of the template without the vCloud Director catalog path.
  # Example: "ubuntu-jammy-22.04".
  template: *!CHANGE_TEMPLATE_NAME*
---
# Section containing the parameters of worker node group.
# https://deckhouse.io/documentation/v1/modules/040-node-manager/cr.html#nodegroup
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: VCDInstanceClass
      name: worker
    maxPerZone: 2
    maxSurgePerZone: 0
    maxUnavailablePerZone: 0
    minPerZone: 1
  nodeType: CloudEphemeral
---
# Section containing the parameters of NGINX Ingress controller.
# https://deckhouse.io/documentation/v1/modules/402-ingress-nginx/cr.html
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  # The way traffic goes to the cluster from the outer network.
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
    realIPHeader: X-Forwarded-For
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - operator: Exists
---
# RBAC and authorization settings.
# https://deckhouse.io/documentation/v1/modules/140-user-authz/cr.html#clusterauthorizationrule
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  subjects:
  - kind: User
    name: admin@deckhouse.io
  accessLevel: SuperAdmin
  portForwarding: true
---
# Parameters of the static user.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/cr.html#user
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # User e-mail.
  email: admin@deckhouse.io
  # This is a hash of the <GENERATED_PASSWORD> password generated when the Getting Started page was loaded.
  # Generate your own or use this one at your own risk (for testing purposes):
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
  # You might consider changing this.
  password: <GENERATED_PASSWORD_HASH>
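
The password field holds a base64-encoded bcrypt hash rather than the plain-text password. To produce a value for a password of your own, run the command from the comment above locally (assuming htpasswd from apache2-utils is installed; replace <MY_PASSWORD> with the desired password):

# Bcrypt-hash the password (cost 10), strip the empty user-name prefix,
# and base64-encode the result for the password field of the User resource.
echo "<MY_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0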

Use a Docker image to install the Deckhouse Kubernetes Platform. You will need to pass the configuration files into the container, as well as the SSH keys for accessing the master node (below, it is assumed that the ~/.ssh/id_rsa SSH key is used).

Run the installer on your personal computer.

Linux / macOS:

echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
  -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash

Windows:

Log in to the container image registry on your personal computer, providing the license key as the password:

docker login -u license-token registry.deckhouse.io

Run a container with the installer:

docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl"  registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"

To initiate the installation, execute the following command inside the container:

dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml

The --ssh-user parameter here refers to the default user for the relevant VM image. It is ubuntu for the image suggested in this guide.

If the installation was interrupted (for example, due to insufficient quotas or network errors), you can restart it: the installation will resume correctly, and no duplicate resources will be created in the cloud.
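
To resume, re-run the same bootstrap command inside the installer container; the dhctl-tmp directory mounted at /tmp/dhctl is what preserves the installer state between runs:

dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml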

If the installation failed and you need to delete the resources created in the cloud, run the following command:

  dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml

The installation process may take from 5 to 30 minutes, depending on the connection speed.

After the installation is complete, the installer will output the IP address of the master node (you will need it later). Example output:

...
┌ 🎈 ~ Common: Kubernetes Master Node addresses for SSH
│ cloud-demo-master-0 | ssh ubuntu@1.2.3.4
└ 🎈 ~ Common: Kubernetes Master Node addresses for SSH (0.00 seconds)
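
You can then connect to the master node using the address from the output and check the cluster state. A sketch, assuming the example address above and the standard kubectl location on Deckhouse master nodes:

# Log in to the master node as the image's default user.
ssh ubuntu@1.2.3.4
# On the master node, verify that all nodes have registered in the cluster.
sudo /opt/deckhouse/bin/kubectl get nodes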

Almost everything is ready for a fully-fledged Deckhouse Kubernetes Platform to work!