Deckhouse Platform on VMware vSphere
Deckhouse Platform Enterprise Edition license key
The license key is used by Deckhouse components to access the geo-distributed container registry that stores all images used by Deckhouse.
The commands and configuration files on this page are generated using the license key you entered.
Request access
Fill out this form and we will send you access credentials via email.
Enter license key
The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:
config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials and instance types), and the initial cluster parameters.
resources.yml — a description of the resources to create after the installation (node descriptions, Ingress controller description, etc.).
Please pay attention to:
- highlighted parameters you must define.
- parameters you might want to change.
The other available cloud provider related options are described in the documentation.
To learn more about the Deckhouse Platform release channels, please see the relevant documentation.
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: vSphere
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.21"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access the Docker registry (generated automatically from your license key)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: VsphereClusterConfiguration
# pre-defined layout from Deckhouse
layout: Standard
# vCenter access parameters
provider:
  server: *!CHANGE_SERVER*
  username: *!CHANGE_USERNAME*
  password: *!CHANGE_PASSWORD*
  # Set to true if vCenter has a self-signed certificate,
  # otherwise set to false (or delete the line with the insecure parameter).
  insecure: *!CHANGE_INSECURE*
# path to the Folder in which VirtualMachines will be created
# the Folder itself will be created by the Deckhouse installer
vmFolderPath: *!CHANGE_FOLDER*
# region and zone tag category names
regionTagCategory: k8s-region
zoneTagCategory: k8s-zone
# names of the region and zone tags in which the cluster will operate
region: *!CHANGE_REGION_TAG_NAME*
zones:
- *!CHANGE_ZONE_TAG_NAME*
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
# name of the External Network that has access to the Internet
# IP addresses from the External Network are set as the ExternalIP of the Node object
# optional parameter
externalNetworkNames:
- *!CHANGE_NETWORK_NAME*
# name of the Internal Network that will be used for traffic between nodes
# IP addresses from the Internal Network are set as the InternalIP of the Node object
# optional parameter
internalNetworkNames:
- *!CHANGE_NETWORK_NAME*
# address space of the cluster's nodes
internalNetworkCIDR: 10.90.0.0/24
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be automatically deployed on all of them
  replicas: 1
  # parameters of the VM image
  instanceClass:
    numCPUs: 4
    memory: 8192
    rootDiskSize: 50
    # The name of the image created in step 4 at the "Building a VM image" stage,
    # taking into account the vCenter folder path. Example: "folder/my-ubuntu-packer-image".
    template: *!CHANGE_TEMPLATE_NAME*
    datastore: *!CHANGE_DATASTORE_NAME*
    # main network connected to the node
    mainNetwork: *!CHANGE_NETWORK_NAME*
    # additional networks connected to the node
    # optional parameter
    additionalNetworks:
    - *!CHANGE_NETWORK_NAME*
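The registryDockerCfg value in InitConfiguration is a Base64-encoded Docker registry config for registry.deckhouse.io. Normally it is generated for you; the sketch below shows how such a string can be built by hand. It assumes GNU base64 (on macOS, drop the -w0 flag), and the exact JSON layout should be verified against the Deckhouse documentation.

```shell
#!/bin/sh
# <YOUR_LICENSE_KEY> is a placeholder for your actual Deckhouse EE license key.
LICENSE_KEY="<YOUR_LICENSE_KEY>"

# Inner auth string: Base64 of "license-token:<key>" -- the same user/password
# pair used for `docker login` against registry.deckhouse.io.
AUTH="$(printf '%s' "license-token:${LICENSE_KEY}" | base64 -w0)"

# registryDockerCfg: Base64 of a minimal Docker auths JSON for the registry.
printf '{"auths":{"registry.deckhouse.io":{"auth":"%s"}}}' "$AUTH" | base64 -w0
echo
```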
Resources for the “Minimal” preset.
---
# section containing the parameters of the instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: worker
spec:
  numCPUs: 8
  memory: 16384
  # VM disk size
  # you might consider changing this
  rootDiskSize: 70
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: worker
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: worker
    # the maximum number of instances for the group in each zone
    maxPerZone: 1
    # the minimum number of instances for the group in each zone
    minPerZone: 1
    # list of availability zones to create instances in
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
  nodeType: CloudEphemeral
---
# section containing the parameters of the NGINX Ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the NGINX Ingress controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
    realIPHeader: X-Forwarded-For
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
  - operator: Exists
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to do kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash of the password <GENERATED_PASSWORD>, generated now
  # generate your own or use it at your own risk (for testing purposes)
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
Resources for the “Multi-master” preset.
---
# section containing the parameters of the instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: worker
spec:
  numCPUs: 8
  memory: 16384
  # VM disk size
  # you might consider changing this
  rootDiskSize: 70
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: worker
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: worker
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # the minimum number of instances for the group in each zone
    minPerZone: 2
    # list of availability zones to create instances in
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
    - *!CHANGE_ANOTHER_ZONE_TAG_NAME*
  nodeType: CloudEphemeral
---
# section containing the parameters of the instance class for frontend nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: frontend
spec:
  numCPUs: 4
  memory: 8192
  # VM disk size
  # you might consider changing this
  rootDiskSize: 50
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the frontend node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: frontend
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: frontend
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # the minimum number of instances for the group in each zone
    minPerZone: 2
    # list of availability zones to create instances in
    # you might consider changing this
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
    - *!CHANGE_ANOTHER_ZONE_TAG_NAME*
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/frontend: ""
    # similar to the .spec.taints field of the Node object
    # only the effect, key, and value fields are available
    taints:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: frontend
  nodeType: CloudEphemeral
---
# section containing the parameters of the NGINX Ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the NGINX Ingress controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
    realIPHeader: X-Forwarded-For
  # must match the label set on frontend nodes via nodeTemplate above
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
  tolerations:
  - operator: Exists
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to do kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash of the password <GENERATED_PASSWORD>, generated now
  # generate your own or use it at your own risk (for testing purposes)
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
Resources for the “Recommended for production” preset.
---
# section containing the parameters of the instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: worker
spec:
  numCPUs: 8
  memory: 16384
  # VM disk size
  # you might consider changing this
  rootDiskSize: 70
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: worker
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: worker
    # the maximum number of instances for the group in each zone
    maxPerZone: 1
    # the minimum number of instances for the group in each zone
    minPerZone: 1
    # list of availability zones to create instances in
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
    - *!CHANGE_ANOTHER_ZONE_TAG_NAME*
  nodeType: CloudEphemeral
---
# section containing the parameters of the instance class for system nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: system
spec:
  numCPUs: 8
  memory: 16384
  # VM disk size
  # you might consider changing this
  rootDiskSize: 100
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the system node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: system
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: system
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # the minimum number of instances for the group in each zone
    minPerZone: 1
    # list of availability zones to create instances in
    # you might consider changing this
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
    - *!CHANGE_ANOTHER_ZONE_TAG_NAME*
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/system: ""
    # similar to the .spec.taints field of the Node object
    # only the effect, key, and value fields are available
    taints:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: system
  nodeType: CloudEphemeral
---
# section containing the parameters of the instance class for frontend nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: VsphereInstanceClass
metadata:
  # name of the instance class
  name: frontend
spec:
  numCPUs: 4
  memory: 8192
  # VM disk size
  # you might consider changing this
  rootDiskSize: 50
  template: *!CHANGE_TEMPLATE_NAME*
---
# section containing the parameters of the frontend node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  # name of the node group
  name: frontend
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: VsphereInstanceClass
      name: frontend
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # the minimum number of instances for the group in each zone
    # (must not exceed maxPerZone)
    minPerZone: 2
    # list of availability zones to create instances in
    # you might consider changing this
    zones:
    - *!CHANGE_ZONE_TAG_NAME*
    - *!CHANGE_ANOTHER_ZONE_TAG_NAME*
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/frontend: ""
    # similar to the .spec.taints field of the Node object
    # only the effect, key, and value fields are available
    taints:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: frontend
  nodeType: CloudEphemeral
---
# section containing the parameters of the NGINX Ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the NGINX Ingress controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
    realIPHeader: X-Forwarded-For
  # must match the label set on frontend nodes via nodeTemplate above
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
  tolerations:
  - operator: Exists
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to do kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash of the password <GENERATED_PASSWORD>, generated now
  # generate your own or use it at your own risk (for testing purposes)
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
Use a Docker image to install the Deckhouse Platform. You need to pass the configuration files into the container, as well as the SSH keys for accessing the master nodes.
Run the installer on your personal computer:
echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
-v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash
Log in to the container image registry on your personal computer, using the license key as the password:
docker login -u license-token registry.deckhouse.io
Run a container with the installer:
docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"
Now, to start the installation, execute the following inside the container:
dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml
The --ssh-user parameter refers to the default user of the relevant VM image; it is ubuntu for the images suggested in this guide.
Notes:
- The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter saves the state of the Terraform installer to a temporary directory on the startup host, which allows the installation to continue correctly if the installer's container fails. If any problems occur, you can cancel the installation and remove all created objects with the following command (the configuration file must be the same one you used to initiate the installation):
dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml
After the installation is complete, you will be returned to the command line.
Almost everything is ready for a fully-fledged Deckhouse Platform to work!
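As a quick sanity check after the installer returns, you can inspect the Deckhouse pod from the master node, e.g. with kubectl -n d8-system get pods -l app=deckhouse (the d8-system namespace and app=deckhouse label are assumptions based on the standard Deckhouse layout; verify against the documentation). The hypothetical helper below shows what to look for: it reads kubectl get pods output and succeeds only when every pod is Running.

```shell
#!/bin/sh
# all_running: reads `kubectl get pods` output on stdin and succeeds only if
# every pod (every line after the header) reports STATUS "Running".
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# Example usage against a real cluster (hypothetical namespace/label):
#   kubectl -n d8-system get pods -l app=deckhouse | all_running && echo "Deckhouse is up"
```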