Deckhouse Platform on OpenStack
Deckhouse Platform Enterprise Edition license key
The license key is used by Deckhouse components to access the geo-distributed container registry, where all images used by Deckhouse are stored.
The commands and configuration files on this page are generated using the license key you entered.
The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:
config.yml
— a file with the configuration needed to bootstrap the cluster. Contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
resources.yml
— description of the resources that must be installed after the installation (nodes description, Ingress controller description, etc).
Please pay attention to:
- highlighted parameters you must define.
- parameters you might want to change.
The other available cloud provider related options are described in the documentation.
To learn more about the Deckhouse Platform release channels, please see the relevant documentation.
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
# type of the cloud provider
provider: OpenStack
# prefix to differentiate cluster objects (can be used, e.g., in routing)
prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.21"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
# address of the Docker registry where the Deckhouse images are located
imagesRepo: registry.deckhouse.io/deckhouse/ee
# a special string with your token to access Docker registry (generated automatically for your license token)
registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
# the release channel in use
releaseChannel: Stable
configOverrides:
global:
modules:
# template that will be used for system apps domains within the cluster
# e.g., Grafana for %s.example.com will be available as grafana.example.com
publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: OpenStackClusterConfiguration
# pre-defined layout from Deckhouse
layout: Standard
# standard layout specific settings
standard:
# network name for external communication
externalNetworkName: *!CHANGE_EXT_NET*
# addressing for the internal network of the cluster nodes
internalNetworkCIDR: 192.168.198.0/24
# a list of recursive DNS addresses of the internal network
# you might consider changing this
internalNetworkDNSServers:
- 8.8.8.8
- 8.8.4.4
# a flag that determines whether SecurityGroups and AllowedAddressPairs should be configured on internal network ports
internalNetworkSecurity: true
provider:
authURL: *!CHANGE_API_URL*
# you might consider changing this
domainName: users
password: *!CHANGE_PASSWORD*
# you might consider changing this
region: RegionOne
tenantID: *!CHANGE_PROJECT_ID*
username: *!CHANGE_USERNAME*
masterNodeGroup:
# number of replicas
# if more than 1 master node exists, control-plane will be automatically deployed on all master nodes
replicas: 1
# disk type
volumeTypeMap:
# <availability zone>: <volume type>
# you might consider changing this
DP1: dp1-high-iops
# Parameters of the VM image
instanceClass:
# flavor in use
# you might consider changing this
flavorName: Standard-2-8-50
# VM image in use
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
# disk size for the root FS
rootDiskSize: 40
# ssh public key for access to nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
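The pod, service, and internal node subnets must not overlap. The CIDRs from the file above can be checked before bootstrapping with a short script (a sketch using the default values from this page; substitute your own):

```python
import ipaddress
from itertools import combinations

# CIDRs taken from the configuration above; replace with your own values.
subnets = {
    "podSubnetCIDR": ipaddress.ip_network("10.111.0.0/16"),
    "serviceSubnetCIDR": ipaddress.ip_network("10.222.0.0/16"),
    "internalNetworkCIDR": ipaddress.ip_network("192.168.198.0/24"),
}

# Compare every pair of subnets and fail loudly on the first overlap.
for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
    if net_a.overlaps(net_b):
        raise SystemExit(f"{name_a} overlaps {name_b}")
print("no overlaps")  # → no overlaps
```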
The configuration files generated for the SimpleWithInternalNetwork layout are shown below:
config.yml
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
# type of the cloud provider
provider: OpenStack
# prefix to differentiate cluster objects (can be used, e.g., in routing)
prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.21"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
# address of the Docker registry where the Deckhouse images are located
imagesRepo: registry.deckhouse.io/deckhouse/ee
# a special string with your token to access Docker registry (generated automatically for your license token)
registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
# the release channel in use
releaseChannel: Stable
configOverrides:
global:
modules:
# template that will be used for system apps domains within the cluster
# e.g., Grafana for %s.example.com will be available as grafana.example.com
publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: OpenStackClusterConfiguration
# pre-defined layout from Deckhouse
layout: SimpleWithInternalNetwork
# SimpleWithInternalNetwork layout specific settings
simpleWithInternalNetwork:
# the name of the subnet in which the cluster nodes will run
internalSubnetName: *!CHANGE_INTERNAL_NET*
# defines the way traffic is organized on the network that is used for communication between Pods
# direct routing works between nodes; port security stays enabled in this mode
podNetworkMode: DirectRoutingWithPortSecurityEnabled
# network name for external communication
externalNetworkName: *!CHANGE_EXT_NET*
# whether to assign a floating IP from the external network to the master node
masterWithExternalFloatingIP: true
# cloud access parameters
provider:
authURL: *!CHANGE_API_URL*
# you might consider changing this
domainName: users
password: *!CHANGE_PASSWORD*
# you might consider changing this
region: RegionOne
tenantID: *!CHANGE_TENANT_ID*
username: *!CHANGE_USERNAME*
masterNodeGroup:
# number of replicas
# if more than 1 master node exists, control-plane will be automatically deployed on all master nodes
replicas: 1
# disk type
volumeTypeMap:
# <availability zone>: <volume type>
# you might consider changing this
DP1: dp1-high-iops
# Parameters of the VM image
instanceClass:
# flavor in use
# you might consider changing this
flavorName: Standard-2-8-50
# VM image in use
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
# disk size for the root FS
rootDiskSize: 40
# ssh public key for access to nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
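The registryDockerCfg value in the configurations above is a Base64-encoded Docker credentials object for registry.deckhouse.io. If you ever need to build it manually from a license token, it can be assembled roughly like this (a sketch; the token shown is hypothetical, and the exact format should be verified against the Deckhouse documentation):

```python
import base64
import json

def registry_docker_cfg(license_token: str,
                        registry: str = "registry.deckhouse.io") -> str:
    # Docker-style auth string: "<user>:<password>", Base64-encoded.
    auth = base64.b64encode(f"license-token:{license_token}".encode()).decode()
    # Standard Docker config.json "auths" structure, then Base64-encode the whole object.
    cfg = {"auths": {registry: {"auth": auth}}}
    return base64.b64encode(json.dumps(cfg).encode()).decode()

# "t0ken" is a placeholder for illustration only.
print(registry_docker_cfg("t0ken"))
```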
Resources for the “Minimal” preset.
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
# name of instance class
name: worker
spec:
# flavor in use for this instance class
# you might consider changing this
flavorName: Standard-2-4-50
rootDiskSize: 30
# VM image in use
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
---
# section containing the parameters of worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
# name of node group
name: worker
spec:
nodeType: CloudEphemeral
# parameters for provisioning the cloud-based VMs
cloudInstances:
# the reference to the InstanceClass object
classReference:
kind: OpenStackInstanceClass
name: worker
# the maximum number of instances for the group in each zone
maxPerZone: 1
# the minimum number of instances for the group in each zone
minPerZone: 1
# list of availability zones to create instances in
# you might consider changing this
zones:
- DP1
disruptions:
approvalMode: Automatic
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
# the name of the Ingress class to use with the Ingress nginx controller
ingressClass: nginx
# the way traffic goes to cluster from the outer network
inlet: LoadBalancer
# describes on which nodes the component will be located. Label node.deckhouse.io/group: <NAME_GROUP_NAME> is set automatically.
nodeSelector:
node.deckhouse.io/group: worker
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
# user e-mail
email: admin@example.com
# this is a hash of the password <GENERATED_PASSWORD>, generated now
# generate your own or use it at your own risk (for testing purposes)
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
Resources for the “Multi-master” preset.
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
# name of instance class
name: worker
spec:
# flavor in use for this instance class
# you might consider changing this
flavorName: Standard-2-4-50
rootDiskSize: 30
# VM image in use
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
---
# section containing the parameters of worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
# name of node group
name: worker
spec:
nodeType: CloudEphemeral
# parameters for provisioning the cloud-based VMs
cloudInstances:
# the reference to the InstanceClass object
classReference:
kind: OpenStackInstanceClass
name: worker
# the maximum number of instances for the group in each zone
maxPerZone: 2
# the minimum number of instances for the group in each zone
minPerZone: 2
# list of availability zones to create instances in
# you might consider changing this
zones:
- DP1
disruptions:
approvalMode: Automatic
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
# the name of the Ingress class to use with the Ingress nginx controller
ingressClass: nginx
# the way traffic goes to cluster from the outer network
inlet: LoadBalancer
# describes on which nodes the component will be located. Label node.deckhouse.io/group: <NAME_GROUP_NAME> is set automatically.
nodeSelector:
node.deckhouse.io/group: worker
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
# user e-mail
email: admin@example.com
# this is a hash of the password <GENERATED_PASSWORD>, generated now
# generate your own or use it at your own risk (for testing purposes)
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
Resources for the “Recommended for production” preset.
# section containing the parameters of instance class for system nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
# name of instance class
name: system
spec:
# flavor in use for this instance class
# you might consider changing this
flavorName: Standard-2-4-50
rootDiskSize: 30
# VM image in use
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
---
# section containing the parameters of system node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: system
spec:
nodeType: CloudEphemeral
# parameters for provisioning the cloud-based VMs
cloudInstances:
# the reference to the InstanceClass object
classReference:
kind: OpenStackInstanceClass
name: system
# the maximum number of instances for the group in each zone
maxPerZone: 2
# the minimum number of instances for the group in each zone
minPerZone: 2
# list of availability zones to create instances in
# you might consider changing this
zones:
- DP1
disruptions:
approvalMode: Automatic
nodeTemplate:
# similar to the standard metadata.labels field
labels:
node-role.deckhouse.io/system: ""
# similar to the .spec.taints field of the Node object
# only effect, key, value fields are available
taints:
- effect: NoExecute
key: dedicated.deckhouse.io
value: system
---
# section containing the parameters of instance class for frontend nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
# name of instance class
name: frontend
spec:
# you might consider changing this
flavorName: Standard-2-4-50
rootDiskSize: 30
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
---
# section containing the parameters of frontend node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: frontend
spec:
nodeType: CloudEphemeral
cloudInstances:
classReference:
kind: OpenStackInstanceClass
name: frontend
maxPerZone: 2
minPerZone: 2
zones:
- DP1
disruptions:
approvalMode: Automatic
nodeTemplate:
labels:
node-role.deckhouse.io/frontend: ""
taints:
- effect: NoExecute
key: dedicated.deckhouse.io
value: frontend
---
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: OpenStackInstanceClass
metadata:
name: worker
spec:
# you might consider changing this
flavorName: Standard-2-4-50
rootDiskSize: 30
# you might consider changing this
imageName: ubuntu-18-04-cloud-amd64
---
# section containing the parameters of worker node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: worker
spec:
nodeType: CloudEphemeral
cloudInstances:
classReference:
kind: OpenStackInstanceClass
name: worker
maxPerZone: 1
minPerZone: 1
# you might consider changing this
zones:
- DP1
disruptions:
approvalMode: Automatic
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
# the name of the Ingress class to use with the Ingress nginx controller
ingressClass: nginx
# the way traffic goes to cluster from the outer network
inlet: LoadBalancer
# describes on which nodes the component will be located
nodeSelector:
node-role.deckhouse.io/frontend: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
# user e-mail
email: admin@example.com
# this is a hash of the password <GENERATED_PASSWORD>, generated now
# generate your own or use it at your own risk (for testing purposes)
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
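resources.yml is a multi-document YAML file. Before passing it to the installer, you might run a quick structural sanity check; the sketch below only verifies that every document declares apiVersion and kind, using nothing but string operations (it is not a YAML parser):

```python
# Minimal multi-document sanity check; not a full YAML parser.
def check_resources(text: str) -> list[str]:
    kinds = []
    for i, doc in enumerate(text.split("\n---\n")):
        # Drop blank lines and comments before inspecting top-level keys.
        lines = [l for l in doc.splitlines()
                 if l.strip() and not l.lstrip().startswith("#")]
        keys = {l.split(":", 1)[0].strip() for l in lines if ":" in l}
        if not {"apiVersion", "kind"} <= keys:
            raise ValueError(f"document {i} is missing apiVersion or kind")
        kinds.extend(l.split(":", 1)[1].strip()
                     for l in lines if l.startswith("kind:"))
    return kinds

sample = """apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
"""
print(check_resources(sample))  # → ['NodeGroup', 'IngressNginxController']
```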
Use a Docker image to install the Deckhouse Platform. You will need to pass the configuration files, as well as the SSH keys for accessing the master nodes, into the container.
Run the installer on your personal computer:
echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
-v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash
On Windows, log in to the container image registry on your personal computer, providing the license key as the password:
docker login -u license-token registry.deckhouse.io
Run a container with the installer:
docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"
Now, to initiate the process of installation, you need to execute inside the container:
dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml
The --ssh-user parameter here refers to the default user for the relevant VM image; it is ubuntu for the images suggested in this guide.
Notes:
- The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter saves the state of the Terraform installer to a temporary directory on the startup host, which allows the installation to continue correctly if the installer's container fails. If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file should be the same one you used to initiate the installation):
dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml
After the installation is complete, you will be returned to the command line.
Almost everything is ready for a fully-fledged Deckhouse Platform to work!