Deckhouse Kubernetes Platform in Amazon AWS

Select the Deckhouse Kubernetes Platform revision

The recommended settings for a Deckhouse Kubernetes Platform Community Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud-provider-related parameters (such as credentials, the instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources to create once the installation is complete (node descriptions, the Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Create the config.yml file.

# General cluster parameters.
# https://deckhouse.io/documentation/v1/installing/configuration.html#clusterconfiguration
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: AWS
  # A prefix of objects that are created in the cloud during the installation.
  # You might consider changing this.
  prefix: cloud-demo
# Address space of the cluster's Pods.
podSubnetCIDR: 10.111.0.0/16
# Address space of the cluster's services.
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
# Cluster domain (used for local routing).
clusterDomain: "cluster.local"
---
# Deckhouse module settings.
# https://deckhouse.io/documentation/v1/modules/002-deckhouse/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  enabled: true
  settings:
    bundle: Default
    # Deckhouse release channel. The Early Access channel is stable enough to be used in production environments.
    # If you plan to use several clusters, it is recommended to use different release channels on them.
    # More info: https://deckhouse.io/documentation/v1/deckhouse-release-channels.html
    releaseChannel: EarlyAccess
    logLevel: Info
---
# Global Deckhouse settings.
# https://deckhouse.ru/documentation/v1/deckhouse-configure-global.html#parameters
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      # Template that will be used for system apps domains within the cluster.
      # E.g., Grafana for %s.example.com will be available as 'grafana.example.com'.
      # The domain MUST NOT match the one specified in the clusterDomain parameter of the ClusterConfiguration resource.
      # You can change it to your own or follow the steps in the guide and change it after installation.
      publicDomainTemplate: "%s.example.com"
---
# user-authn module settings.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: user-authn
spec:
  version: 1
  enabled: true
  settings:
    controlPlaneConfigurator:
      dexCAMode: DoNotNeed
    # Enabling access to the API server through Ingress.
    # https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html#parameters-publishapi
    publishAPI:
      enable: true
      https:
        mode: Global
        global:
          kubeconfigGeneratorMasterCA: ""
---
# cni-cilium module settings.
# https://deckhouse.io/documentation/v1/modules/021-cni-cilium/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: cni-cilium
spec:
  version: 1
  # Enable cni-cilium module
  enabled: true
  settings:
    # cni-cilium module settings
    # https://deckhouse.io/documentation/v1/modules/021-cni-cilium/configuration.html
    tunnelMode: VXLAN
---
# Cloud provider settings.
# https://deckhouse.io/documentation/v1/modules/030-cloud-provider-aws/cluster_configuration.html
apiVersion: deckhouse.io/v1
kind: AWSClusterConfiguration
layout: WithoutNAT
# AWS EC2 access parameters.
provider:
  providerAccessKeyId: *!CHANGE_MYACCESSKEY*
  providerSecretAccessKey: *!CHANGE_mYsEcReTkEy*
  # Cluster region.
  # You might consider changing this.
  region: eu-central-1
masterNodeGroup:
  replicas: 1
  instanceClass:
    # Master node VM disk size.
    # You might consider changing this.
    diskSizeGb: 30
    # Master node VM disk type to use.
    # You might consider changing this.
    diskType: gp3
    # Type of the instance.
    # You might consider changing this.
    instanceType: c5.xlarge
    # Amazon Machine Image ID.
    # The example uses the Ubuntu Server 22.04 image for the 'eu-central-1' region.
    # Change the AMI ID if you use a different region (the 'provider.region' parameter).
    # AMI Catalog in the AWS console: EC2 -> AMI Catalog.
    # You might consider changing this.
    ami: ami-0caef02b518350c8b
# Address space of the AWS cloud.
vpcNetworkCIDR: "10.241.0.0/16"
# Address space of the cluster's nodes.
nodeNetworkCIDR: "10.241.32.0/20"
# Public SSH key for accessing cloud nodes.
# This key will be added to the user on created nodes (the user name depends on the image used).
sshPublicKey: *!CHANGE_SSH_KEY*
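If you deploy to a region other than eu-central-1, you can look up a matching Ubuntu 22.04 AMI with the AWS CLI instead of browsing the AMI Catalog. A minimal sketch, assuming the AWS CLI is installed and configured with the same credentials (099720109477 is Canonical's account ID; <YOUR_REGION> is a placeholder):

# Prints the newest Ubuntu 22.04 (jammy) amd64 AMI ID published by Canonical in the given region.
aws ec2 describe-images --region <YOUR_REGION> \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text

Put the printed AMI ID into the ami parameter and the region into provider.region.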

The recommended settings for a Deckhouse Kubernetes Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud-provider-related parameters (such as credentials, the instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources to create once the installation is complete (node descriptions, the Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Create the config.yml file.

# General cluster parameters.
# https://deckhouse.io/documentation/v1/installing/configuration.html#clusterconfiguration
apiVersion: deckhouse.io/v1
kind: ClusterConfiguration
clusterType: Cloud
cloud:
  provider: AWS
  # A prefix of objects that are created in the cloud during the installation.
  # You might consider changing this.
  prefix: cloud-demo
# Address space of the cluster's Pods.
podSubnetCIDR: 10.111.0.0/16
# Address space of the cluster's services.
serviceSubnetCIDR: 10.222.0.0/16
kubernetesVersion: "Automatic"
# Cluster domain (used for local routing).
clusterDomain: "cluster.local"
---
# Settings for bootstrapping the Deckhouse cluster.
# https://deckhouse.io/documentation/v1/installing/configuration.html#initconfiguration
apiVersion: deckhouse.io/v1
kind: InitConfiguration
deckhouse:
  # Address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # A special string with your token to access Docker registry (generated automatically for your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
---
# Deckhouse module settings.
# https://deckhouse.io/documentation/v1/modules/002-deckhouse/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  enabled: true
  settings:
    bundle: Default
    releaseChannel: Stable
    logLevel: Info
---
# Global Deckhouse settings.
# https://deckhouse.ru/documentation/v1/deckhouse-configure-global.html#parameters
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  version: 1
  settings:
    modules:
      # Template that will be used for system apps domains within the cluster.
      # E.g., Grafana for %s.example.com will be available as 'grafana.example.com'.
      # The domain MUST NOT match the one specified in the clusterDomain parameter of the ClusterConfiguration resource.
      # You can change it to your own or follow the steps in the guide and change it after installation.
      publicDomainTemplate: "%s.example.com"
---
# user-authn module settings.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: user-authn
spec:
  version: 1
  enabled: true
  settings:
    controlPlaneConfigurator:
      dexCAMode: DoNotNeed
    # Enabling access to the API server through Ingress.
    # https://deckhouse.io/documentation/v1/modules/150-user-authn/configuration.html#parameters-publishapi
    publishAPI:
      enable: true
      https:
        mode: Global
        global:
          kubeconfigGeneratorMasterCA: ""
---
# cni-cilium module settings.
# https://deckhouse.io/documentation/v1/modules/021-cni-cilium/configuration.html
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: cni-cilium
spec:
  version: 1
  # Enable cni-cilium module
  enabled: true
  settings:
    # cni-cilium module settings
    # https://deckhouse.io/documentation/v1/modules/021-cni-cilium/configuration.html
    tunnelMode: VXLAN
---
# Cloud provider settings.
# https://deckhouse.io/documentation/v1/modules/030-cloud-provider-aws/cluster_configuration.html
apiVersion: deckhouse.io/v1
kind: AWSClusterConfiguration
layout: WithoutNAT
# AWS EC2 access parameters.
provider:
  providerAccessKeyId: *!CHANGE_MYACCESSKEY*
  providerSecretAccessKey: *!CHANGE_mYsEcReTkEy*
  # Cluster region.
  # You might consider changing this.
  region: eu-central-1
masterNodeGroup:
  replicas: 1
  instanceClass:
    # Master node VM disk size.
    # You might consider changing this.
    diskSizeGb: 30
    # Master node VM disk type to use.
    # You might consider changing this.
    diskType: gp3
    # Type of the instance.
    # You might consider changing this.
    instanceType: c5.xlarge
    # Amazon Machine Image ID.
    # The example uses the Ubuntu Server 22.04 image for the 'eu-central-1' region.
    # Change the AMI ID if you use a different region (the 'provider.region' parameter).
    # AMI Catalog in the AWS console: EC2 -> AMI Catalog.
    # You might consider changing this.
    ami: ami-0caef02b518350c8b
# Address space of the AWS cloud.
vpcNetworkCIDR: "10.241.0.0/16"
# Address space of the cluster's nodes.
nodeNetworkCIDR: "10.241.32.0/20"
# Public SSH key for accessing cloud nodes.
# This key will be added to the user on created nodes (the user name depends on the image used).
sshPublicKey: *!CHANGE_SSH_KEY*
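The registryDockerCfg access string in the InitConfiguration above is generated for you from the license token, so you normally paste it as-is. For reference only, a sketch of the shape such a string takes — a base64-encoded Docker auth config — assuming Bash and GNU coreutils (the <LICENSE_TOKEN> placeholder stays as-is):

# Builds a base64-encoded Docker auth config for registry.deckhouse.io.
LICENSE_TOKEN="<LICENSE_TOKEN>"
AUTH="$(echo -n "license-token:${LICENSE_TOKEN}" | base64 -w0)"
echo -n "{\"auths\":{\"registry.deckhouse.io\":{\"username\":\"license-token\",\"password\":\"${LICENSE_TOKEN}\",\"auth\":\"${AUTH}\"}}}" | base64 -w0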

Create the resources.yml file.

# Section containing the parameters of instance class for worker nodes.
# https://deckhouse.io/documentation/v1/modules/030-cloud-provider-aws/cr.html
apiVersion: deckhouse.io/v1
kind: AWSInstanceClass
metadata:
  name: worker
spec:
  # VM disk size.
  # You might consider changing this.
  diskSizeGb: 30
  # VM disk type to use.
  # You might consider changing this.
  diskType: gp3
  # Type of the instance.
  # You might consider changing this.
  instanceType: c5.xlarge
---
# Section containing the parameters of worker node group.
# https://deckhouse.io/documentation/v1/modules/040-node-manager/cr.html#nodegroup
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: AWSInstanceClass
      name: worker
    # The maximum number of instances for the group in each zone (used by the autoscaler).
    # You might consider changing this.
    maxPerZone: 1
    # The minimum number of instances for the group in each zone.
    minPerZone: 1
    # List of availability zones to create instances in.
    # You might consider changing this.
    zones:
      - eu-central-1a
  disruptions:
    approvalMode: Automatic
  nodeType: CloudEphemeral
---
# Section containing the parameters of NGINX Ingress controller.
# https://deckhouse.io/documentation/v1/modules/402-ingress-nginx/cr.html
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  loadBalancer:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
  # Describes on which nodes the Ingress controller will be located. The node.deckhouse.io/group: <NODE_GROUP_NAME> label is set automatically.
  nodeSelector:
    node.deckhouse.io/group: worker
---
# RBAC and authorization settings.
# https://deckhouse.io/documentation/v1/modules/140-user-authz/cr.html#clusterauthorizationrule
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  subjects:
  - kind: User
    name: admin@deckhouse.io
  accessLevel: SuperAdmin
  portForwarding: true
---
# Parameters of the static user.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/cr.html#user
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # User e-mail.
  email: admin@deckhouse.io
  # This is the hash of the <GENERATED_PASSWORD> password, generated when the Getting Started page was loaded.
  # Generate your own or use this one at your own risk (for testing purposes):
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
  # You might consider changing this.
  password: <GENERATED_PASSWORD_HASH>
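To use your own password instead of the generated one, produce the hash with the same pipeline as in the comment above (htpasswd ships with the apache2-utils package on Debian/Ubuntu; GNU base64 assumed). A sketch with a placeholder password:

# Produces a base64-encoded bcrypt hash for the password field of the User resource.
# "mysecretpassword" is a placeholder; substitute your own value.
echo "mysecretpassword" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0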

Use a Docker image to install the Deckhouse Kubernetes Platform. You will need to mount the configuration files, as well as the SSH keys for accessing the master node, into the container (the SSH key ~/.ssh/id_rsa is assumed below).
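If you do not have such a key yet, you can generate one. A sketch, assuming OpenSSH; the path matches what the commands below mount into the container:

# Generates an RSA key pair at ~/.ssh/id_rsa (you will be prompted for an optional passphrase).
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa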

Run the installer on your personal computer.

Linux / macOS:

docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
  -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ce/install:stable bash

Windows:

docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ce/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"

Now, to start the installation, execute the following inside the container:

dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml

The --ssh-user parameter here refers to the default user for the relevant VM image. It is ubuntu for the image suggested in this guide.

If the installation was interrupted (e.g., due to insufficient quotas or network errors), you can restart it. The installation will continue correctly, and no duplicate resources will be created in the cloud.

If the installation failed and you need to delete the resources created in the cloud, run the following command:

  dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml

The installation process may take from 5 to 30 minutes, depending on the connection.

After the installation is complete, the installer will output the IP address of the master node (you will need it later). Example output:

...
┌ 🎈 ~ Common: Kubernetes Master Node addresses for SSH
│ cloud-demo-master-0 | ssh ubuntu@1.2.3.4
└ 🎈 ~ Common: Kubernetes Master Node addresses for SSH (0.00 seconds)

Almost everything is ready for a fully-fledged Deckhouse Kubernetes Platform to work!
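To verify that the cluster is up, you can connect to the master node at the address from the output and list the nodes. A sketch, using the example address above (on Deckhouse master nodes, kubectl is typically available at /opt/deckhouse/bin/kubectl):

# Connect to the master node (replace 1.2.3.4 with the address from the installer output).
ssh ubuntu@1.2.3.4
# On the master node: list the cluster nodes; they should eventually report Ready.
sudo /opt/deckhouse/bin/kubectl get nodes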

Use a Docker image to install the Deckhouse Kubernetes Platform. You will need to mount the configuration files, as well as the SSH keys for accessing the master node, into the container (the SSH key ~/.ssh/id_rsa is assumed below).

Run the installer on your personal computer.

Linux / macOS:

echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
  -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash

Windows:

Log in to the container image registry on your personal computer, providing the license key as the password:

docker login -u license-token registry.deckhouse.io

Run a container with the installer:

docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl"  registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"
docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"

Now, to start the installation, execute the following inside the container:

dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml

The --ssh-user parameter here refers to the default user for the relevant VM image. It is ubuntu for the image suggested in this guide.

If the installation was interrupted (e.g., due to insufficient quotas or network errors), you can restart it. The installation will continue correctly, and no duplicate resources will be created in the cloud.

If the installation failed and you need to delete the resources created in the cloud, run the following command:

  dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml

The installation process may take from 5 to 30 minutes, depending on the connection.

After the installation is complete, the installer will output the IP address of the master node (you will need it later). Example output:

...
┌ 🎈 ~ Common: Kubernetes Master Node addresses for SSH
│ cloud-demo-master-0 | ssh ubuntu@1.2.3.4
└ 🎈 ~ Common: Kubernetes Master Node addresses for SSH (0.00 seconds)

Almost everything is ready for a fully-fledged Deckhouse Kubernetes Platform to work!