Deckhouse Platform in Yandex.Cloud

Select the Deckhouse Platform revision

The recommended settings for a Deckhouse Platform Community Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
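
The serviceAccountJSON value can be prepared with the yc CLI mentioned in the comments above. Below is a minimal sketch; the service account name and the editor role are assumptions, adjust them to your own access policies:

# create a service account for Deckhouse (the name "deckhouse" is an example)
yc iam service-account create --name deckhouse
# grant the service account access to the target folder (the editor role is an assumption)
yc resource-manager folder add-access-binding <FOLDER_ID> \
  --role editor \
  --subject serviceAccount:<SERVICE_ACCOUNT_ID>
# generate the JSON key and collapse it to a single line for serviceAccountJSON
yc iam key create --service-account-name deckhouse --output deckhouse-sa-key.json
cat deckhouse-sa-key.json | jq -c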

The recommended settings for a Deckhouse Platform Community Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
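
The cloudID and folderID values can be looked up with the yc CLI if you do not have them at hand; for example:

# list the clouds and folders available to the current yc profile
yc resource-manager cloud list
yc resource-manager folder list
# or print the IDs configured in the active yc profile
yc config list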

The recommended settings for a Deckhouse Platform Community Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# special parameters for WithNATInstance layout
withNATInstance: {}
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
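
The sshPublicKey parameter expects the public part of an SSH key pair that will be provisioned on the cloud nodes. If you do not have one yet, it can be generated as follows (the file path is just an example):

# generate an RSA key pair and print the public key for sshPublicKey
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub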

Deckhouse Platform Enterprise Edition license key

The license key is used by Deckhouse components to access the geo-distributed container registry, where all images used by Deckhouse are stored.

The commands and configuration files on this page are generated using the license key you entered.


The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access Docker registry (generated automatically for your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
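
The registryDockerCfg string is generated for you from the license key you entered. For reference, a sketch of how such a string can be assembled manually is shown below; it assumes the standard Base64-encoded Docker config format expected by Deckhouse:

# build a dockercfg-style access string for registry.deckhouse.io from a license key
LICENSE_KEY="<LICENSE_TOKEN>"
AUTH="$(echo -n "license-token:${LICENSE_KEY}" | base64 -w0)"
echo -n "{\"auths\":{\"registry.deckhouse.io\":{\"auth\":\"${AUTH}\"}}}" | base64 -w0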

Deckhouse Platform Enterprise Edition license key

The license key is used by Deckhouse components to access the geo-distributed container registry, where all images used by Deckhouse are stored.

The commands and configuration files on this page are generated using the license key you entered.


The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access Docker registry (generated automatically for your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
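
The one-liner from the imageID comment can be run directly to pick up a fresh Ubuntu 20.04 LTS image ID before editing config.yml:

# fetch the ID of the most recent Ubuntu 20.04 LTS image from the standard-images folder
IMAGE_ID="$(yc compute image list --folder-id standard-images --format json \
  | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id')"
echo "${IMAGE_ID}"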

Deckhouse Platform Enterprise Edition license key

The license key is used by Deckhouse components to access the geo-distributed container registry, where all images used by Deckhouse are stored.

The commands and configuration files on this page are generated using the license key you entered.


The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources that must be created after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

Other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: Yandex
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access Docker registry (generated automatically for your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# special parameters for WithNATInstance layout
withNATInstance: {}
# Yandex account parameters
provider:
  # the cloud ID
  cloudID: *!CHANGE_CloudID*
  # the folder ID
  folderID: *!CHANGE_FolderID*
  # a JSON key (formatted as a single line!) generated by `yc iam key create`
  # and then processed with `cat deckhouse-sa-key.json | jq -c`
  serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # parameters of the VM
  instanceClass:
    # number of CPU cores
    cores: 4
    # RAM in MB
    memory: 8192
    # Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
    # to get one you can use this one-liner:
    # yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
    # you might consider changing this
    imageID: fd83klic6c8gfgi40urb
    # a list of IPs that will be assigned to masters; Auto means assign automatically
    externalIPAddresses:
    - "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
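
Before bootstrapping, it may be worth confirming that the service account referenced in serviceAccountJSON really has access to the folder. A quick check with the yc CLI (the folder ID is a placeholder):

# list access bindings on the folder and make sure the Deckhouse service account is present
yc resource-manager folder list-access-bindings <FOLDER_ID>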

Resources for the “Minimal” preset.

apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: YandexInstanceClass
      name: worker
    maxPerZone: 1
    minPerZone: 1
    # you might consider changing this
    zones:
    - ru-central1-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
  name: worker
spec:
  # you might consider changing this
  cores: 4
  # you might consider changing this
  memory: 8192
  # you might consider changing this
  diskSizeGb: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  nodeSelector:
    node-role.deckhouse.io/worker: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow user to do kubectl port-forward
  portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
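
The password field of the User resource stores a hash rather than the plain-text password. A sketch of producing a bcrypt hash with htpasswd is shown below; the exact encoding expected by your Deckhouse version is described in the user-authn module documentation, so treat this as an assumption:

# generate a bcrypt hash of the chosen password for the User resource
echo -n "<MY_PASSWORD>" | htpasswd -inBC 10 "" | cut -d: -f2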

Resources for the “Multi-master” preset.

apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: YandexInstanceClass
      name: worker
    maxPerZone: 2
    minPerZone: 2
    # you might consider changing this
    zones:
    - ru-central1-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
  name: worker
spec:
  # you might consider changing this
  cores: 4
  # you might consider changing this
  memory: 8192
  # you might consider changing this
  diskSizeGb: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  nodeSelector:
    node-role.deckhouse.io/worker: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow user to do kubectl port-forward
  portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
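
After the cluster has been bootstrapped with this preset, you can check that the Ingress controller's LoadBalancer has received an external address. The sketch below assumes the standard d8-ingress-nginx namespace used by Deckhouse:

# check the external IP assigned to the Ingress controller's LoadBalancer service
kubectl -n d8-ingress-nginx get svc
kubectl -n d8-ingress-nginx get pods -o wide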

Resources for the “Recommended for production” preset.

apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system
spec:
  cloudInstances:
    classReference:
      kind: YandexInstanceClass
      name: system
    maxPerZone: 1
    minPerZone: 1
    # you might consider changing this
    zones:
    - ru-central1-a
    - ru-central1-b
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    labels:
      node-role.deckhouse.io/system: ""
    taints:
      - effect: NoExecute
        key: dedicated.deckhouse.io
        value: system
  nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
  name: system
spec:
  # you might consider changing this
  cores: 4
  # you might consider changing this
  memory: 8192
  # you might consider changing this
  diskSizeGb: 30
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: frontend
spec:
  cloudInstances:
    classReference:
      kind: YandexInstanceClass
      name: frontend
    maxPerZone: 2
    minPerZone: 1
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    labels:
      node-role.deckhouse.io/frontend: ""
    taints:
      - effect: NoExecute
        key: dedicated.deckhouse.io
        value: frontend
  nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
  name: frontend
# you might consider changing this
spec:
  cores: 2
  memory: 4096
  diskSizeGb: 30
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: YandexInstanceClass
      name: worker
    maxPerZone: 1
    minPerZone: 1
    # you might consider changing this
    zones:
    - ru-central1-c
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
  name: worker
# you might consider changing this
spec:
  cores: 4
  memory: 8192
  diskSizeGb: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
  maxReplicas: 3
  minReplicas: 2
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow user to do kubectl port-forward
  portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
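
Once the cluster is up, the node groups described above can be inspected to verify that the machines were ordered and have joined the cluster. A sketch assuming kubectl access to the cluster:

# list Deckhouse NodeGroups and the resulting Kubernetes nodes
kubectl get nodegroups
kubectl get nodes -o wide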

To install the Deckhouse Platform, we will use a prebuilt Docker image. The configuration files, as well as the SSH keys for accessing the master nodes, must be mounted into the container:

docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
 -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl"  registry.deckhouse.io/deckhouse/ce/install:stable bash
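
The install:stable tag matches the Stable release channel set in config.yml. If you picked another channel, the corresponding installer tag can be used instead; for example, for the Early Access channel (tag name shown as an assumption):

docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
 -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ce/install:early-access bash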

Now, to initiate the process of installation, you need to execute inside the container:

dhctl bootstrap \
  --ssh-user=<username> \
  --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
  --config=/config.yml \
  --resources=/resources.yml

The username parameter here refers to ubuntu (for the images suggested in this documentation). Notes:

  • The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter enables saving the state of the Terraform installer to a temporary directory on the startup host. It allows the installation to continue correctly in case of a failure of the installer’s container.

  • If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file should be the same one you used to initiate the installation):

    dhctl bootstrap-phase abort --config=/config.yml

After the installation is complete, you will be returned to the command line.

Almost everything is ready for a fully-fledged Deckhouse Platform to work!
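
As a quick sanity check, you can log in to the master node and make sure the Deckhouse pod is running. The commands below assume the default ubuntu user, the master's public IP as a placeholder, and that kubectl is configured for the root user on the master:

# connect to the master node and check the Deckhouse controller
ssh ubuntu@<MASTER_IP>
sudo kubectl -n d8-system get pods -l app=deckhouse
sudo kubectl get nodes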

To install the Deckhouse Platform, we will use a prebuilt Docker image. The configuration files, as well as the SSH keys for accessing the master nodes, must be mounted into the container:

 echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
 -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl"  registry.deckhouse.io/deckhouse/ee/install:stable bash

Now, to initiate the process of installation, you need to execute inside the container:

dhctl bootstrap \
  --ssh-user=<username> \
  --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
  --config=/config.yml \
  --resources=/resources.yml

The username parameter here refers to ubuntu (for the images suggested in this documentation). Notes:

  • The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter enables saving the state of the Terraform installer to a temporary directory on the startup host. It allows the installation to continue correctly in case of a failure of the installer’s container.

  • If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file should be the same one you used to initiate the installation):

    dhctl bootstrap-phase abort --config=/config.yml

After the installation is complete, you will be returned to the command line.

Almost everything is ready for a fully-fledged Deckhouse Platform to work!