Deckhouse Platform in Yandex.Cloud
Select the Deckhouse Platform revision
The recommended settings for a Deckhouse Platform Community Edition installation are generated below:
- config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
- resources.yml — a description of the resources to create after the installation (nodes description, Ingress controller description, etc.).
Please pay attention to:
- highlighted parameters you must define.
- parameters you might want to change.
The other available cloud provider-related options are described in the documentation.
To learn more about the Deckhouse Platform release channels, please see the relevant documentation.
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
# type of the cloud provider
provider: Yandex
# prefix to differentiate cluster objects (can be used, e.g., in routing)
prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.21"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
# the release channel in use
releaseChannel: Stable
configOverrides:
global:
modules:
# template that will be used for system apps domains within the cluster
# e.g., Grafana for %s.example.com will be available as grafana.example.com
publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
# the cloud ID
cloudID: *!CHANGE_CloudID*
# the folder ID
folderID: *!CHANGE_FolderID*
# a JSON key (formatted as a single line!) generated by `yc iam key create`
# and then processed with `cat deckhouse-sa-key.json | jq -c`
serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
# number of replicas
# if more than 1 master node exists, control-plane will be automatically deployed on all master nodes
replicas: 1
# Parameters of the VM image
instanceClass:
# CPU cores number
cores: 4
# RAM in MB
memory: 8192
# Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
# you can get its ID with the following one-liner:
# yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
# you might consider changing this
imageID: fd8firhksp7daa6msfes
# a list of IPs that will be assigned to masters; Auto means assign automatically
externalIPAddresses:
- "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
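As a rough illustration of the nodeNetworkCIDR comment above (an informal sketch, not the provider's exact allocation logic): splitting the /20 node network two prefix bits deeper yields four /22 subnets, three of which can back the three Yandex.Cloud availability zones.

```python
import ipaddress

# The node network from the configuration above.
node_network = ipaddress.ip_network("10.241.32.0/20")

# Two extra prefix bits turn one /20 into four /22 networks;
# three of them map onto the three availability zones.
subnets = list(node_network.subnets(prefixlen_diff=2))

zones = ["ru-central1-a", "ru-central1-b", "ru-central1-c"]
for zone, subnet in zip(zones, subnets):
    print(zone, subnet)
```

Running this prints one /22 per zone (10.241.32.0/22, 10.241.36.0/22, 10.241.40.0/22), leaving the fourth /22 unused in this sketch.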
If you use the WithNATInstance layout, the YandexClusterConfiguration section additionally includes the layout's parameters:
# special parameters for WithNATInstance layout
withNATInstance: {}
Deckhouse Platform Enterprise Edition license key
The license key is used by Deckhouse components to access the geo-distributed container registry where all images used by Deckhouse are stored.
The commands and configuration files on this page are generated using the license key you entered.
The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:
- config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
- resources.yml — a description of the resources to create after the installation (nodes description, Ingress controller description, etc.).
Please pay attention to:
- highlighted parameters you must define.
- parameters you might want to change.
The other available cloud provider-related options are described in the documentation.
To learn more about the Deckhouse Platform release channels, please see the relevant documentation.
# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
# type of the cloud provider
provider: Yandex
# prefix to differentiate cluster objects (can be used, e.g., in routing)
prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.21"
# cluster domain (used for local addressing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
# address of the Docker registry where the Deckhouse images are located
imagesRepo: registry.deckhouse.io/deckhouse/ee
# a special string with your token to access Docker registry (generated automatically for your license token)
registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
# the release channel in use
releaseChannel: Stable
configOverrides:
global:
modules:
# template that will be used for system apps domains within the cluster
# e.g., Grafana for %s.example.com will be available as grafana.example.com
publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: YandexClusterConfiguration
# pre-defined layout from Deckhouse
layout: <layout>
# Yandex account parameters
provider:
# the cloud ID
cloudID: *!CHANGE_CloudID*
# the folder ID
folderID: *!CHANGE_FolderID*
# a JSON key (formatted as a single line!) generated by `yc iam key create`
# and then processed with `cat deckhouse-sa-key.json | jq -c`
serviceAccountJSON: *!CHANGE_ServiceAccountJSON*
masterNodeGroup:
# number of replicas
# if more than 1 master node exists, control-plane will be automatically deployed on all master nodes
replicas: 1
# Parameters of the VM image
instanceClass:
# CPU cores number
cores: 4
# RAM in MB
memory: 8192
# Yandex.Cloud image ID. It is recommended to use the latest Ubuntu 20.04 LTS image;
# you can get its ID with the following one-liner:
# yc compute image list --folder-id standard-images --format json | jq -r '[.[] | select(.family == "ubuntu-2004-lts")] | sort_by(.created_at)[-1].id'
# you might consider changing this
imageID: fd8firhksp7daa6msfes
# a list of IPs that will be assigned to masters; Auto means assign automatically
externalIPAddresses:
- "Auto"
# this subnet will be split into three equal parts; they will serve as a basis for subnets in three Yandex.Cloud zones
nodeNetworkCIDR: "10.241.32.0/20"
# public SSH key for accessing cloud nodes
sshPublicKey: ssh-rsa <SSH_PUBLIC_KEY>
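The registryDockerCfg value is a base64-encoded Docker auth config for registry.deckhouse.io (the value on this page is generated from your license key automatically). As an illustrative sketch, assuming the standard dockerconfigjson layout with "license-token" as the user name, it could be built like this:

```python
import base64
import json

# Placeholder license key, as on this page; substitute your real key.
license_key = "<LICENSE_TOKEN>"

# Standard dockerconfigjson layout: the registry maps to a base64-encoded
# "user:password" pair.
auth = base64.b64encode(f"license-token:{license_key}".encode()).decode()
docker_cfg = {"auths": {"registry.deckhouse.io": {"auth": auth}}}

# The resulting string is what goes into registryDockerCfg.
registry_docker_cfg = base64.b64encode(json.dumps(docker_cfg).encode()).decode()
print(registry_docker_cfg)
```

This is only a sketch of the encoding; prefer the ready-made value generated for you above.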
Resources for the “Minimal” preset.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: worker
spec:
cloudInstances:
classReference:
kind: YandexInstanceClass
name: worker
maxPerZone: 1
minPerZone: 1
# you might consider changing this
zones:
- ru-central1-a
disruptions:
approvalMode: Automatic
nodeTemplate:
labels:
node.deckhouse.io/group: worker
nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
name: worker
spec:
# you might consider changing this
cores: 4
# you might consider changing this
memory: 8192
# you might consider changing this
diskSizeGB: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
ingressClass: nginx
inlet: LoadBalancer
# describes which nodes the component will run on; the node.deckhouse.io/group: <NODE_GROUP_NAME> label is set automatically
nodeSelector:
node.deckhouse.io/group: worker
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
email: admin@example.com
# this is the hash of the password <GENERATED_PASSWORD>, generated for this page
# generate your own, or use this one at your own risk (for testing purposes only):
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
Resources for the “Multi-master” preset.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: worker
spec:
cloudInstances:
classReference:
kind: YandexInstanceClass
name: worker
maxPerZone: 1
minPerZone: 1
# you might consider changing this
zones:
- ru-central1-a
- ru-central1-b
disruptions:
approvalMode: Automatic
nodeType: CloudEphemeral
---
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
name: worker
spec:
# you might consider changing this
cores: 4
# you might consider changing this
memory: 8192
# you might consider changing this
diskSizeGB: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
ingressClass: nginx
inlet: LoadBalancer
# describes which nodes the component will run on; the node.deckhouse.io/group: <NODE_GROUP_NAME> label is set automatically
nodeSelector:
node.deckhouse.io/group: worker
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
email: admin@example.com
# this is the hash of the password <GENERATED_PASSWORD>, generated for this page
# generate your own, or use this one at your own risk (for testing purposes only):
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
Resources for the “Recommended for production” preset.
# section containing the parameters of system node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: system
spec:
# parameters for provisioning the cloud-based VMs
cloudInstances:
# the reference to the InstanceClass object
classReference:
kind: YandexInstanceClass
name: system
# the maximum number of instances for the group in each zone
maxPerZone: 1
# the minimum number of instances for the group in each zone
minPerZone: 1
# list of availability zones to create instances in
# you might consider changing this
zones:
- ru-central1-a
- ru-central1-b
disruptions:
approvalMode: Automatic
nodeTemplate:
labels:
node-role.deckhouse.io/system: ""
taints:
- effect: NoExecute
key: dedicated.deckhouse.io
value: system
nodeType: CloudEphemeral
---
# section containing the parameters of instance class for system nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
# name of instance class
name: system
spec:
# you might consider changing this
cores: 4
# you might consider changing this
memory: 8192
# you might consider changing this
diskSizeGB: 30
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: frontend
spec:
cloudInstances:
classReference:
kind: YandexInstanceClass
name: frontend
# the maximum number of instances for the group in each zone
maxPerZone: 2
# the minimum number of instances for the group in each zone
minPerZone: 1
disruptions:
approvalMode: Automatic
nodeTemplate:
labels:
node-role.deckhouse.io/frontend: ""
taints:
- effect: NoExecute
key: dedicated.deckhouse.io
value: frontend
nodeType: CloudEphemeral
---
# section containing the parameters of instance class for frontend nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
# name of instance class
name: frontend
# you might consider changing this
spec:
cores: 2
memory: 4096
diskSizeGB: 30
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
name: worker
spec:
cloudInstances:
classReference:
kind: YandexInstanceClass
name: worker
maxPerZone: 1
minPerZone: 1
# you might consider changing this
zones:
- ru-central1-c
disruptions:
approvalMode: Automatic
nodeType: CloudEphemeral
---
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: YandexInstanceClass
metadata:
name: worker
# you might consider changing this
spec:
cores: 4
memory: 8192
diskSizeGB: 30
---
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
name: nginx
spec:
ingressClass: nginx
inlet: LoadBalancer
nodeSelector:
node-role.deckhouse.io/frontend: ""
maxReplicas: 3
minReplicas: 2
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
name: admin
spec:
# Kubernetes RBAC accounts list
subjects:
- kind: User
name: admin@example.com
# pre-defined access template
accessLevel: SuperAdmin
# allow user to do kubectl port-forward
portForwarding: true
---
apiVersion: deckhouse.io/v1
kind: User
metadata:
name: admin
spec:
email: admin@example.com
# this is the hash of the password <GENERATED_PASSWORD>, generated for this page
# generate your own, or use this one at your own risk (for testing purposes only):
# echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2
# you might consider changing this
password: <GENERATED_PASSWORD_HASH>
Use a Docker image to install the Deckhouse Platform. You need to pass the configuration files to the container, as well as the SSH keys for accessing the master nodes.
Run the installer on your personal computer.
On Linux / macOS:
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
-v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ce/install:stable bash
On Windows:
docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ce/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"
Now, to initiate the installation, execute inside the container:
dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml
The --ssh-user parameter here refers to the default user of the relevant VM image. It is ubuntu for the images suggested in this guide.
Notes:
- The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter saves the state of the Terraform installer to a temporary directory on the startup host, which allows the installation to resume correctly if the installer's container fails. If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file should be the same one you used to initiate the installation):
dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml
After the installation is complete, you will be returned to the command line.
Almost everything is ready for a fully-fledged Deckhouse Platform to work!
Use a Docker image to install the Deckhouse Platform. You need to pass the configuration files to the container, as well as the SSH keys for accessing the master nodes.
Run the installer on your personal computer.
On Linux / macOS, log in to the container image registry using the license key as the password, then run the installer container:
echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run --pull=always -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
-v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash
On Windows, first log in to the container image registry, providing the license key as the password:
docker login -u license-token registry.deckhouse.io
Then run a container with the installer:
docker run --pull=always -it -v "%cd%\config.yml:/config.yml" -v "%userprofile%\.ssh\:/tmp/.ssh/" -v "%cd%\resources.yml:/resources.yml" -v "%cd%\dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash -c "chmod 400 /tmp/.ssh/id_rsa; bash"
Now, to initiate the installation, execute inside the container:
dhctl bootstrap --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml --resources=/resources.yml
The --ssh-user parameter here refers to the default user of the relevant VM image. It is ubuntu for the images suggested in this guide.
Notes:
- The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter saves the state of the Terraform installer to a temporary directory on the startup host, which allows the installation to resume correctly if the installer's container fails. If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file should be the same one you used to initiate the installation):
dhctl bootstrap-phase abort --ssh-user=ubuntu --ssh-agent-private-keys=/tmp/.ssh/id_rsa --config=/config.yml
After the installation is complete, you will be returned to the command line.
Almost everything is ready for a fully-fledged Deckhouse Platform to work!