Deckhouse Platform in Google Cloud

Select the Deckhouse Platform revision

The recommended settings for a Deckhouse Platform Community Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources to install after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

The other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: GCP
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: GCPClusterConfiguration
# pre-defined layout from Deckhouse
layout: WithoutNAT
# GCP access parameters
provider:
  # Example of serviceAccountJSON:
  # serviceAccountJSON: |
  #     {
  #      "type": "service_account",
  #      "project_id": "somproject-sandbox",
  #      "private_key_id": "***",
  #      "private_key": "***",
  #      "client_email": "mail@somemail.com",
  #      "client_id": "<client_id>",
  #      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  #      "token_uri": "https://oauth2.googleapis.com/token",
  #      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  #      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/somproject-sandbox.gserviceaccount.com"
  #    }
  serviceAccountJSON: *!CHANGE_SA_JSON*
  # cluster region
  # you might consider changing this
  region: europe-west3
# list of labels to attach to cluster resources.
labels:
  kube: example
# parameters of the master node group
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # Parameters of the VM image
  instanceClass:
    # type of the instance
    # you might consider changing this
    machineType: n1-standard-4
    # Image id
    # you might consider changing this
    image: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20190911
    # disable public IP assignment for the cluster
    disableExternalIP: false
# a subnet to use for cluster nodes
subnetworkCIDR: 10.0.0.0/24
# public SSH key for accessing cloud nodes
sshKey: ssh-rsa <SSH_PUBLIC_KEY>
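
The serviceAccountJSON parameter above expects a GCP service account key in JSON format. If you do not have one yet, below is a minimal sketch of creating it with gcloud; the account name deckhouse and the broad roles/editor role are assumptions, so grant only the permissions your setup actually requires:

# create a service account (the name is an example)
gcloud iam service-accounts create deckhouse --project=<PROJECT_ID>
# grant it a role on the project; roles/editor is a broad example, narrow it as needed
gcloud projects add-iam-policy-binding <PROJECT_ID> \
  --member="serviceAccount:deckhouse@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/editor"
# generate the JSON key; paste its contents into serviceAccountJSON
gcloud iam service-accounts keys create sa-key.json \
  --iam-account="deckhouse@<PROJECT_ID>.iam.gserviceaccount.com"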

The same settings for the Standard layout (cluster nodes access the Internet through Cloud NAT) are generated below; note the layout and standard parameters in the GCPClusterConfiguration section:

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: GCP
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: GCPClusterConfiguration
# pre-defined layout from Deckhouse
layout: Standard
standard:
  # list of static public IP addresses for Cloud NAT
  # you might consider changing this
  cloudNATAddresses: []
# GCP access parameters
provider:
  # Example of serviceAccountJSON:
  # serviceAccountJSON: |
  #     {
  #      "type": "service_account",
  #      "project_id": "somproject-sandbox",
  #      "private_key_id": "***",
  #      "private_key": "***",
  #      "client_email": "mail@somemail.com",
  #      "client_id": "<client_id>",
  #      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  #      "token_uri": "https://oauth2.googleapis.com/token",
  #      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  #      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/somproject-sandbox.gserviceaccount.com"
  #    }
  serviceAccountJSON: *!CHANGE_SA_JSON*
  # cluster region
  # you might consider changing this
  region: europe-west3
# list of labels to attach to cluster resources.
labels:
  kube: example
# parameters of the master node group
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # Parameters of the VM image
  instanceClass:
    # type of the instance
    # you might consider changing this
    machineType: n1-standard-4
    # Image id
    # you might consider changing this
    image: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20190911
    # disable public IP assignment for the cluster
    disableExternalIP: false
# a subnet to use for cluster nodes
subnetworkCIDR: 10.0.0.0/24
# public SSH key for accessing cloud nodes
sshKey: ssh-rsa <SSH_PUBLIC_KEY>
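
The sshKey parameter expects the public part of the key that will be used to access the cluster nodes. If you do not have a key pair yet, you can generate one, e.g.:

# generate an RSA key pair; the public key goes into the sshKey parameter
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub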

Deckhouse Platform Enterprise Edition license key

The license key is used by Deckhouse components to access the geo-distributed container registry, where all images used by Deckhouse are stored.

The commands and configuration files on this page are generated using the license key you entered.

If you do not have a license key yet, you can request access, and the access credentials will be sent to you via email.

The recommended settings for a Deckhouse Platform Enterprise Edition installation are generated below:

  • config.yml — a file with the configuration needed to bootstrap the cluster. It contains the installer parameters, cloud provider-related parameters (such as credentials, instance type, etc.), and the initial cluster parameters.
  • resources.yml — a description of the resources to install after the installation (node descriptions, Ingress controller description, etc.).

Please pay attention to:

  • highlighted parameters you must define.
  • parameters you might want to change.

The other available cloud provider-related options are described in the documentation.

To learn more about the Deckhouse Platform release channels, please see the relevant documentation.

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: GCP
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access the Docker registry (generated automatically from your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: GCPClusterConfiguration
# pre-defined layout from Deckhouse
layout: WithoutNAT
# GCP access parameters
provider:
  # Example of serviceAccountJSON:
  # serviceAccountJSON: |
  #     {
  #      "type": "service_account",
  #      "project_id": "somproject-sandbox",
  #      "private_key_id": "***",
  #      "private_key": "***",
  #      "client_email": "mail@somemail.com",
  #      "client_id": "<client_id>",
  #      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  #      "token_uri": "https://oauth2.googleapis.com/token",
  #      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  #      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/somproject-sandbox.gserviceaccount.com"
  #    }
  serviceAccountJSON: *!CHANGE_SA_JSON*
  # cluster region
  # you might consider changing this
  region: europe-west3
# list of labels to attach to cluster resources.
labels:
  kube: example
# parameters of the master node group
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # Parameters of the VM image
  instanceClass:
    # type of the instance
    # you might consider changing this
    machineType: n1-standard-4
    # Image id
    # you might consider changing this
    image: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20190911
    # disable public IP assignment for the cluster
    disableExternalIP: false
# a subnet to use for cluster nodes
subnetworkCIDR: 10.0.0.0/24
# public SSH key for accessing cloud nodes
sshKey: ssh-rsa <SSH_PUBLIC_KEY>
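
The registryDockerCfg string is generated automatically from your license token. For reference, below is a sketch of how such an access string is typically composed, namely a base64-encoded Docker auth config; the exact format expected by the installer is described in the documentation:

# assemble a base64-encoded Docker auth config from the license token (a sketch,
# using the license-token username from the docker login command shown later)
LICENSE_KEY="<LICENSE_TOKEN>"
AUTH="$(echo -n "license-token:${LICENSE_KEY}" | base64 -w0)"
echo -n "{\"auths\":{\"registry.deckhouse.io\":{\"auth\":\"${AUTH}\"}}}" | base64 -w0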

The same settings for the Standard layout (cluster nodes access the Internet through Cloud NAT) are generated below; note the layout and standard parameters in the GCPClusterConfiguration section:

# general cluster parameters (ClusterConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: ClusterConfiguration
# type of the infrastructure: bare metal (Static) or Cloud (Cloud)
clusterType: Cloud
# cloud provider-related settings
cloud:
  # type of the cloud provider
  provider: GCP
  # prefix to differentiate cluster objects (can be used, e.g., in routing)
  prefix: "cloud-demo"
# address space of the cluster's Pods
podSubnetCIDR: 10.111.0.0/16
# address space of the cluster's services
serviceSubnetCIDR: 10.222.0.0/16
# Kubernetes version to install
kubernetesVersion: "1.19"
# cluster domain (used for local routing)
clusterDomain: "cluster.local"
---
# section for bootstrapping the Deckhouse cluster (InitConfiguration)
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: InitConfiguration
# Deckhouse parameters
deckhouse:
  # address of the Docker registry where the Deckhouse images are located
  imagesRepo: registry.deckhouse.io/deckhouse/ee
  # a special string with your token to access the Docker registry (generated automatically from your license token)
  registryDockerCfg: <YOUR_ACCESS_STRING_IS_HERE>
  # the release channel in use
  releaseChannel: Stable
  configOverrides:
    global:
      modules:
        # template that will be used for system apps domains within the cluster
        # e.g., Grafana for %s.example.com will be available as grafana.example.com
        publicDomainTemplate: "%s.example.com"
---
# section containing the parameters of the cloud provider
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
# type of the configuration section
kind: GCPClusterConfiguration
# pre-defined layout from Deckhouse
layout: Standard
standard:
  # list of static public IP addresses for Cloud NAT
  # you might consider changing this
  cloudNATAddresses: []
# GCP access parameters
provider:
  # Example of serviceAccountJSON:
  # serviceAccountJSON: |
  #     {
  #      "type": "service_account",
  #      "project_id": "somproject-sandbox",
  #      "private_key_id": "***",
  #      "private_key": "***",
  #      "client_email": "mail@somemail.com",
  #      "client_id": "<client_id>",
  #      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  #      "token_uri": "https://oauth2.googleapis.com/token",
  #      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  #      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/somproject-sandbox.gserviceaccount.com"
  #    }
  serviceAccountJSON: *!CHANGE_SA_JSON*
  # cluster region
  # you might consider changing this
  region: europe-west3
# list of labels to attach to cluster resources.
labels:
  kube: example
# parameters of the master node group
masterNodeGroup:
  # number of replicas
  # if more than one master node exists, the control plane will be deployed automatically on all of them
  replicas: 1
  # Parameters of the VM image
  instanceClass:
    # type of the instance
    # you might consider changing this
    machineType: n1-standard-4
    # Image id
    # you might consider changing this
    image: projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20190911
    # disable public IP assignment for the cluster
    disableExternalIP: false
# a subnet to use for cluster nodes
subnetworkCIDR: 10.0.0.0/24
# public SSH key for accessing cloud nodes
sshKey: ssh-rsa <SSH_PUBLIC_KEY>
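
The cloudNATAddresses list of the Standard layout can reference static external IP addresses reserved in advance. If you want to pin the Cloud NAT addresses, you can reserve them with gcloud, e.g. (the address name is an example):

# reserve a static external IP address in the cluster region for Cloud NAT
gcloud compute addresses create deckhouse-nat-1 --region=europe-west3
# list the reserved addresses to reference them in cloudNATAddresses
gcloud compute addresses list --filter="region:europe-west3"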

Resources for the “Minimal” preset.

---
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: GCPInstanceClass
metadata:
  # name of instance class
  name: worker
spec:
  diskSizeGb: 40
  # Machine type in use for this instance class
  # you might consider changing this
  machineType: n2-standard-4
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: GCPInstanceClass
      name: worker
    # the minimum number of instances for the group in each zone
    minPerZone: 1
    # the maximum number of instances for the group in each zone
    maxPerZone: 1
    # you might consider changing this
    zones:
    - europe-west3-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the Ingress nginx controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: LoadBalancer
  # describes on which nodes the component will be located
  nodeSelector:
    node-role.deckhouse.io/worker: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to use kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
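
The password field expects a password hash rather than plain text. One common way to produce a bcrypt hash is the htpasswd utility from apache2-utils; whether the platform expects the hash to be additionally encoded is described in the user-authn module documentation:

# produce a bcrypt hash (cost 10) of the password; cut strips the leading ":"
echo "<PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2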

Resources for the “Multi-master” preset.

---
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: GCPInstanceClass
metadata:
  # name of instance class
  name: worker
spec:
  diskSizeGb: 40
  # Machine type in use for this instance class
  # you might consider changing this
  machineType: n2-standard-4
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  cloudInstances:
    classReference:
      kind: GCPInstanceClass
      name: worker
    # the minimum number of instances for the group in each zone
    minPerZone: 2
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # you might consider changing this
    zones:
    - europe-west3-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the Ingress nginx controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: LoadBalancer
  # describes on which nodes the component will be located
  nodeSelector:
    node-role.deckhouse.io/worker: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to use kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
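
After the bootstrap finishes, you can check that the node groups described above were created and the machines joined the cluster. A sketch, assuming kubectl is configured for the root user on the master node:

# NodeGroup is a Deckhouse custom resource
sudo kubectl get nodegroups
# the worker machines should appear alongside the master
sudo kubectl get nodes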

Resources for the “Recommended for production” preset.

---
# section containing the parameters of instance class for system nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: GCPInstanceClass
metadata:
  # name of instance class
  name: system
spec:
  diskSizeGb: 40
  # Machine type in use for this instance class
  # you might consider changing this
  machineType: n2-standard-4
---
# section containing the parameters of system node group
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: system
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    # the reference to the InstanceClass object
    classReference:
      kind: GCPInstanceClass
      name: system
    # the minimum number of instances for the group in each zone
    minPerZone: 2
    # the maximum number of instances for the group in each zone
    maxPerZone: 2
    # list of availability zones to create instances in
    # you might consider changing this
    zones:
    - europe-west3-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/system: ""
    # similar to the .spec.taints field of the Node object
    # only effect, key, value fields are available
    taints:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: system
  nodeType: CloudEphemeral
---
# section containing the parameters of instance class for frontend nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: GCPInstanceClass
metadata:
  # name of instance class
  name: frontend
spec:
  diskSizeGb: 40
  # Machine type in use for this instance class
  # you might consider changing this
  machineType: n2-standard-4
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: frontend
spec:
  cloudInstances:
    classReference:
      kind: GCPInstanceClass
      name: frontend
    # the minimum number of instances for the group in each zone
    minPerZone: 2
    # the maximum number of instances for the group in each zone
    maxPerZone: 3
    # you might consider changing this
    zones:
    - europe-west3-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/frontend: ""
  nodeType: CloudEphemeral
---
# section containing the parameters of instance class for worker nodes
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: GCPInstanceClass
metadata:
  # name of instance class
  name: worker
spec:
  diskSizeGb: 40
  # Machine type in use for this instance class
  # you might consider changing this
  machineType: n2-standard-4
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  # parameters for provisioning the cloud-based VMs
  cloudInstances:
    classReference:
      kind: GCPInstanceClass
      name: worker
    # the minimum number of instances for the group in each zone
    minPerZone: 1
    # the maximum number of instances for the group in each zone
    maxPerZone: 1
    # you might consider changing this
    zones:
    - europe-west3-a
  disruptions:
    approvalMode: Automatic
  nodeTemplate:
    # similar to the standard metadata.labels field
    labels:
      node-role.deckhouse.io/worker: ""
  nodeType: CloudEphemeral
---
# section containing the parameters of nginx ingress controller
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # the name of the Ingress class to use with the Ingress nginx controller
  ingressClass: nginx
  # the way traffic goes to the cluster from the outer network
  inlet: LoadBalancer
  # describes on which nodes the component will be located
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
---
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@example.com
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow the user to use kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@example.com
  # this is a hash for generated password: <GENERATED_PASSWORD>
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
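
Once these resources are applied, you can verify that the Ingress controller received an external load balancer address. A sketch, assuming the controller runs in the d8-ingress-nginx namespace (adjust if your setup differs):

# show the LoadBalancer service created for the nginx Ingress controller
sudo kubectl -n d8-ingress-nginx get svc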

To install the Deckhouse Platform, we will use a prebuilt Docker image. The configuration files, as well as the SSH keys for accessing the master nodes, must be passed into the container:

docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
 -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl"  registry.deckhouse.io/deckhouse/ce/install:stable bash

Now, to start the installation, execute the following inside the container:

dhctl bootstrap \
  --ssh-user=<username> \
  --ssh-agent-private-keys=/tmp/.ssh/id_rsa \
  --config=/config.yml \
  --resources=/resources.yml

Here, username refers to user (for the images suggested in this documentation). Notes:

  • The -v "$PWD/dhctl-tmp:/tmp/dhctl" parameter saves the state of the Terraform installer to a temporary directory on the host running the installation, so the installation can resume correctly if the installer container fails.

  • If any problems occur, you can cancel the installation and remove all created objects using the following command (the configuration file must be the same one you used to initiate the installation):

    dhctl bootstrap-phase abort --config=/config.yml
    

After the installation is complete, you will be returned to the command line.

Almost everything is ready for a fully-fledged Deckhouse Platform to work!

For the Enterprise Edition, the installer image is pulled from the Deckhouse EE registry, so you first need to log in with your license token and then run the installer container, passing in the configuration files and SSH keys as before:

echo <LICENSE_TOKEN> | docker login -u license-token --password-stdin registry.deckhouse.io
docker run -it -v "$PWD/config.yml:/config.yml" -v "$HOME/.ssh/:/tmp/.ssh/" \
 -v "$PWD/resources.yml:/resources.yml" -v "$PWD/dhctl-tmp:/tmp/dhctl" registry.deckhouse.io/deckhouse/ee/install:stable bash

Now start the installation inside the container with the same dhctl bootstrap command described above for the Community Edition. All of the notes given there (saving the installer state to the dhctl-tmp directory, aborting a failed installation with dhctl bootstrap-phase abort) apply here as well.

After the installation is complete, you will be returned to the command line.

Almost everything is ready for a fully-fledged Deckhouse Platform to work!
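
To make sure the cluster is operational, you can connect to the master node over SSH using the key you passed to the installer and list the nodes. A sketch (the master address is printed by dhctl at the end of the bootstrap; kubectl access for the root user on the master is an assumption):

# connect to the master node created by the installer
ssh user@<MASTER_IP>
# on the master node, list the cluster nodes
sudo kubectl get nodes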