Deckhouse Kubernetes Platform for bare metal

At this point, you have created a cluster that consists of a single master node. Only a limited set of system components run on the master node by default. For the cluster to work properly, you have to either add at least one worker node or allow the rest of the Deckhouse components to run on the master node.

Select one of the two options below to continue installing the cluster:

Add a new node to the cluster (for more information about adding a static node to a cluster, read the documentation):

  • Start a new virtual machine that will become the cluster node.
  • Configure the StorageClass for the local storage by running the following command on the master node:
    sudo /opt/deckhouse/bin/kubectl create -f - << EOF
    apiVersion: deckhouse.io/v1alpha1
    kind: LocalPathProvisioner
    metadata:
      name: localpath-deckhouse
    spec:
      nodeGroups:
      - worker
      path: "/opt/local-path-provisioner"
    EOF
    
  • Make the created StorageClass the default one by adding the storageclass.kubernetes.io/is-default-class='true' annotation:

    sudo /opt/deckhouse/bin/kubectl annotate sc localpath-deckhouse storageclass.kubernetes.io/is-default-class='true'
    
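    To verify, you can list the StorageClasses; the default one is marked with "(default)" next to its name:

    sudo /opt/deckhouse/bin/kubectl get storageclass
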
  • Create a NodeGroup named worker. To do so, run the following command on the master node:

    sudo /opt/deckhouse/bin/kubectl create -f - << EOF
    apiVersion: deckhouse.io/v1
    kind: NodeGroup
    metadata:
      name: worker
    spec:
      nodeType: Static
    EOF
    
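    You can check that the resource has been created (NodeGroup is a Deckhouse custom resource; the command below assumes it is available to kubectl under the nodegroup name):

    sudo /opt/deckhouse/bin/kubectl get nodegroup worker
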
  • Deckhouse will generate the script needed to configure the prospective node and add it to the cluster. Print the script contents in Base64 format (you will need them in the next step):

    sudo /opt/deckhouse/bin/kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-worker -o json | jq '.data."bootstrap.sh"' -r
    
  • On the virtual machine you have started, run the following command, pasting in the Base64 script code from the previous step:

    echo <Base64-SCRIPT-CODE> | base64 -d | sudo bash
    
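    If you would like to inspect the script before running it, an equivalent two-step variant (a sketch, not part of the official steps) is to decode it into a file first:

    echo <Base64-SCRIPT-CODE> | base64 -d > bootstrap.sh
    sudo bash bootstrap.sh
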
  • If you have added additional nodes to the cluster, ensure they are Ready.

    On the master node, run the following command to get the list of nodes:

    sudo /opt/deckhouse/bin/kubectl get no
    

    Example of the output:

    $ sudo /opt/deckhouse/bin/kubectl get no
    NAME               STATUS   ROLES                  AGE    VERSION
    d8cluster          Ready    control-plane,master   30m   v1.23.17
    d8cluster-worker   Ready    worker                 10m   v1.23.17
    

Note that it may take some time to get all Deckhouse components up and running after the installation is complete.
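
A rough way to see which components are still starting is to list the pods that are not yet Running (a simple heuristic, not part of the official steps):

sudo /opt/deckhouse/bin/kubectl get pods -A | grep -vE 'Running|Completed'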

  • Make sure the Kruise controller manager is Ready before continuing.

    On the master node, run the following command:

    sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=kruise
    

    Example of the output:

    $ sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=kruise
    NAME                                         READY   STATUS    RESTARTS    AGE
    kruise-controller-manager-7dfcbdc549-b4wk7   3/3     Running   0           15m
    

Next, you will need to create an Ingress controller, a user to access the web interfaces, and configure the DNS.

  • Set up an Ingress controller

    On the master node, create the ingress-nginx-controller.yml file containing the Ingress controller configuration:

    # Section containing the parameters of NGINX Ingress controller.
    # https://deckhouse.io/documentation/v1/modules/402-ingress-nginx/cr.html
    apiVersion: deckhouse.io/v1
    kind: IngressNginxController
    metadata:
      name: nginx
    spec:
      ingressClass: nginx
      # The way traffic goes to the cluster from the outside network.
      inlet: HostPort
      hostPort:
        httpPort: 80
        httpsPort: 443
      # Describes on which nodes the Ingress Controller will be located.
      # You might consider changing this.
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      tolerations:
      - operator: Exists
    
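    If you have added a worker node and prefer to run the Ingress controller there instead of on the master node, you might adjust the nodeSelector section, for example as below (this assumes the worker node carries the node-role.kubernetes.io/worker label with an empty value, as suggested by the kubectl get no output above):

      # Run the Ingress controller on worker nodes instead of the master node.
      nodeSelector:
        node-role.kubernetes.io/worker: ""
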

    Apply it using the following command on the master node:

    sudo /opt/deckhouse/bin/kubectl create -f ingress-nginx-controller.yml
    
    It may take some time for the Ingress controller to start after installing Deckhouse. Make sure the Ingress controller has started before continuing (run on the master node):
    sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=controller
    
    Wait for the Ingress controller pods to switch to Ready state.

    Example of the output:

    $ sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=controller
    NAME                                       READY   STATUS    RESTARTS   AGE
    controller-nginx-r6hxc                     3/3     Running   0          5m
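
    If you prefer to wait non-interactively, you can use kubectl wait with the same namespace and label selector (a convenience one-liner, not part of the official steps):

    sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx wait --for=condition=Ready pod -l app=controller --timeout=300s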
    
  • Create a user to access the cluster web interfaces

    On the master node, create the user.yml file containing the user account data and access rights:

    apiVersion: deckhouse.io/v1
    kind: ClusterAuthorizationRule
    metadata:
      name: admin
    spec:
      # Kubernetes RBAC accounts list
      subjects:
      - kind: User
        name: admin@deckhouse.io
      # pre-defined access template
      accessLevel: SuperAdmin
      # allow user to do kubectl port-forward
      portForwarding: true
    ---
    # section containing the parameters of the static user
    # version of the Deckhouse API
    apiVersion: deckhouse.io/v1
    kind: User
    metadata:
      name: admin
    spec:
      # user e-mail
      email: admin@deckhouse.io
      # this is a hash of the password <GENERATED_PASSWORD>, generated now
      # generate your own or use it at your own risk (for testing purposes)
      # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
      # you might consider changing this
      password: <GENERATED_PASSWORD_HASH>
    
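    To generate your own password hash, you can use the command from the comment above, substituting your password for <GENERATED_PASSWORD> (the htpasswd utility is typically provided by the apache2-utils or httpd-tools package); put the result into the password field:

    echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
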

    Apply it using the following command on the master node:

    sudo /opt/deckhouse/bin/kubectl create -f user.yml
    
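    You can check that both objects have been created (User and ClusterAuthorizationRule are Deckhouse custom resources; the plural resource names below are an assumption):

    sudo /opt/deckhouse/bin/kubectl get clusterauthorizationrules,users.deckhouse.io
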
  • Create DNS records to organize access to the cluster web interfaces:
    • Discover the public IP address of the node where the Ingress controller is running.
    • If you have a DNS server and can add DNS records:
      • If your cluster DNS name template is a wildcard DNS (e.g., %s.kube.my), add a corresponding wildcard A record containing the public IP address you've discovered previously.
      • If your cluster DNS name template is NOT a wildcard DNS (e.g., %s-kube.company.my), add A or CNAME records containing the public IP address you've discovered previously for the following Deckhouse service DNS names:
        api.example.com
        argocd.example.com
        cdi-uploadproxy.example.com
        dashboard.example.com
        documentation.example.com
        dex.example.com
        grafana.example.com
        hubble.example.com
        istio.example.com
        istio-api-proxy.example.com
        kubeconfig.example.com
        openvpn-admin.example.com
        prometheus.example.com
        status.example.com
        upmeter.example.com
        
    • If you don't have a DNS server: on your PC, add static entries that map the names of specific services to the public IP address in the /etc/hosts file for Linux (%SystemRoot%\system32\drivers\etc\hosts for Windows). Specify your public IP address in the PUBLIC_IP variable:

      export PUBLIC_IP="<PUT_PUBLIC_IP_HERE>"
      sudo -E bash -c "cat <<EOF >> /etc/hosts
      $PUBLIC_IP api.example.com
      $PUBLIC_IP argocd.example.com
      $PUBLIC_IP cdi-uploadproxy.example.com
      $PUBLIC_IP dashboard.example.com
      $PUBLIC_IP documentation.example.com
      $PUBLIC_IP dex.example.com
      $PUBLIC_IP grafana.example.com
      $PUBLIC_IP hubble.example.com
      $PUBLIC_IP istio.example.com
      $PUBLIC_IP istio-api-proxy.example.com
      $PUBLIC_IP kubeconfig.example.com
      $PUBLIC_IP openvpn-admin.example.com
      $PUBLIC_IP prometheus.example.com
      $PUBLIC_IP status.example.com
      $PUBLIC_IP upmeter.example.com
      EOF
      "
      
      export PUBLIC_IP="<PUT_PUBLIC_IP_HERE>" sudo -E bash -c "cat <<EOF >> /etc/hosts $PUBLIC_IP api.example.com $PUBLIC_IP argocd.example.com $PUBLIC_IP cdi-uploadproxy.example.com $PUBLIC_IP dashboard.example.com $PUBLIC_IP documentation.example.com $PUBLIC_IP dex.example.com $PUBLIC_IP grafana.example.com $PUBLIC_IP hubble.example.com $PUBLIC_IP istio.example.com $PUBLIC_IP istio-api-proxy.example.com $PUBLIC_IP kubeconfig.example.com $PUBLIC_IP openvpn-admin.example.com $PUBLIC_IP prometheus.example.com $PUBLIC_IP status.example.com $PUBLIC_IP upmeter.example.com EOF "