Deckhouse Kubernetes Platform for bare metal
At this point, you have created a cluster consisting of a single node — the master node. By default, only a limited set of system components runs on the master node. To ensure the full functionality of the cluster, you need to either add at least one worker node to the cluster or allow the remaining Deckhouse components to run on the master node.
Select one of the two options below to continue installing the cluster:
Add a new node to the cluster (for more information about adding a static node to a cluster, read the documentation):
- Start a new virtual machine that will become the cluster node.
- Configure the StorageClass for the local storage by running the following command on the master node:

```shell
sudo -i d8 k create -f - << EOF
apiVersion: deckhouse.io/v1alpha1
kind: LocalPathProvisioner
metadata:
  name: localpath
spec:
  path: "/opt/local-path-provisioner"
  reclaimPolicy: Delete
EOF
```
- Set the created StorageClass as the default one in the cluster:

```shell
sudo -i d8 k patch mc global --type merge \
  -p "{\"spec\": {\"settings\":{\"defaultClusterStorageClass\":\"localpath\"}}}"
```
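As an optional check, you can list the StorageClasses; the class backed by the `localpath` provisioner should be marked as the default (a minimal sketch, assuming the standard kubectl behavior behind `d8 k`):

```shell
# The StorageClass created for the localpath provisioner should be shown as "(default)".
sudo -i d8 k get storageclass
```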
- Create a NodeGroup `worker`. To do so, run the following command on the master node:

```shell
sudo -i d8 k create -f - << EOF
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  staticInstances:
    count: 1
    labelSelector:
      matchLabels:
        role: worker
EOF
```
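Optionally, you can keep an eye on the NodeGroup itself; this assumes the NodeGroup custom resource can be queried by its kind name, like the other Deckhouse resources used above:

```shell
# Shows the worker NodeGroup; its node counters will update once a StaticInstance is bound.
sudo -i d8 k get nodegroup worker
```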
- Generate a new SSH key with an empty passphrase. To do so, run the following command on the master node:

```shell
ssh-keygen -t rsa -f /dev/shm/caps-id -C "" -N ""
```
- Create an SSHCredentials resource in the cluster. To do so, run the following command on the master node:

```shell
sudo -i d8 k create -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: SSHCredentials
metadata:
  name: caps
spec:
  user: caps
  privateSSHKey: "`cat /dev/shm/caps-id | base64 -w0`"
EOF
```
- Print the public part of the previously generated SSH key (you will need it in the next step). To do so, run the following command on the master node:

```shell
cat /dev/shm/caps-id.pub
```
- Create the `caps` user on the virtual machine you have started. To do so, run the following command, specifying the public part of the SSH key obtained in the previous step:

```shell
# Specify the public part of the user SSH key.
export KEY='<SSH-PUBLIC-KEY>'

useradd -m -s /bin/bash caps
usermod -aG sudo caps
echo 'caps ALL=(ALL) NOPASSWD: ALL' | sudo EDITOR='tee -a' visudo
mkdir /home/caps/.ssh
echo $KEY >> /home/caps/.ssh/authorized_keys
chown -R caps:caps /home/caps
chmod 700 /home/caps/.ssh
chmod 600 /home/caps/.ssh/authorized_keys
```
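Before moving on, you may want to confirm that the master node can reach the new machine as `caps` using the generated key. This is an optional sanity check; `<NODE-IP-ADDRESS>` is a placeholder for the address of the virtual machine:

```shell
# Log in as caps with the generated key and confirm that passwordless sudo works.
ssh -i /dev/shm/caps-id caps@<NODE-IP-ADDRESS> 'sudo -n true && echo OK'
```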
- Create a StaticInstance for the node to be added. To do so, run the following command on the master node (specify the IP address of the node):

```shell
# Specify the IP address of the node you want to connect to the cluster.
export NODE=<NODE-IP-ADDRESS>

sudo -i d8 k create -f - <<EOF
apiVersion: deckhouse.io/v1alpha1
kind: StaticInstance
metadata:
  name: d8cluster-worker
  labels:
    role: worker
spec:
  address: "$NODE"
  credentialsRef:
    kind: SSHCredentials
    name: caps
EOF
```
If you have added additional nodes to the cluster, ensure they are Ready. On the master node, run the following command to get the list of nodes:

```shell
sudo -i d8 k get nodes
```
Note that it may take some time to get all Deckhouse components up and running after the installation is complete.
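Once the new node appears in the list, you can optionally block until all nodes report Ready instead of re-running the command; a `kubectl wait` sketch with an arbitrary timeout:

```shell
# Wait until every registered node in the cluster reports the Ready condition.
sudo -i d8 k wait --for=condition=Ready node --all --timeout=20m
```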
Make sure the Kruise controller manager is Ready before continuing. On the master node, run the following command:

```shell
sudo -i d8 k -n d8-ingress-nginx get po -l app=kruise
```
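Instead of polling the command above, you can wait for the pod directly; a sketch with an arbitrary timeout:

```shell
# Block until the Kruise controller manager pod reports Ready (or the timeout expires).
sudo -i d8 k -n d8-ingress-nginx wait --for=condition=Ready po -l app=kruise --timeout=300s
```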
Next, you will need to create an Ingress controller, a user to access the web interfaces, and configure the DNS.
Setting up an Ingress controller
On the master node, create the `ingress-nginx-controller.yml` file containing the Ingress controller configuration:

```yaml
# Section containing the parameters of NGINX Ingress controller.
# https://deckhouse.io/products/kubernetes-platform/documentation/v1/modules/402-ingress-nginx/cr.html
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  ingressClass: nginx
  # The way traffic goes to cluster from the outer network.
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
  # Describes on which nodes the Ingress Controller will be located.
  # You might consider changing this.
  # More examples here:
  # https://deckhouse.io/products/kubernetes-platform/documentation/v1/modules/402-ingress-nginx/examples.html
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
```

Apply it using the following command on the master node:
```shell
sudo -i d8 k create -f ingress-nginx-controller.yml
```

It may take some time to start the Ingress controller after installing Deckhouse. Make sure the Ingress controller has started before continuing (run on the master node):

```shell
sudo -i d8 k -n d8-ingress-nginx get po -l app=controller
```

Wait for the Ingress controller pods to switch to the Ready state.
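Once the pods are Ready, you can optionally probe the controller from the master node. With the `HostPort` inlet configured above, nginx listens on ports 80/443 of the node, so an HTTP 404 for an unknown host name is usually a sign that the controller is serving traffic:

```shell
# A 404 response code here means nginx is up and answering on the host port.
curl -sk -o /dev/null -w '%{http_code}\n' https://127.0.0.1/
```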
Create a user to access the cluster web interfaces
On the master node, create the `user.yml` file containing the user account data and access rights:

```yaml
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list
  subjects:
  - kind: User
    name: admin@deckhouse.io
  # pre-defined access template
  accessLevel: SuperAdmin
  # allow user to do kubectl port-forward
  portForwarding: true
---
# section containing the parameters of the static user
# version of the Deckhouse API
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # user e-mail
  email: admin@deckhouse.io
  # this is a hash of the password <GENERATED_PASSWORD>, generated now
  # generate your own or use it at your own risk (for testing purposes):
  #   echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
  # you might consider changing this
  password: <GENERATED_PASSWORD_HASH>
```

Apply it using the following command on the master node:

```shell
sudo -i d8 k create -f user.yml
```
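If you want to set your own password instead of the placeholder hash, the commented command in the manifest shows how to produce the value for the `password` field. For example (the password below is purely illustrative; `htpasswd` comes from the apache2-utils or httpd-tools package):

```shell
# Produce a base64-encoded bcrypt hash suitable for the password field in user.yml.
echo "MyStrongPassword" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
```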
- Create DNS records to organize access to the cluster web interfaces:
  - Discover the public IP address of the node where the Ingress controller is running.
  - If you have a DNS server and can add DNS records:
    - If your cluster DNS name template is a wildcard DNS (e.g., `%s.kube.my`), add a corresponding wildcard A record containing the public IP you discovered previously.
    - If your cluster DNS name template is NOT a wildcard DNS (e.g., `%s-kube.company.my`), add A or CNAME records containing the public IP you discovered previously for the following Deckhouse service DNS names:

```text
api.example.com
argocd.example.com
dashboard.example.com
documentation.example.com
dex.example.com
grafana.example.com
hubble.example.com
istio.example.com
istio-api-proxy.example.com
kubeconfig.example.com
openvpn-admin.example.com
prometheus.example.com
status.example.com
upmeter.example.com
```

    - Important: The domain used in the template should not match the domain specified in the clusterDomain parameter or the internal service network zone. For example, if clusterDomain is set to `cluster.local` (the default value) and the service network zone is `ru-central1.internal`, then publicDomainTemplate cannot be `%s.cluster.local` or `%s.ru-central1.internal`. (A sketch for checking or changing the template follows after this list.)
  - If you don't have a DNS server: on your PC, add static entries that map the names of the services above to the public IP. Add them to the `/etc/hosts` file on Linux (`%SystemRoot%\system32\drivers\etc\hosts` on Windows), specifying your public IP address in the `PUBLIC_IP` variable:

```shell
export PUBLIC_IP="<PUT_PUBLIC_IP_HERE>"
sudo -E bash -c "cat <<EOF >> /etc/hosts
$PUBLIC_IP api.example.com
$PUBLIC_IP argocd.example.com
$PUBLIC_IP dashboard.example.com
$PUBLIC_IP documentation.example.com
$PUBLIC_IP dex.example.com
$PUBLIC_IP grafana.example.com
$PUBLIC_IP hubble.example.com
$PUBLIC_IP istio.example.com
$PUBLIC_IP istio-api-proxy.example.com
$PUBLIC_IP kubeconfig.example.com
$PUBLIC_IP openvpn-admin.example.com
$PUBLIC_IP prometheus.example.com
$PUBLIC_IP status.example.com
$PUBLIC_IP upmeter.example.com
EOF
"
```
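As mentioned in the note about the DNS name template, the `*.example.com` names above come from the `publicDomainTemplate` setting of the `global` ModuleConfig. Below is a sketch for checking and, if needed, changing it, assuming the standard layout of the `global` ModuleConfig settings (`%s.example.com` is only an example value):

```shell
# Show the current template used to build the service DNS names (path assumed per the global ModuleConfig layout).
sudo -i d8 k get mc global -o jsonpath='{.spec.settings.modules.publicDomainTemplate}'; echo

# Switch to a different template if your DNS zone differs (example value shown).
sudo -i d8 k patch mc global --type merge \
  -p '{"spec":{"settings":{"modules":{"publicDomainTemplate":"%s.example.com"}}}}'
```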