Deckhouse Platform in a private environment
At this point, you have created a cluster that consists of a single master node. Since only a limited set of system components run on the master node by default, you need to add at least one worker node for the cluster to work properly.
Add a new node to the cluster:
- Start a new virtual machine that will become the cluster node.
- Create a NodeGroup `worker`. To do so, run the following command on the master node:

```shell
sudo /opt/deckhouse/bin/kubectl create -f - << EOF
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
EOF
```
- Deckhouse will generate the script needed to configure the prospective node and add it to the cluster. Print its contents in Base64 format (you will need it at the next step):

```shell
sudo /opt/deckhouse/bin/kubectl -n d8-cloud-instance-manager get secret manual-bootstrap-for-worker -o json | jq '.data."bootstrap.sh"' -r
```
- On the virtual machine you have started, run the following command, pasting the script code from the previous step:

```shell
echo <Base64-SCRIPT-CODE> | base64 -d | sudo bash
```
Note that it may take some time to get all Deckhouse components up and running after the installation is complete.
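Once the bootstrap script has run, you can check that the new node has registered in the `worker` NodeGroup created earlier. This is a standard query against the NodeGroup custom resource; the exact columns displayed may vary between Deckhouse versions:

```shell
# Show NodeGroups and the number of ready nodes in each (run on the master node).
sudo /opt/deckhouse/bin/kubectl get nodegroups
```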
Before you go further:
If you have added additional nodes to the cluster, ensure they are `Ready`. On the master node, run the following command to get the list of nodes:
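```shell
# List cluster nodes and their status; all of them should be Ready.
sudo /opt/deckhouse/bin/kubectl get nodes
```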
Make sure the Kruise controller manager is `Ready` before continuing. On the master node, run the following command:

```shell
sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=kruise
```
Next, you will need to create an Ingress controller, a StorageClass for data storage, and a user to access the web interfaces, as well as configure DNS.
Setting up an Ingress controller
On the master node, create the `ingress-nginx-controller.yml` file containing the Ingress controller configuration:

```yaml
# Section containing the parameters of the NGINX Ingress controller.
# https://deckhouse.io/documentation/v1/modules/402-ingress-nginx/cr.html
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: nginx
spec:
  # The name of the Ingress class to use with the NGINX Ingress controller.
  ingressClass: nginx
  # The way traffic goes to the cluster from the outer network.
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
  # Describes on which nodes the component will be located.
  # You might consider changing this.
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
  - operator: Exists
```

Apply it using the following command on the master node:

```shell
sudo /opt/deckhouse/bin/kubectl create -f ingress-nginx-controller.yml
```

It may take some time to start the Ingress controller after installing Deckhouse. Make sure the Ingress controller has started before continuing (run on the master node):

```shell
sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx get po -l app=controller
```

Wait for the Ingress controller pods to switch to the `Ready` state.
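If you prefer not to poll manually, you can block until the controller pods report `Ready`. This is a convenience sketch using the same namespace and label selector as the command above; adjust the timeout to your environment:

```shell
# Wait up to 10 minutes for every Ingress controller pod to become Ready (run on the master node).
sudo /opt/deckhouse/bin/kubectl -n d8-ingress-nginx wait pod -l app=controller --for=condition=Ready --timeout=600s
```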
Creating a StorageClass
Configure the StorageClass for the local storage by running the following command on the master node:
```shell
sudo /opt/deckhouse/bin/kubectl create -f - << EOF
apiVersion: deckhouse.io/v1alpha1
kind: LocalPathProvisioner
metadata:
  name: localpath-deckhouse-system
spec:
  nodeGroups:
  - worker
  path: "/opt/local-path-provisioner"
EOF
```
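To see how the resulting storage is consumed, the sketch below lists the available StorageClasses and then requests a small test volume. It assumes the provisioner exposes a StorageClass named after the resource above (`localpath-deckhouse-system`); check the output of the first command and adjust the name if it differs. Depending on the volume binding mode, the claim may stay `Pending` until a Pod actually mounts it.

```shell
# List StorageClasses; the local-path one should appear here (run on the master node).
sudo /opt/deckhouse/bin/kubectl get storageclass

# Request a 1Gi test volume from the local-path StorageClass (assumed name).
sudo /opt/deckhouse/bin/kubectl create -f - << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: localpath-test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: localpath-deckhouse-system
  resources:
    requests:
      storage: 1Gi
EOF
```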
Create a user to access the cluster web interfaces
On the master node, create the `user.yml` file containing the user account data and access rights:

```yaml
# RBAC and authorization settings.
# https://deckhouse.io/documentation/v1/modules/140-user-authz/cr.html#clusterauthorizationrule
apiVersion: deckhouse.io/v1
kind: ClusterAuthorizationRule
metadata:
  name: admin
spec:
  # Kubernetes RBAC accounts list.
  subjects:
  - kind: User
    name: admin@deckhouse.io
  # Pre-defined access template.
  accessLevel: SuperAdmin
  # Allow user to do kubectl port-forward.
  portForwarding: true
---
# Parameters of the static user.
# https://deckhouse.io/documentation/v1/modules/150-user-authn/cr.html#user
apiVersion: deckhouse.io/v1
kind: User
metadata:
  name: admin
spec:
  # User e-mail.
  email: admin@deckhouse.io
  # This is a hash of the newly generated <GENERATED_PASSWORD> password.
  # Generate your own or use it at your own risk (for testing purposes):
  # echo "<GENERATED_PASSWORD>" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0
  # You might consider changing this.
  password: <GENERATED_PASSWORD_HASH>
```
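If you need to generate the password and its hash yourself, a minimal sketch is shown below. It assumes `htpasswd` (from the apache2-utils or httpd-tools package) and `openssl` are installed; it simply wraps the command already quoted in the comment above:

```shell
# Generate a random password and keep it somewhere safe.
PASSWORD="$(openssl rand -base64 12)"
echo "Password: $PASSWORD"

# Produce the base64-encoded bcrypt hash expected in the password field of the User resource.
echo "$PASSWORD" | htpasswd -BinC 10 "" | cut -d: -f2 | base64 -w0; echo
```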
Apply it using the following command on the master node:
```shell
sudo /opt/deckhouse/bin/kubectl create -f user.yml
```
Create DNS records to organize access to the cluster web interfaces:
- Discover the public IP address of the node where the Ingress controller is running.
- If you have a DNS server and can add DNS records:
  - If your cluster DNS name template is a wildcard DNS (e.g., `%s.kube.my`), add a corresponding wildcard A record containing the public IP you've discovered previously.
  - If your cluster DNS name template is NOT a wildcard DNS (e.g., `%s-kube.company.my`), add A or CNAME records containing the public IP you've discovered previously for the following Deckhouse service DNS names:

    ```
    api.example.com
    argocd.example.com
    cdi-uploadproxy.example.com
    dashboard.example.com
    documentation.example.com
    dex.example.com
    grafana.example.com
    hubble.example.com
    istio.example.com
    istio-api-proxy.example.com
    kubeconfig.example.com
    openvpn-admin.example.com
    prometheus.example.com
    status.example.com
    upmeter.example.com
    ```
- If you don't have a DNS server: on your PC, add static entries mapping the names of the specific services to the public IP (specify your public IP address in the `PUBLIC_IP` variable) to the `/etc/hosts` file on Linux (`%SystemRoot%\system32\drivers\etc\hosts` on Windows):

```shell
export PUBLIC_IP="<PUT_PUBLIC_IP_HERE>"
sudo -E bash -c "cat <<EOF >> /etc/hosts
$PUBLIC_IP api.example.com
$PUBLIC_IP argocd.example.com
$PUBLIC_IP cdi-uploadproxy.example.com
$PUBLIC_IP dashboard.example.com
$PUBLIC_IP documentation.example.com
$PUBLIC_IP dex.example.com
$PUBLIC_IP grafana.example.com
$PUBLIC_IP hubble.example.com
$PUBLIC_IP istio.example.com
$PUBLIC_IP istio-api-proxy.example.com
$PUBLIC_IP kubeconfig.example.com
$PUBLIC_IP openvpn-admin.example.com
$PUBLIC_IP prometheus.example.com
$PUBLIC_IP status.example.com
$PUBLIC_IP upmeter.example.com
EOF
"
```
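Once the records (or hosts entries) are in place, you can sanity-check that a service name resolves to the Ingress node and that the controller answers. A minimal sketch (`grafana.example.com` stands in for whichever name matches your DNS template; `-k` skips certificate verification in case a self-signed certificate is used):

```shell
# Expect an HTTP status code (e.g., a redirect to the authentication page) rather than a connection error.
curl -k -s -o /dev/null -w "%{http_code}\n" https://grafana.example.com
```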