Cluster Configuration Planning

Before installing the virtualization platform, you need to plan its parameters:

  1. Choose the platform edition and release channel.

  2. Determine IP subnets:

    • Subnet used by the nodes to communicate with each other.
      This is the only one of these subnets that actually exists in the organization’s network and is routable within your infrastructure.

    • Pod subnet (podSubnetCIDR).
    • Service subnet (serviceSubnetCIDR).
    • Subnets for virtual machine addresses (virtualMachineCIDRs).

    The node subnet must be a real network in your datacenter. The other subnets are virtual networks inside the cluster. They must not be routable outside the cluster and must not be advertised to either the public network or the organization’s network. You do not need to allocate separate VLANs or physical segments for them; it is enough to choose free private address ranges that do not overlap with existing networks.

    Example of choosing such subnets:

    • podSubnetCIDR: 10.88.0.0/16
    • serviceSubnetCIDR: 10.99.0.0/16
    • virtualMachineCIDRs: 10.66.10.0/24
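
    For orientation, the sketch below shows where these values are typically set in the installer configuration. It assumes the standard Deckhouse ClusterConfiguration resource and the virtualization module's ModuleConfig; verify the exact schema against the documentation for your platform version before use:

      # Fragment of the installer configuration (hypothetical file config.yml).
      apiVersion: deckhouse.io/v1
      kind: ClusterConfiguration
      clusterType: Static          # assuming a bare-metal (static) cluster
      podSubnetCIDR: 10.88.0.0/16
      serviceSubnetCIDR: 10.99.0.0/16
      clusterDomain: "cluster.local"
      ---
      # VM address ranges are configured in the virtualization module settings.
      apiVersion: deckhouse.io/v1alpha1
      kind: ModuleConfig
      metadata:
        name: virtualization
      spec:
        enabled: true
        version: 1
        settings:
          virtualMachineCIDRs:
            - 10.66.10.0/24
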
  3. Decide on the nodes where the Ingress controller will be deployed.

  4. Specify the public domain for the cluster:
    • A common practice is to use a wildcard domain that resolves to the address of the node hosting the Ingress controller.
    • In this case, the domain template for applications will be %s.<public wildcard domain of the cluster>.
    • For test clusters, you can use a universal wildcard domain from the sslip.io service.

      The domain used in the template must not coincide with the domain specified in the clusterDomain parameter. For example, if clusterDomain: cluster.local (the default value) is used, then publicDomainTemplate cannot be %s.cluster.local.
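
    As a sketch, assuming the standard global ModuleConfig (verify the schema for your platform version) and using %s.example.com as a placeholder wildcard domain, the template could be set as follows:

      apiVersion: deckhouse.io/v1alpha1
      kind: ModuleConfig
      metadata:
        name: global
      spec:
        version: 1
        settings:
          modules:
            # For a test cluster, an sslip.io-based template such as
            # "%s.<node-IP>.sslip.io" can be used instead.
            publicDomainTemplate: "%s.example.com"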

  5. Choose the storage to be used.

Node Preparation

  1. Check virtualization support:
    • Make sure that hardware virtualization support, Intel VT-x (VMX) or AMD-V (SVM), is enabled in the BIOS/UEFI on all cluster nodes.
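    • One quick way to verify this on a running Linux node (a generic sanity check, not part of the platform tooling):

      # A non-zero count means hardware virtualization is exposed to the OS;
      # 0 means VMX/SVM is disabled in BIOS/UEFI or unsupported by this CPU.
      grep -E -c '(vmx|svm)' /proc/cpuinfo
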
  2. Install the operating system.
  3. Check access to the container image registry:
    • Ensure that each node has access to a container image registry. By default, the installer uses the public registry registry.deckhouse.io. Configure network connectivity and any security policies needed to reach this registry.
    • To check access, run:

      curl -i https://registry.deckhouse.io/v2/

      The response should include the status 401 Unauthorized.
  4. Add a technical user:

    For automated installation and configuration of cluster components on the master node, a technical user must be created. The username can be anything; this guide uses dvpinstall.

    • Create a user with administrator privileges:

      # The sudo group grants admin rights on Debian/Ubuntu; on RHEL-family systems, use -G wheel instead.
      sudo useradd -m -s /bin/bash -G sudo dvpinstall
      
    • Set a password for the user (store it securely; it will be needed later):

      sudo passwd dvpinstall
      
    • (Optional) For convenience during installation, you can allow the dvpinstall user to run sudo without a password:

      visudo   
      # Add the following line:    
      dvpinstall ALL=(ALL:ALL) NOPASSWD: ALL
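
    • (Optional) Verify that passwordless sudo works; the -n flag makes sudo fail instead of prompting for a password (run this as root or another sudoer):

      sudo -u dvpinstall sudo -n true && echo "NOPASSWD sudo OK"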
      
  5. Set up SSH access:

    SSH access for the technical user must be configured on the master node.

    On the installation machine:

    • Generate an SSH key that will be used to access the nodes:

      # -N "" generates the key without a passphrase; restrict access to the private key file.
      ssh-keygen -t rsa -b 4096 -f dvp-install-key -N "" -C "dvp-node" -v
      
    • Copy the public key to the master node using the password set earlier, enabling SSH connections with the generated key:

      ssh-copy-id -i dvp-install-key dvpinstall@<master-node-address>
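
    • Verify that key-based login works; the BatchMode option makes SSH fail instead of falling back to a password prompt:

      ssh -i dvp-install-key -o BatchMode=yes dvpinstall@<master-node-address> 'echo "SSH key authentication OK"'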
      

After completing all of these steps, the cluster nodes are ready for platform installation and configuration. Verify that each step completed correctly to avoid issues during the installation.