Cluster Configuration Planning
Before installing the virtualization platform, you need to plan its parameters:
- Choose the platform edition and release channel.
- Determine IP subnets:
  - Subnet used by the nodes to communicate with each other. This is the only subnet that actually exists in the organization’s network and is routable within your infrastructure.
  - Pod subnet (`podSubnetCIDR`).
  - Service subnet (`serviceSubnetCIDR`).
  - Subnets for virtual machine addresses (`virtualMachineCIDRs`).

  The node subnet must be a real network in your datacenter. The other subnets are virtual networks inside the cluster. They must not be routable outside the cluster and must not be advertised to either the public network or the organization’s network. You do not need to allocate separate VLANs or physical segments for them; it is enough to choose free private address ranges that do not overlap with existing networks.

  Example of choosing such subnets:

  ```yaml
  podSubnetCIDR: 10.88.0.0/16
  serviceSubnetCIDR: 10.99.0.0/16
  virtualMachineCIDRs:
    - 10.66.10.0/24
  ```

- Decide on the nodes where the Ingress controller will be deployed.
- Specify the public domain for the cluster:
  - A common practice is to use a wildcard domain that resolves to the address of the node with the Ingress controller;
  - The domain template for applications in this case will be `%s.<public wildcard domain of the cluster>`;
  - For test clusters, you can use a universal wildcard domain from the sslip.io service.

  The domain used in the template must not coincide with the domain specified in the `clusterDomain` parameter. For example, if `clusterDomain: cluster.local` (the default value) is used, then `publicDomainTemplate` cannot be `%s.cluster.local`.
- Choose the storage to be used:
- You can select a storage system from the supported list;
- Storage configuration will be done after the basic platform installation.
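The "no overlap" rule for the planned subnets can be checked before installation. Below is a minimal pure-bash sketch (IPv4 only); `ip2int` and `cidrs_overlap` are hypothetical helpers written for this illustration, not part of the installer:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight check: verify that the planned subnets do not overlap.
set -eu

ip2int() {                                  # "a.b.c.d" -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<<"$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidrs_overlap() {                           # true (exit 0) if the two CIDRs overlap
  local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
  local m=$(( p1 < p2 ? p1 : p2 ))          # compare under the shorter prefix
  local mask=$(( (0xFFFFFFFF << (32 - m)) & 0xFFFFFFFF ))
  (( ($(ip2int "$n1") & mask) == ($(ip2int "$n2") & mask) ))
}

# The example ranges from the planning step above.
subnets=("10.88.0.0/16" "10.99.0.0/16" "10.66.10.0/24")
for ((i = 0; i < ${#subnets[@]}; i++)); do
  for ((j = i + 1; j < ${#subnets[@]}; j++)); do
    if cidrs_overlap "${subnets[i]}" "${subnets[j]}"; then
      echo "OVERLAP: ${subnets[i]} and ${subnets[j]}"
    fi
  done
done
echo "check complete"   # only this line is printed when no overlaps are found
```

This only compares the cluster-internal ranges against each other; in practice you would also add the node subnet and any other routed corporate ranges to the list.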
Node Preparation
- Check virtualization support:
- Make sure that Intel VT-x (VMX) or AMD-V (SVM) virtualization support is enabled in the BIOS/UEFI on all cluster nodes.
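Once any Linux OS is running on a node, the BIOS/UEFI setting can be confirmed by looking for the corresponding CPU feature flags; a quick sketch (the message strings are illustrative):

```shell
#!/usr/bin/env bash
# Count CPU threads that expose the vmx (Intel) or svm (AMD) feature flag.
count=$(grep -c -E '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
count=${count:-0}

if [ "$count" -gt 0 ]; then
  echo "hardware virtualization is available on $count CPU threads"
else
  echo "no vmx/svm flags found: enable Intel VT-x or AMD-V in BIOS/UEFI"
fi
```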
- Install the operating system:
- Install one of the supported operating systems on each cluster node. Pay attention to the version and architecture of the system.
- Check access to the container image registry:
  - Ensure that each node has access to a container image registry. By default, the installer uses the public registry `registry.deckhouse.io`. Configure network connectivity and the necessary security policies to access this registry.
  - To check access, use the command `curl https://registry.deckhouse.io/v2/`. The response should be `401 Unauthorized`.
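The curl check can be made stricter by inspecting only the HTTP status code. A sketch, where `interpret_status` is a hypothetical helper and the diagnostic messages are illustrative:

```shell
#!/usr/bin/env bash
# Query the registry and interpret the HTTP status code explicitly.
interpret_status() {
  case "$1" in
    401) echo "registry reachable (401 Unauthorized is the expected answer)" ;;
    200) echo "registry reachable" ;;
    000|"") echo "no connection: check DNS, proxy, and firewall rules" ;;
    *)   echo "unexpected status $1: check proxy or mirror configuration" ;;
  esac
}

# -w '%{http_code}' prints only the status code; the body goes to /dev/null.
code=$(curl -s -o /dev/null -w '%{http_code}' https://registry.deckhouse.io/v2/) || true
interpret_status "$code"
```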
- Add a technical user:

  For automated installation and configuration of cluster components on the master node, a technical user must be created. The username can be anything; in this case, the name `dvpinstall` will be used.

  - Create a user with administrator privileges:

    ```shell
    sudo useradd -m -s /bin/bash -G sudo dvpinstall
    ```

  - Set a password (make sure to save the password, as it will be needed later):

    ```shell
    sudo passwd dvpinstall
    ```

  - (Optional) For convenience during the installation, you can allow the `dvpinstall` user to run `sudo` without a password:

    ```shell
    visudo
    # Add the following line:
    dvpinstall ALL=(ALL:ALL) NOPASSWD: ALL
    ```
- Set up SSH access:

  SSH access for the technical user must be configured on the master node.

  On the installation machine:

  - Generate an SSH key that will be used to access the nodes:

    ```shell
    ssh-keygen -t rsa -b 4096 -f dvp-install-key -N "" -C "dvp-node" -v
    ```

  - Using the password set earlier, allow SSH connections with the generated key:

    ```shell
    ssh-copy-id -i dvp-install-key dvpinstall@<master-node-address>
    ```
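The key-generation step can be dry-run locally to see exactly what `ssh-copy-id` will install on the master node: the public half of the key pair. A sketch using a scratch directory (the directory and the quiet `-q` flag are illustrative):

```shell
#!/usr/bin/env bash
# Local dry run: generate the key pair into a scratch directory and inspect it.
set -eu
keydir=$(mktemp -d)

# Same parameters as the installation key, with -q instead of -v for quiet output.
ssh-keygen -t rsa -b 4096 -f "$keydir/dvp-install-key" -N "" -C "dvp-node" -q

# The public half is what ssh-copy-id appends to ~/.ssh/authorized_keys
# for dvpinstall on the master node:
cat "$keydir/dvp-install-key.pub"

# Fingerprint, useful for comparing against `ssh-keygen -l` on the master node:
ssh-keygen -l -f "$keydir/dvp-install-key.pub"
```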
After completing all the steps, the cluster nodes will be ready for further installation and platform configuration. Ensure each step is completed correctly to avoid issues during installation.