Three layouts are supported; each of them is described below.
Standard
In this placement strategy, nodes do not have public IP addresses allocated to them; they use the NAT gateway service in Yandex Cloud to connect to the Internet. The NAT gateway uses random public IP addresses from dedicated ranges, so it is impossible to whitelist the IP addresses of cloud resources located behind a specific NAT gateway on the side of other services.
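The example below nevertheless sets externalIPAddresses for individual nodes: an entry in that list can be either a specific reserved public address or the special value "Auto", which requests an automatically allocated one. A minimal sketch of the two forms (the literal address is hypothetical):

externalIPAddresses:
  - "158.160.1.10"   # hypothetical pre-reserved static public address, pinned to the first node
  - "Auto"           # let Yandex Cloud allocate a public address automatically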
Example of the layout configuration:
apiVersion: deckhouse.io/v1
kind: YandexClusterConfiguration
layout: Standard
sshPublicKey: "<SSH_PUBLIC_KEY>"
nodeNetworkCIDR: 192.168.12.0/24
existingNetworkID: <EXISTING_NETWORK_ID>
provider:
  cloudID: <CLOUD_ID>
  folderID: <FOLDER_ID>
  serviceAccountJSON: |
    {
      "id": "id",
      "service_account_id": "service_account_id",
      "key_algorithm": "RSA_2048",
      "public_key": "-----BEGIN PUBLIC KEY-----\nMIIwID....AQAB\n-----END PUBLIC KEY-----\n",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE....1ZPJeBLt+\n-----END PRIVATE KEY-----\n"
    }
masterNodeGroup:
  replicas: 3
  zones:
    - ru-central1-a
    - ru-central1-b
    - ru-central1-d
  instanceClass:
    cores: 4
    memory: 8192
    imageID: <IMAGE_ID>
    externalIPAddresses:
      - "<ZONE_A_EXTERNAL_IP_MASTER_1>"
      - "Auto"
      - "Auto"
    externalSubnetIDs:
      - <ZONE_A_SUBNET_ID>
      - <ZONE_B_SUBNET_ID>
      - <ZONE_D_SUBNET_ID>
    additionalLabels:
      takes: priority
nodeGroups:
  - name: worker
    replicas: 2
    zones:
      - ru-central1-a
      - ru-central1-b
    instanceClass:
      cores: 4
      memory: 8192
      imageID: <IMAGE_ID>
      coreFraction: 50
      externalIPAddresses:
        - "Auto"
        - "Auto"
      externalSubnetIDs:
        - <ZONE_A_SUBNET_ID>
        - <ZONE_B_SUBNET_ID>
      additionalLabels:
        role: example
labels:
  billing: prod
dhcpOptions:
  domainName: test.local
  domainNameServers:
    - <DNS_SERVER_1>
    - <DNS_SERVER_2>
WithoutNAT
In this layout, NAT (of any kind) is not used, and every node is assigned a public IP address.
Caution! The cloud-provider-yandex module does not support Security Groups, so all cluster nodes will be reachable from the Internet without any connection restrictions.
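Note that in the examples the externalIPAddresses list contains exactly one entry per replica, since every node must receive an address. A fragment of a masterNodeGroup with three replicas and automatic allocation for all of them (a sketch, not a complete group definition):

masterNodeGroup:
  replicas: 3
  instanceClass:
    externalIPAddresses:   # one entry per replica
      - "Auto"
      - "Auto"
      - "Auto"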
Example of the layout configuration:
apiVersion: deckhouse.io/v1
kind: YandexClusterConfiguration
layout: WithoutNAT
provider:
  cloudID: <CLOUD_ID>
  folderID: <FOLDER_ID>
  serviceAccountJSON: |
    {
      "id": "id",
      "service_account_id": "service_account_id",
      "key_algorithm": "RSA_2048",
      "public_key": "-----BEGIN PUBLIC KEY-----\nMIIwID....AQAB\n-----END PUBLIC KEY-----\n",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE....1ZPJeBLt+\n-----END PRIVATE KEY-----\n"
    }
masterNodeGroup:
  replicas: 3
  instanceClass:
    cores: 4
    memory: 8192
    imageID: <IMAGE_ID>
    externalIPAddresses:
      - "Auto"
      - "Auto"
      - "Auto"
    externalSubnetIDs:
      - <ZONE_A_SUBNET_ID>
      - <ZONE_B_SUBNET_ID>
      - <ZONE_D_SUBNET_ID>
  zones:
    - ru-central1-a
    - ru-central1-b
    - ru-central1-d
nodeGroups:
  - name: worker
    replicas: 2
    instanceClass:
      cores: 4
      memory: 8192
      imageID: <IMAGE_ID>
      coreFraction: 50
      externalIPAddresses:
        - "<ZONE_A_EXTERNAL_IP_WORKER_1>"
        - "Auto"
      externalSubnetIDs:
        - <ZONE_A_SUBNET_ID>
        - <ZONE_B_SUBNET_ID>
    zones:
      - ru-central1-a
      - ru-central1-b
sshPublicKey: "<SSH_PUBLIC_KEY>"
nodeNetworkCIDR: 192.168.12.0/24
existingNetworkID: <EXISTING_NETWORK_ID>
dhcpOptions:
  domainName: test.local
  domainNameServers:
    - <DNS_SERVER_1>
    - <DNS_SERVER_2>
WithNATInstance
In this placement strategy, Deckhouse creates a NAT instance in a separate new subnet and adds a rule to the route table with a route to 0.0.0.0/0 that uses the NAT instance as the next hop. The separate subnet is needed to avoid a routing loop.
If the withNATInstance.internalSubnetID parameter is set, the NAT instance is created in that existing subnet.
If the withNATInstance.internalSubnetCIDR parameter is set, a new internal subnet is created, and the NAT instance is placed in it.
One of the withNATInstance.internalSubnetID or withNATInstance.internalSubnetCIDR parameters is required.
If withNATInstance.externalSubnetID is set in addition to one of the above, the NAT instance is also attached to that subnet via a secondary network interface.
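For instance, a sketch of the withNATInstance section using the internalSubnetCIDR variant, where Deckhouse creates the internal subnet itself (the CIDR value is illustrative):

withNATInstance:
  natInstanceExternalAddress: <NAT_INSTANCE_EXTERNAL_ADDRESS>
  internalSubnetCIDR: 10.222.0.0/24   # a new internal subnet with this CIDR is created for the NAT instance
  externalSubnetID: <EXTERNAL_SUBNET_ID>   # optional; attaches the NAT instance via a secondary interface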
Example of the layout configuration:
apiVersion: deckhouse.io/v1
kind: YandexClusterConfiguration
layout: WithNATInstance
withNATInstance:
  natInstanceExternalAddress: <NAT_INSTANCE_EXTERNAL_ADDRESS>
  internalSubnetID: <INTERNAL_SUBNET_ID>
  externalSubnetID: <EXTERNAL_SUBNET_ID>
provider:
  cloudID: <CLOUD_ID>
  folderID: <FOLDER_ID>
  serviceAccountJSON: |
    {
      "id": "id",
      "service_account_id": "service_account_id",
      "key_algorithm": "RSA_2048",
      "public_key": "-----BEGIN PUBLIC KEY-----\nMIIwID....AQAB\n-----END PUBLIC KEY-----\n",
      "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE....1ZPJeBLt+\n-----END PRIVATE KEY-----\n"
    }
masterNodeGroup:
  replicas: 1
  instanceClass:
    cores: 4
    memory: 8192
    imageID: <IMAGE_ID>
    externalIPAddresses:
      - "Auto"
    externalSubnetID: <EXTERNAL_SUBNET_ID>
  zones:
    - ru-central1-a
nodeGroups:
  - name: worker
    replicas: 1
    instanceClass:
      cores: 4
      memory: 8192
      imageID: <IMAGE_ID>
      coreFraction: 50
      externalIPAddresses:
        - "Auto"
      externalSubnetID: <EXTERNAL_SUBNET_ID>
    zones:
      - ru-central1-a
sshPublicKey: "<SSH_PUBLIC_KEY>"
nodeNetworkCIDR: 192.168.12.0/24
existingNetworkID: <EXISTING_NETWORK_ID>
dhcpOptions:
  domainName: test.local
  domainNameServers:
    - <DNS_SERVER_1>
    - <DNS_SERVER_2>