If the infrastructure where Deckhouse Kubernetes Platform is running imposes restrictions on host-to-host network communication, the following conditions must be met:

  • Tunneling mode for traffic between pods is enabled (configuration for CNI Cilium, configuration for CNI Flannel).
  • VXLAN-encapsulated traffic between podSubnetCIDR addresses is allowed (relevant if traffic inside the VXLAN tunnel is inspected and filtered).
  • If there is integration with external systems (e.g. LDAP, SMTP, or other external APIs), network communication with them must be allowed.
  • Local network communication is fully allowed within each individual cluster node.
  • Inter-node communication is allowed on the ports listed in the tables on this page. Note that most ports are in the 4200-4299 range; as new platform components are added, they will be assigned ports from this range where possible.
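
To check whether the required ports are actually reachable between hosts, a minimal sketch using only the Python standard library may help; the host name and port list below are illustrative placeholders, not values taken from a real cluster:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: probe a few inter-node ports on a peer node.
# "node-1" and the port list are placeholders for illustration only.
for port in (4221, 4222, 10250):
    state = "open" if tcp_port_open("node-1", port, timeout=1.0) else "closed/filtered"
    print(port, state)
```

A TCP connect check like this only verifies reachability; it does not confirm that the expected service is listening on the port.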

Master to master nodes traffic

Port        Protocol  Purpose
2379, 2380  TCP       etcd replication
4200        TCP       Cluster API webhook handler
4201        TCP       VMware Cloud Director cloud provider webhook handler
4223        TCP       Deckhouse controller webhook handler

Master to nodes traffic

Port   Protocol  Purpose
22     TCP       SSH for bootstrapping static nodes by the static provider
10250  TCP       kubelet
4221   TCP       bashible apiserver for delivering node configurations
4227   TCP       runtime-audit-engine webhook handler

Nodes to masters traffic

Port  Protocol  Purpose
4234  UDP       NTP for time synchronization between nodes
6443  TCP       kube-apiserver for controllers running in the node's host network namespace
4203  TCP       machine-controller-manager metrics
4219  TCP       registry-packages-proxy (proxy for registry packages)
4222  TCP       Deckhouse controller metrics

Nodes to nodes traffic

Port          Protocol  Purpose
-             ICMP      node-to-node connectivity monitoring
7000-7999     TCP       sds-replicated-volume DRBD replication
8469, 8472    UDP       VXLAN for pod-to-pod traffic encapsulation
4204          TCP       Deckhouse controller debug
4205          TCP       ebpf-exporter metrics
4206          TCP       node-exporter module metrics
4207, 4208    TCP       ingress-nginx controller metrics for the HostWithFailover inlet
4209          TCP       Kubernetes control plane metrics
4210          TCP       kube-proxy metrics
4211          TCP       Cluster API metrics
4212          TCP       runtime-audit-engine module metrics
4213          TCP       kube-router metrics
9695          TCP       sds-node-configurator node agent metrics
3367          TCP       sds-replicated-volume module node agent API
9942          TCP       sds-replicated-volume node agent metrics
49152, 49153  TCP       Deckhouse Virtualization Platform VM live migration
4218, 4225    TCP       metallb and l2-load-balancer speaker memberlist ports
4218, 4225    UDP       metallb and l2-load-balancer speaker memberlist ports
4220, 4226    TCP       metallb and l2-load-balancer speaker metrics
4224          TCP       node-local-dns metrics
4240          TCP       CNI Cilium agent node-to-node health check
4241          TCP       CNI Cilium agent metrics
4242          TCP       CNI Cilium operator metrics
4244          TCP       cilium-hubble API
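
The port column in these tables mixes single ports, comma-separated lists, and ranges (for example 7000-7999). When generating firewall allowlists from the tables, a small helper can expand such specifications into plain port numbers; this is an illustrative sketch, not part of the platform:

```python
def expand_ports(spec: str) -> list[int]:
    """Expand a port spec such as '2379, 2380' or '7000-7999' into a sorted list of ints."""
    ports: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-", 1)
            ports.update(range(int(lo), int(hi) + 1))
        else:
            ports.add(int(part))
    return sorted(ports)

print(expand_ports("4218, 4225"))      # [4218, 4225]
print(len(expand_ports("7000-7999")))  # 1000
```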

External traffic to masters

Port       Protocol  Purpose
22, 22322  TCP       SSH for Deckhouse Kubernetes Platform initialization
6443       TCP       Direct access to the apiserver

External traffic to frontends

Port         Protocol  Purpose
80, 443      TCP       Application ports for HTTP and HTTPS requests to Ingress controllers. Note that these ports are configurable in the IngressNginxController resource and may vary between setups
5416         UDP       OpenVPN
5416         TCP       OpenVPN
10256        TCP       Health check port for external load balancers
30000-32767  TCP       NodePort range

External traffic for all nodes

Port  Protocol  Purpose
53    UDP       DNS
53    TCP       DNS
123   UDP       NTP for external time synchronization
443   TCP       Container registry
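
To confirm that nodes can resolve external names over DNS and complete a TLS handshake with the container registry on port 443, the following stdlib sketch may help; the registry host passed to it is an assumption to be replaced with your actual registry address:

```python
import socket
import ssl

def can_resolve(name: str) -> bool:
    """Return True if the name resolves via the node's configured DNS (port 53)."""
    try:
        return bool(socket.getaddrinfo(name, None))
    except socket.gaierror:
        return False

def registry_tls_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake with the registry host on port 443 succeeds."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

For example, `registry_tls_ok("registry.example.com")` (a placeholder host) would return False if outbound 443 is blocked or TLS interception breaks certificate validation.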