The module lifecycle stage: Preview
This guide describes the process of creating and modifying resources to manage a software-defined network.
Preparing the cluster for module use
Initial infrastructure setup:
- For creating additional networks based on tagged VLANs:
  - Allocate VLAN ID ranges on the data center switches and configure them on the corresponding switch interfaces.
  - Select physical interfaces on the nodes for subsequent configuration of tagged VLAN interfaces. You can reuse interfaces already used by the DKP local network.
- For creating additional networks based on direct, untagged access to a network interface:
  - Reserve separate physical interfaces on the nodes and connect them into a single local network at the data center level.
After enabling the module, NodeNetworkInterface resources will automatically appear in the cluster, reflecting the current state of the nodes.
To check for resources, use the command:
d8 k get nodenetworkinterface

Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
virtlab-ap-0-nic-1c61b4a68c2a Deckhouse virtlab-ap-0 NIC eth1 3 Up 35d
virtlab-ap-0-nic-fc34970f5d1f Deckhouse virtlab-ap-0 NIC eth0 2 Up 35d
virtlab-ap-1-nic-1c61b4a6a0e7 Deckhouse virtlab-ap-1 NIC eth1 3 Up 35d
virtlab-ap-1-nic-fc34970f5c8e Deckhouse virtlab-ap-1 NIC eth0 2 Up 35d
virtlab-ap-2-nic-1c61b4a6800c Deckhouse virtlab-ap-2 NIC eth1 3 Up 35d
virtlab-ap-2-nic-fc34970e7ddb Deckhouse virtlab-ap-2 NIC eth0 2 Up 35d
When discovering node interfaces, the controller adds the following service labels and annotations to them:
labels:
  network.deckhouse.io/interface-mac-address: fa163eebea7b
  network.deckhouse.io/interface-type: VLAN
  network.deckhouse.io/vlan-id: 900
  network.deckhouse.io/node-name: worker-01
annotations:
  network.deckhouse.io/heritage: NetworkController

In this example, each cluster node has two network interfaces: eth0 (DKP local network) and eth1 (dedicated interface for additional networks).
Next, you need to label the reserved interfaces with an appropriate tag for additional networks:
d8 k label nodenetworkinterface virtlab-ap-0-nic-1c61b4a68c2a nic-group=extra
d8 k label nodenetworkinterface virtlab-ap-1-nic-1c61b4a6a0e7 nic-group=extra
d8 k label nodenetworkinterface virtlab-ap-2-nic-1c61b4a6800c nic-group=extra

Additionally, to increase bandwidth, you can combine multiple physical interfaces into one virtual interface (Bond).

Note: A Bond interface can only be created between NIC interfaces that are located on the same physical or virtual host.

Example of configuring a Bond interface:
The nodenetworkinterface resource can be abbreviated to nni.
Set custom labels on the interfaces that will be combined into the Bond interface.
d8 k label nni right-worker-b23d3a26-5fb4b-f545g-nic-fa163efbde48 nni.example.com/bond-group=bond0
d8 k label nni right-worker-b23d3a26-5fb4b-f545g-nic-fa40asdxzx78 nni.example.com/bond-group=bond0

Prepare the configuration for creating the interface and apply it:
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterface
metadata:
  name: nni-worker-01-bond0
spec:
  nodeName: worker-01
  type: Bond
  heritage: Manual
  bond:
    bondName: bond0
    memberNetworkInterfaces:
      - labelSelector:
          matchLabels:
            network.deckhouse.io/node-name: worker-01 # Service label that selects interfaces on the specific node to combine into the Bond interface.
            nni.example.com/bond-group: bond0 # Custom label that you set yourself on the selected interfaces.

Example of checking the status of the created Bond interface
To obtain a list of Bond interfaces, use the command:
d8 k get nni

Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
nni-worker-01-bond0 Manual worker-01-b23d3a26-5fb4b-5s9fp Bond bond0 76 Up 7m48s
...
To obtain information about the interface status, use the command:
d8 k get nni nni-worker-01-bond0 -o yaml

Example of interface status:
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterface
metadata:
  ...
status:
  conditions:
    - lastProbeTime: "2025-09-30T09:00:54Z"
      lastTransitionTime: "2025-09-30T09:00:39Z"
      message: Interface created
      reason: Created
      status: "True"
      type: Exists
    - lastProbeTime: "2025-09-30T09:00:54Z"
      lastTransitionTime: "2025-09-30T09:00:39Z"
      message: Interface is up and ready to send packets
      reason: Up
      status: "True"
      type: Operational
  deviceMAC: 6a:c7:ab:2a:a6:1e
  groupedLinks:
    - deviceMAC: fa:16:3e:92:14:40
      type: NIC
  ifIndex: 76
  ifName: bond0
  managedBy: Manual
  operationalState: Up
  permanentMAC: ""

Configure and connect additional virtual networks for use in application pods
Administrative resources
ClusterNetwork
To create a network available to all projects, use the ClusterNetwork resource.
Example for a network based on tagged traffic:
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  name: my-cluster-network
spec:
  type: VLAN
  vlan:
    id: 900
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra # Manually applied label on NodeNetworkInterface resources.

After creating the ClusterNetwork, you can check its status with the command:

d8 k get clusternetworks.network.deckhouse.io my-cluster-network -o yaml

Example of the status of a ClusterNetwork resource
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  ...
status:
  bridgeName: d8-br-900
  conditions:
    - lastTransitionTime: "2025-09-29T14:39:20Z"
      message: All node interface attachments are ready
      reason: AllNodeInterfaceAttachmentsAreReady
      status: "True"
      type: AllNodeAttachementsAreReady
    - lastTransitionTime: "2025-09-29T14:39:20Z"
      message: Network is operational
      reason: NetworkReady
      status: "True"
      type: Ready
  nodeAttachementsCount: 1
  observedGeneration: 1
  readyNodeAttachementsCount: 1

After you create a Network or ClusterNetwork, the controller creates a NodeNetworkInterfaceAttachment tracking resource to link it to a NodeNetworkInterface. You can check the status and readiness of your system by running the following commands:
d8 k get nnia
d8 k get nnia my-cluster-network-... -o yaml

Sample NodeNetworkInterfaceAttachment resource
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterfaceAttachment
metadata:
  ...
  finalizers:
    - network.deckhouse.io/nni-network-interface-attachment
    - network.deckhouse.io/pod-network-interface-attachment
  generation: 1
  name: my-cluster-network-...
  ...
spec:
  networkRef:
    kind: ClusterNetwork
    name: my-cluster-network
  parentNetworkInterfaceRef:
    name: right-worker-b23d3a26-5fb4b-h2bkv-nic-fa163eebea7b
  type: VLAN
status:
  bridgeNodeNetworkInterfaceName: right-worker-b23d3a26-5fb4b-h2bkv-bridge-900
  conditions:
    - lastTransitionTime: "2025-09-29T14:39:06Z"
      message: Vlan created
      reason: VLANCreated
      status: "True"
      type: Exist
    - lastTransitionTime: "2025-09-29T14:39:06Z"
      message: Bridged successfully
      reason: VLANBridged
      status: "True"
      type: Ready
  nodeName: right-worker-b23d3a26-5fb4b-h2bkv
  vlanNodeNetworkInterfaceName: right-worker-b23d3a26-5fb4b-h2bkv-vlan-900-60f3dc

When you create a Network or ClusterNetwork resource of the VLAN type, the system first brings up the VLAN interface and connects it to the bridge.
After both interfaces — VLAN and Bridge — appear in the system and switch to the Up state, the statuses of all NodeNetworkInterfaceAttachment will change to True.
To check the status of NodeNetworkInterface, use the command:
d8 k get nni

Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
...
right-worker-b23d3a26-5fb4b-h2bkv-bridge-900 Deckhouse right-worker-b23d3a26-5fb4b-h2bkv Bridge d8-br-900 684 Up 14h
right-worker-b23d3a26-5fb4b-h2bkv-nic-fa163eebea7b Deckhouse right-worker-b23d3a26-5fb4b-h2bkv NIC ens3 2 Up 19d
right-worker-b23d3a26-5fb4b-h2bkv-vlan-900-60f3dc Deckhouse right-worker-b23d3a26-5fb4b-h2bkv VLAN ens3.900 683 Up 14h
...
Example for a network based on direct interface access:
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  name: my-cluster-network
spec:
  type: Access
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra # Manually applied label on NodeNetworkInterface resources.

NetworkClass
The NetworkClass resource lets users create their own dedicated networks based on tagged traffic while preventing them from affecting the infrastructure. It provides:
- Restriction of the set of physical network devices on the nodes.
- Limitation of the VLAN ID ranges available to users.
Example:
apiVersion: network.deckhouse.io/v1alpha1
kind: NetworkClass
metadata:
  name: my-network-class
spec:
  vlan:
    idPool:
      - 600-800
      - 1200
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra

IPAM: IP address pools for additional networks
The IPAM mechanism allows you to automatically allocate and assign IPv4 addresses for additional network interfaces of pods connected to cluster networks and project networks.
Principles and features of IPAM in DKP
For each required IP address, an IPAddress object (or a ClusterIPAddress for cluster networks) is created, which references the project network (Network) or cluster network (ClusterNetwork). The controller allocates an address from the pool and stores the result in the status.address, status.network, and status.routes fields of the IPAddress object. The agent on the node assigns the IP address and routes to the interface inside the pod and sets the IPAddress.status.conditions[Attached] condition and status.usedByPods.
To protect against conflicts, a cluster-scoped object IPAddressLease is created, which reserves the IP address. When an IPAddress object is deleted, the corresponding IPAddressLease is marked as orphaned (using the status.orphaningTimestamp field) and holds the address for the time specified in the spec.ttl parameter (to avoid rapid reuse).
For usage details (including ipAddressNames/skipIPAssignment), see the User Guide.
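To make the flow above concrete, the sketch below shows what such an object might look like. This is an illustrative assumption only: the spec layout (networkRef) is modeled on other resources in this module, and the status values are made up; consult the User Guide for the authoritative IPAddress schema.

```yaml
# Hypothetical IPAddress sketch; spec.networkRef is an assumed field name.
apiVersion: network.deckhouse.io/v1alpha1
kind: IPAddress
metadata:
  name: my-pod-address
  namespace: my-namespace
spec:
  networkRef:          # Assumption: reference to the network the address belongs to.
    kind: Network
    name: my-network
status:                # Filled in by the controller, as described above.
  address: 192.168.10.51
  conditions:
    - type: Attached   # Set by the node agent once the address is assigned in the pod.
      status: "True"
```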
Example of assigning IP addresses to additional network interfaces of pods connected to the project network
To allocate a pool of addresses (the subnets connected to the network) for a project network (Network), create an IPAddressPool resource in the same namespace as the project network.
To allocate a pool of addresses and assign them to network interfaces of pods connected to the project network, perform the following steps:
1. Create an address pool. To do this, use the IPAddressPool resource.

   Example:

   apiVersion: network.deckhouse.io/v1alpha1
   kind: IPAddressPool
   metadata:
     name: my-net-pool
     namespace: my-namespace
   spec:
     leaseTTL: 1h
     pools:
       - network: 192.168.10.0/24
         ranges:
           - 192.168.10.50-192.168.10.200
         routes:
           - destination: 10.10.0.0/16
             via: 192.168.10.1

   The spec.pools[].ranges parameter is optional. If it is not specified, the entire CIDR from spec.pools[].network is considered available (except for the network and broadcast addresses; see the behavior of /31 and /32).

2. Enable IPAM on the network. To do this, specify the IPAddressPool created in the previous step in the spec.ipam.ipAddressPoolRef parameter of the Network resource.

   Example:

   apiVersion: network.deckhouse.io/v1alpha1
   kind: Network
   metadata:
     name: my-network
     namespace: my-namespace
   spec:
     networkClass: my-network-class
     ipam:
       ipAddressPoolRef:
         kind: IPAddressPool
         name: my-net-pool
Example of assigning IP addresses to additional network interfaces of pods connected to the cluster network
To allocate an address pool for a cluster network (created using the ClusterNetwork resource), use the ClusterIPAddressPool resource.
To allocate a pool of addresses and assign them to network interfaces of pods connected to the cluster network, perform the following steps:
1. Create an address pool. To do this, use the ClusterIPAddressPool resource.

   Example:

   apiVersion: network.deckhouse.io/v1alpha1
   kind: ClusterIPAddressPool
   metadata:
     name: public-net-pool
   spec:
     leaseTTL: 24h
     pools:
       - network: 203.0.113.0/24
         ranges:
           - 203.0.113.10-203.0.113.200

   The spec.pools[].ranges parameter is optional. If it is not specified, the entire CIDR from spec.pools[].network is considered available (except for the network and broadcast addresses; see the behavior of /31 and /32).

2. Enable IPAM on the network. To do this, specify the ClusterIPAddressPool created in the previous step in the spec.ipam.ipAddressPoolRef parameter of the ClusterNetwork resource:

   apiVersion: network.deckhouse.io/v1alpha1
   kind: ClusterNetwork
   metadata:
     name: my-cluster-network
   spec:
     type: VLAN
     vlan:
       id: 900
     parentNodeNetworkInterfaces:
       labelSelector:
         matchLabels:
           nic-group: extra
     ipam:
       ipAddressPoolRef:
         kind: ClusterIPAddressPool
         name: public-net-pool
Configuring physical interfaces for direct attachment to application pods
The UnderlayNetwork resource enables direct hardware device passthrough to pods via Kubernetes Dynamic Resource Allocation (DRA). This allows DPDK applications and other high-performance workloads to access physical network interfaces (PF/VF) directly, bypassing the kernel network stack.
Prerequisites for DPDK applications
Before configuring UnderlayNetwork resources, you need to prepare the worker nodes for DPDK applications:
Configuring hugepages
DPDK applications require hugepages for efficient memory management. Configure hugepages on all worker nodes using NodeGroupConfiguration:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: hugepages-for-dpdk
spec:
  nodeGroups:
    - "*" # Apply to all node groups.
  weight: 100
  content: |
    #!/bin/bash
    echo "vm.nr_hugepages = 4096" > /etc/sysctl.d/99-hugepages.conf
    sysctl -p /etc/sysctl.d/99-hugepages.conf

This configuration sets vm.nr_hugepages = 4096 on all nodes, providing 8 GiB of hugepages (4096 pages × 2 MiB per page).
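After the configuration is applied, you can verify the allocation directly on a node. This uses the standard Linux /proc interface for hugepages accounting and does not depend on DKP:

```shell
# Check hugepages accounting on the node (standard Linux /proc interface).
# With the sysctl above applied and a default 2 MiB page size, expect
# HugePages_Total: 4096 and Hugepagesize: 2048 kB.
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo
```

If HugePages_Total stays at 0 after applying the sysctl, memory may be too fragmented for the kernel to reserve the pages; rebooting the node usually resolves this.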
Configuring Topology Manager
For optimal performance, enable Topology Manager on NodeGroups of worker nodes where DPDK applications will run. This ensures that CPU, memory, and device resources are allocated from the same NUMA node.
Example NodeGroup configuration:
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  kubelet:
    topologyManager:
      enabled: true
      policy: SingleNumaNode
      scope: Container
  nodeType: Static

For more information, see:
Prerequisites
Before creating an UnderlayNetwork, ensure that:
- Physical network interfaces (NICs) are available on the nodes and are discovered as NodeNetworkInterface resources.
- The interfaces you plan to use are Physical Functions (PF), not Virtual Functions (VF).
- For Shared mode, the NICs must support SR-IOV.
Preparing NodeNetworkInterface resources
First, check which Physical Functions are available on your nodes:
d8 k get nni -l network.deckhouse.io/nic-pci-type=PF

Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE VF/PF Binding Driver Vendor AGE
worker-01-nic-0000:17:00.0 Deckhouse worker-01 NIC ens3f0 3 Up PF NetDev ixgbe Intel 35d
worker-01-nic-0000:17:00.1 Deckhouse worker-01 NIC ens3f1 4 Up PF NetDev ixgbe Intel 35d
worker-02-nic-0000:17:00.0 Deckhouse worker-02 NIC ens3f0 3 Up PF NetDev ixgbe Intel 35d
worker-02-nic-0000:17:00.1 Deckhouse worker-02 NIC ens3f1 4 Up PF NetDev ixgbe Intel 35d
Label the interfaces that will be used for UnderlayNetwork:
d8 k label nni worker-01-nic-0000:17:00.0 nic-group=dpdk
d8 k label nni worker-01-nic-0000:17:00.1 nic-group=dpdk
d8 k label nni worker-02-nic-0000:17:00.0 nic-group=dpdk
d8 k label nni worker-02-nic-0000:17:00.1 nic-group=dpdk

You can check the PCI information and SR-IOV support status for each interface:

d8 k get nni worker-01-nic-0000:17:00.0 -o json | jq '.status.nic.pci.pf'

Look for status.nic.pci.pf.sriov.supported to verify SR-IOV support.
Creating UnderlayNetwork in Dedicated mode
In Dedicated mode, each Physical Function is exposed as an exclusive device. This mode is suitable when:
- SR-IOV is not available or not needed.
- Each pod needs exclusive access to a complete PF.
Example configuration:
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-dedicated-network
spec:
  mode: Dedicated
  autoBonding: false
  memberNodeNetworkInterfaces:
    - labelSelector:
        matchLabels:
          nic-group: dpdk

When autoBonding is set to true, all matched PFs on a node are grouped into a single DRA device, exposing all PFs to the pod as separate interfaces. When false, each PF is published as a separate DRA device.
Check the status of the created UnderlayNetwork:
d8 k get underlaynetwork dpdk-dedicated-network -o yaml

Example status of UnderlayNetwork in Dedicated mode
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-dedicated-network
  ...
status:
  observedGeneration: 1
  conditions:
    - message: All 2 member node network interface selectors have matches
      observedGeneration: 1
      reason: AllInterfacesAvailable
      status: "True"
      type: InterfacesAvailable

Creating UnderlayNetwork in Shared mode
In Shared mode, Virtual Functions (VF) are created from Physical Functions (PF) using SR-IOV, allowing multiple pods to share the same hardware. This mode requires SR-IOV support on the NICs.
Example configuration:
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-shared-network
spec:
  mode: Shared
  autoBonding: true
  memberNodeNetworkInterfaces:
    - labelSelector:
        matchLabels:
          nic-group: dpdk
  shared:
    sriov:
      enabled: true
      numVFs: 8

In this example:

- mode: Shared enables SR-IOV and VF creation.
- autoBonding: true groups one VF from each matched PF into a single DRA device.
- shared.sriov.enabled: true enables SR-IOV on the selected PFs.
- shared.sriov.numVFs: 8 creates 8 Virtual Functions per Physical Function.
The mode and autoBonding fields are immutable once set. Plan your configuration carefully before creating the resource.
After creating the UnderlayNetwork, monitor the SR-IOV configuration status:
d8 k get underlaynetwork dpdk-shared-network -o yaml

Example status of UnderlayNetwork in Shared mode
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-shared-network
  ...
status:
  observedGeneration: 1
  sriov:
    supportedNICs: 4
    enabledNICs: 4
  conditions:
    - lastTransitionTime: "2025-01-15T10:30:00Z"
      message: SR-IOV configured on 4 NICs
      reason: SRIOVConfigured
      status: "True"
      type: SRIOVConfigured
    - lastTransitionTime: "2025-01-15T10:30:05Z"
      message: Interfaces are available for allocation
      reason: InterfacesAvailable
      status: "True"
      type: InterfacesAvailable

You can verify that VFs have been created by checking NodeNetworkInterface resources:

d8 k get nni -l network.deckhouse.io/nic-pci-type=VF

Preparing namespaces for UnderlayNetwork usage
Before users can request UnderlayNetwork devices in their pods, the namespace must be labeled to enable UnderlayNetwork support. This is an administrative task that should be done for namespaces where DPDK applications will run:
d8 k label namespace mydpdk direct-nic-access.network.deckhouse.io/enabled=""
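Once the namespace is labeled, pods can request the devices through the standard Kubernetes DRA API. The sketch below shows the general pattern only: the deviceClassName and image are hypothetical placeholders, not names published by this module; consult the User Guide for the actual device class associated with your UnderlayNetwork.

```yaml
# Hypothetical sketch using the standard Kubernetes DRA API (resource.k8s.io).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: dpdk-nic-claim
  namespace: mydpdk
spec:
  spec:
    devices:
      requests:
        - name: nic
          deviceClassName: example-underlay-nic # Hypothetical; see the User Guide for the real device class.
---
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
  namespace: mydpdk
spec:
  resourceClaims:
    - name: nic
      resourceClaimTemplateName: dpdk-nic-claim
  containers:
    - name: app
      image: example.com/dpdk-app:latest # Hypothetical image.
      resources:
        claims:
          - name: nic # Grants this container access to the allocated device.
```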