The module lifecycle stage: Preview
Introduction
This guide describes the process of creating and modifying resources to manage a software-defined network.
Preparing the cluster for module use
Initial infrastructure setup:

- For creating additional networks based on tagged VLANs:
  - Allocate VLAN ID ranges on the data center switches and configure them on the corresponding switch interfaces.
  - Select physical interfaces on the nodes for subsequent configuration of tagged VLAN interfaces. You can reuse interfaces already used by the DKP local network.
- For creating additional networks based on direct, untagged access to a network interface:
  - Reserve separate physical interfaces on the nodes and connect them into a single local network at the data center level.
After enabling the module, NodeNetworkInterface resources will automatically appear in the cluster, reflecting the current state of the nodes.
To list these resources, use the command:
d8 k get nodenetworkinterface
Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
virtlab-ap-0-nic-1c61b4a68c2a Deckhouse virtlab-ap-0 NIC eth1 3 Up 35d
virtlab-ap-0-nic-fc34970f5d1f Deckhouse virtlab-ap-0 NIC eth0 2 Up 35d
virtlab-ap-1-nic-1c61b4a6a0e7 Deckhouse virtlab-ap-1 NIC eth1 3 Up 35d
virtlab-ap-1-nic-fc34970f5c8e Deckhouse virtlab-ap-1 NIC eth0 2 Up 35d
virtlab-ap-2-nic-1c61b4a6800c Deckhouse virtlab-ap-2 NIC eth1 3 Up 35d
virtlab-ap-2-nic-fc34970e7ddb Deckhouse virtlab-ap-2 NIC eth0 2 Up 35d
When discovering node interfaces, the controller applies the following service labels and annotations:
labels:
  network.deckhouse.io/interface-mac-address: fa163eebea7b
  network.deckhouse.io/interface-type: VLAN
  network.deckhouse.io/vlan-id: 900
  network.deckhouse.io/node-name: worker-01
annotations:
  network.deckhouse.io/heritage: NetworkController
In this example, each cluster node has two network interfaces: eth0 (DKP local network) and eth1 (dedicated interface for additional networks).
Next, label the reserved interfaces with a tag that marks them for use by additional networks:
d8 k label nodenetworkinterface virtlab-ap-0-nic-1c61b4a68c2a nic-group=extra
d8 k label nodenetworkinterface virtlab-ap-1-nic-1c61b4a6a0e7 nic-group=extra
d8 k label nodenetworkinterface virtlab-ap-2-nic-1c61b4a6800c nic-group=extra
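To verify that the label has been applied, you can list the interfaces using a standard Kubernetes label selector:

d8 k get nodenetworkinterface -l nic-group=extra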
Additionally, to increase bandwidth, you can combine multiple physical interfaces into one virtual interface (Bond).
Note: A Bond interface can only be created between NIC interfaces that are located on the same physical or virtual host.
Example of configuring a Bond interface:
The nodenetworkinterface resource can be abbreviated to nni.
Set custom labels on the interfaces that will be combined into the Bond interface:
d8 k label nni right-worker-b23d3a26-5fb4b-f545g-nic-fa163efbde48 nni.example.com/bond-group=bond0
d8 k label nni right-worker-b23d3a26-5fb4b-f545g-nic-fa40asdxzx78 nni.example.com/bond-group=bond0
Prepare a configuration for the Bond interface and apply it:
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterface
metadata:
  name: nni-worker-01-bond0
spec:
  nodeName: worker-01
  type: Bond
  heritage: Manual
  bond:
    bondName: bond0
    memberNetworkInterfaces:
      - labelSelector:
          matchLabels:
            network.deckhouse.io/node-name: worker-01 # Service label: restricts member selection to interfaces on this specific node.
            nni.example.com/bond-group: bond0 # Custom label set manually on the selected interfaces (see the label commands above).
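For example, assuming the manifest is saved as bond-nni.yaml (the file name is illustrative):

d8 k apply -f bond-nni.yaml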
Example of checking the status of the created Bond interface
To obtain a list of Bond interfaces, use the command:
d8 k get nni
Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
nni-worker-01-bond0 Manual worker-01-b23d3a26-5fb4b-5s9fp Bond bond0 76 Up 7m48s
...
To obtain information about the interface status, use the command:
d8 k get nni nni-worker-01-bond0 -o yaml
Example of interface status:
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterface
metadata:
  ...
status:
  conditions:
    - lastProbeTime: "2025-09-30T09:00:54Z"
      lastTransitionTime: "2025-09-30T09:00:39Z"
      message: Interface created
      reason: Created
      status: "True"
      type: Exists
    - lastProbeTime: "2025-09-30T09:00:54Z"
      lastTransitionTime: "2025-09-30T09:00:39Z"
      message: Interface is up and ready to send packets
      reason: Up
      status: "True"
      type: Operational
  deviceMAC: 6a:c7:ab:2a:a6:1e
  groupedLinks:
    - deviceMAC: fa:16:3e:92:14:40
      type: NIC
  ifIndex: 76
  ifName: bond0
  managedBy: Manual
  operationalState: Up
  permanentMAC: ""
Configuring and connecting additional virtual networks for use in application pods
Administrative resources
ClusterNetwork
To create a network available to all projects, use the ClusterNetwork resource.
Example for a network based on tagged traffic:
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  name: my-cluster-network
spec:
  type: VLAN
  vlan:
    id: 900
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra # Manually applied label on NodeNetworkInterface resources.
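Apply the manifest (the file name my-cluster-network.yaml is illustrative):

d8 k apply -f my-cluster-network.yaml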
After creating the ClusterNetwork, you can check its status with the command:
d8 k get clusternetworks.network.deckhouse.io my-cluster-network -o yaml
Example of the status of a `ClusterNetwork` resource
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  ...
status:
  bridgeName: d8-br-900
  conditions:
    - lastTransitionTime: "2025-09-29T14:39:20Z"
      message: All node interface attachments are ready
      reason: AllNodeInterfaceAttachmentsAreReady
      status: "True"
      type: AllNodeAttachementsAreReady
    - lastTransitionTime: "2025-09-29T14:39:20Z"
      message: Network is operational
      reason: NetworkReady
      status: "True"
      type: Ready
  nodeAttachementsCount: 1
  observedGeneration: 1
  readyNodeAttachementsCount: 1
After you create a Network or ClusterNetwork, the controller creates a NodeNetworkInterfaceAttachment tracking resource to link it to a NodeNetworkInterface. You can check its status and readiness by running the following commands:
d8 k get nnia
d8 k get nnia my-cluster-network-... -o yaml
Sample `NodeNetworkInterfaceAttachment` resource
apiVersion: network.deckhouse.io/v1alpha1
kind: NodeNetworkInterfaceAttachment
metadata:
  ...
  finalizers:
    - network.deckhouse.io/nni-network-interface-attachment
    - network.deckhouse.io/pod-network-interface-attachment
  generation: 1
  name: my-cluster-network-...
  ...
spec:
  networkRef:
    kind: ClusterNetwork
    name: my-cluster-network
  parentNetworkInterfaceRef:
    name: right-worker-b23d3a26-5fb4b-h2bkv-nic-fa163eebea7b
  type: VLAN
status:
  bridgeNodeNetworkInterfaceName: right-worker-b23d3a26-5fb4b-h2bkv-bridge-900
  conditions:
    - lastTransitionTime: "2025-09-29T14:39:06Z"
      message: Vlan created
      reason: VLANCreated
      status: "True"
      type: Exist
    - lastTransitionTime: "2025-09-29T14:39:06Z"
      message: Bridged successfully
      reason: VLANBridged
      status: "True"
      type: Ready
  nodeName: right-worker-b23d3a26-5fb4b-h2bkv
  vlanNodeNetworkInterfaceName: right-worker-b23d3a26-5fb4b-h2bkv-vlan-900-60f3dc
When you create a Network or ClusterNetwork resource of the VLAN type, the system first creates the VLAN interface and then connects it to the bridge.
After both interfaces (VLAN and Bridge) appear in the system and switch to the Up state, the statuses of all NodeNetworkInterfaceAttachment resources change to True.
To check the status of NodeNetworkInterface, use the command:
d8 k get nni
Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE AGE
...
right-worker-b23d3a26-5fb4b-h2bkv-bridge-900 Deckhouse right-worker-b23d3a26-5fb4b-h2bkv Bridge d8-br-900 684 Up 14h
right-worker-b23d3a26-5fb4b-h2bkv-nic-fa163eebea7b Deckhouse right-worker-b23d3a26-5fb4b-h2bkv NIC ens3 2 Up 19d
right-worker-b23d3a26-5fb4b-h2bkv-vlan-900-60f3dc Deckhouse right-worker-b23d3a26-5fb4b-h2bkv VLAN ens3.900 683 Up 14h
...
Example for a network based on direct interface access:
apiVersion: network.deckhouse.io/v1alpha1
kind: ClusterNetwork
metadata:
  name: my-cluster-network
spec:
  type: Access
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra # Manually applied label on NodeNetworkInterface resources.
NetworkClass
The NetworkClass resource allows users to create their own dedicated networks based on tagged traffic while preventing them from affecting the infrastructure. It provides:
- Restriction of the set of physical network devices on the nodes.
- Limitation of the VLAN ID ranges available to users.
Example:
apiVersion: network.deckhouse.io/v1alpha1
kind: NetworkClass
metadata:
  name: my-network-class
spec:
  vlan:
    idPool:
      - 600-800
      - 1200
  parentNodeNetworkInterfaces:
    labelSelector:
      matchLabels:
        nic-group: extra
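Apply the manifest and check that the resource was created. The file name is illustrative, and the full resource name below assumes the same naming pattern as clusternetworks.network.deckhouse.io shown earlier:

d8 k apply -f my-network-class.yaml
d8 k get networkclasses.network.deckhouse.io my-network-class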
Configuring physical interfaces for direct attachment to application pods
The UnderlayNetwork resource enables direct hardware device passthrough to pods via Kubernetes Dynamic Resource Allocation (DRA). This allows DPDK applications and other high-performance workloads to access physical network interfaces (PF/VF) directly, bypassing the kernel network stack.
Prerequisites for DPDK applications
Before configuring UnderlayNetwork resources, you need to prepare the worker nodes for DPDK applications:
Configuring hugepages
DPDK applications require hugepages for efficient memory management. Configure hugepages on all worker nodes using NodeGroupConfiguration:
apiVersion: deckhouse.io/v1alpha1
kind: NodeGroupConfiguration
metadata:
  name: hugepages-for-dpdk
spec:
  nodeGroups:
    - "*" # Apply to all node groups.
  weight: 100
  content: |
    #!/bin/bash
    echo "vm.nr_hugepages = 4096" > /etc/sysctl.d/99-hugepages.conf
    sysctl -p /etc/sysctl.d/99-hugepages.conf
This configuration sets vm.nr_hugepages = 4096 on all nodes, providing 8 GiB of hugepages (4096 pages × 2 MiB per page).
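To verify the allocation directly on a node after the script runs, you can check /proc/meminfo (a standard Linux check, independent of the module); HugePages_Total should reach 4096:

grep HugePages /proc/meminfo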
Configuring Topology Manager
For optimal performance, enable Topology Manager on NodeGroups of worker nodes where DPDK applications will run. This ensures that CPU, memory, and device resources are allocated from the same NUMA node.
Example NodeGroup configuration:
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  kubelet:
    topologyManager:
      enabled: true
      policy: SingleNumaNode
      scope: Container
  nodeType: Static
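After the change is applied, you can inspect the NodeGroup to confirm the kubelet settings:

d8 k get nodegroup worker -o yaml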
Prerequisites
Before creating an UnderlayNetwork, ensure that:
- Physical network interfaces (NICs) are available on the nodes and are discovered as NodeNetworkInterface resources.
- The interfaces you plan to use are Physical Functions (PF), not Virtual Functions (VF).
- For Shared mode, the NICs must support SR-IOV.
Preparing NodeNetworkInterface resources
First, check which Physical Functions are available on your nodes:
d8 k get nni -l network.deckhouse.io/nic-pci-type=PF
Example output:
NAME MANAGEDBY NODE TYPE IFNAME IFINDEX STATE VF/PF Binding Driver Vendor AGE
worker-01-nic-0000:17:00.0 Deckhouse worker-01 NIC ens3f0 3 Up PF NetDev ixgbe Intel 35d
worker-01-nic-0000:17:00.1 Deckhouse worker-01 NIC ens3f1 4 Up PF NetDev ixgbe Intel 35d
worker-02-nic-0000:17:00.0 Deckhouse worker-02 NIC ens3f0 3 Up PF NetDev ixgbe Intel 35d
worker-02-nic-0000:17:00.1 Deckhouse worker-02 NIC ens3f1 4 Up PF NetDev ixgbe Intel 35d
Label the interfaces that will be used for UnderlayNetwork:
d8 k label nni worker-01-nic-0000:17:00.0 nic-group=dpdk
d8 k label nni worker-01-nic-0000:17:00.1 nic-group=dpdk
d8 k label nni worker-02-nic-0000:17:00.0 nic-group=dpdk
d8 k label nni worker-02-nic-0000:17:00.1 nic-group=dpdk
You can check the PCI information and SR-IOV support status for each interface:
d8 k get nni worker-01-nic-0000:17:00.0 -o json | jq '.status.nic.pci.pf'
Look for status.nic.pci.pf.sriov.supported to verify SR-IOV support.
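For example, using an interface name from the listing above:

d8 k get nni worker-01-nic-0000:17:00.0 -o json | jq '.status.nic.pci.pf.sriov.supported'

The command returns true for NICs that support SR-IOV.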
Creating UnderlayNetwork in Dedicated mode
In Dedicated mode, each Physical Function is exposed as an exclusive device. This mode is suitable when:
- SR-IOV is not available or not needed
- Each pod needs exclusive access to a complete PF
Example configuration:
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-dedicated-network
spec:
  mode: Dedicated
  autoBonding: false
  memberNodeNetworkInterfaces:
    - labelSelector:
        matchLabels:
          nic-group: dpdk
When autoBonding is set to true, all matched PFs on a node are grouped into a single DRA device, exposing all PFs to the pod as separate interfaces. When false, each PF is published as a separate DRA device.
Check the status of the created UnderlayNetwork:
d8 k get underlaynetwork dpdk-dedicated-network -o yaml
Example status of UnderlayNetwork in Dedicated mode
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-dedicated-network
  ...
status:
  observedGeneration: 1
  conditions:
    - message: All 2 member node network interface selectors have matches
      observedGeneration: 1
      reason: AllInterfacesAvailable
      status: "True"
      type: InterfacesAvailable
Creating UnderlayNetwork in Shared mode
In Shared mode, Virtual Functions (VF) are created from Physical Functions (PF) using SR-IOV, allowing multiple pods to share the same hardware. This mode requires SR-IOV support on the NICs.
Example configuration:
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-shared-network
spec:
  mode: Shared
  autoBonding: true
  memberNodeNetworkInterfaces:
    - labelSelector:
        matchLabels:
          nic-group: dpdk
  shared:
    sriov:
      enabled: true
      numVFs: 8
In this example:

- `mode: Shared` enables SR-IOV and VF creation.
- `autoBonding: true` groups one VF from each matched PF into a single DRA device.
- `shared.sriov.enabled: true` enables SR-IOV on the selected PFs.
- `shared.sriov.numVFs: 8` creates 8 Virtual Functions per Physical Function.
The `mode` and `autoBonding` fields are immutable once set. Plan your configuration carefully before creating the resource.
After creating the UnderlayNetwork, monitor the SR-IOV configuration status:
d8 k get underlaynetwork dpdk-shared-network -o yaml
Example status of UnderlayNetwork in Shared mode
apiVersion: network.deckhouse.io/v1alpha1
kind: UnderlayNetwork
metadata:
  name: dpdk-shared-network
  ...
status:
  observedGeneration: 1
  sriov:
    supportedNICs: 4
    enabledNICs: 4
  conditions:
    - lastTransitionTime: "2025-01-15T10:30:00Z"
      message: SR-IOV configured on 4 NICs
      reason: SRIOVConfigured
      status: "True"
      type: SRIOVConfigured
    - lastTransitionTime: "2025-01-15T10:30:05Z"
      message: Interfaces are available for allocation
      reason: InterfacesAvailable
      status: "True"
      type: InterfacesAvailable
You can verify that VFs have been created by checking NodeNetworkInterface resources:
d8 k get nni -l network.deckhouse.io/nic-pci-type=VF
Preparing namespaces for UnderlayNetwork usage
Before users can request UnderlayNetwork devices in their pods, the namespace must be labeled to enable UnderlayNetwork support. This is an administrative task that should be done for namespaces where DPDK applications will run:
d8 k label namespace mydpdk direct-nic-access.network.deckhouse.io/enabled=""
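To verify that the label is set, you can use a standard label check:

d8 k get namespace mydpdk --show-labels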