The module lifecycle stage: General Availability
The module has installation requirements.
The module performs automated cluster security checks according to the CIS Kubernetes Benchmark specification.
Version
The module implements checks according to CIS Kubernetes Benchmark v1.23 specification.
This documentation describes the external operator-trivy module available starting from DKP 1.75.
DKP 1.74 and earlier used a built-in module with Trivy v0.55.2.
Component versions:
| Component | Version |
|---|---|
| Trivy | v0.67.2 |
| Trivy Operator | v0.29.0 |
| k8s-node-collector | v0.3.1 |
Note on version freshness: the versions in this table correspond to the current module release and are updated together with the module. For the source of truth in a particular cluster, check the images that are actually running:
```shell
d8 k -n d8-operator-trivy get pods -o json | \
jq -r '.items[] | .metadata.name as $p | .spec.containers[] | "\($p)\t\(.name)\t\(.image)"'
```

Check categories
CIS Kubernetes Benchmark checks are grouped into the following categories:
| Category | CIS Section | Identifier | Description |
|---|---|---|---|
| Control Plane | 1.x | AVD-KCV-* | API server, controller manager, scheduler configuration |
| etcd | 2.x | AVD-KCV-* | etcd security settings |
| Control Plane Configuration | 3.x | — | Authentication and logging (manual checks) |
| Worker Nodes | 4.x | AVD-KCV-* | Kubelet configuration and file permissions |
| Policies | 5.x | AVD-KSV-* | RBAC, Pod Security Standards, network policies |
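The section prefix of each control ID can be used to tally results per category. Below is a minimal sketch that groups report entries by CIS section with jq; the inline JSON is a trimmed, hypothetical sample of the report structure (against a live cluster, feed `d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson` into the same filter):

```shell
# Trimmed sample mirroring .status.detailReport.results; the ids and
# results here are illustrative, not taken from a real cluster.
report='{"status":{"detailReport":{"results":[
  {"id":"1.2.16","checks":[{"success":true}]},
  {"id":"4.1.1","checks":[{"success":false}]},
  {"id":"5.2.2","checks":[{"success":false},{"success":true}]}]}}}'

echo "$report" | jq -r '
  .status.detailReport.results
  | group_by(.id | split(".")[0])   # group controls by CIS section
  | .[]
  | "\(.[0].id | split(".")[0]).x failed_checks=\([.[].checks[] | select(.success | not)] | length)"'
```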
Exclusions
In Deckhouse Kubernetes Platform (DKP), some checks are disabled or excluded from reports. This is due to platform architecture and system component requirements.
Note: this document describes the current CIS implementation in DKP and applicable exclusions. CIS results depend on cluster configuration and installed components, therefore 100% PASS across all controls is not guaranteed.
Globally disabled checks
The following checks are disabled for all cluster resources:
CIS 1.2.1 — Anonymous auth (AVD-KCV-0001)
Check: Ensure that the --anonymous-auth argument is set to false.
Status: Completely disabled.
Reason: The Trivy/defsec CIS 1.2.1 implementation is based on inspecting the legacy --anonymous-auth flag. On newer Kubernetes versions, anonymous authentication configuration may be managed differently (for example, via AuthenticationConfiguration), therefore a flag-based check may produce false FAIL/WARN results (including in kube-bench). For auditing, rely on the actual kube-apiserver configuration and access controls in your environment.
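One way to audit this manually is to look at the flags actually passed to kube-apiserver. The sketch below filters the container command for the flag with jq; the inline JSON is a hypothetical, trimmed pod list, and the label selector in the comment is an assumption to adjust for your setup:

```shell
# Hypothetical trimmed pod list; on a real cluster, replace the echo with:
#   d8 k -n kube-system get pods -l component=kube-apiserver -o json
pods='{"items":[{"metadata":{"name":"kube-apiserver-master-0"},
 "spec":{"containers":[{"name":"kube-apiserver",
  "command":["kube-apiserver","--anonymous-auth=false","--authorization-mode=Node,RBAC"]}]}}]}'

# Print the anonymous-auth flag (if any) from each apiserver container.
echo "$pods" | jq -r '
  .items[].spec.containers[].command[]?
  | select(startswith("--anonymous-auth"))'
```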
CIS 1.2.13 — SecurityContextDeny (AVD-KCV-0013)
Check: Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used.
Status: Completely disabled.
Reason: The SecurityContextDeny admission controller was deprecated and completely removed in Kubernetes v1.30. This controller blocked pods with privileged settings, but its approach was too coarse and didn’t allow granular policy configuration.
Alternative in DKP: Deckhouse uses Pod Security Standards implemented in the admission-policy-engine module based on OPA Gatekeeper. This allows:
- Applying `Privileged`, `Baseline`, or `Restricted` policies at the namespace level
- Creating exceptions for specific workloads
- Flexible rule configuration per organization requirements
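For example, a namespace-level policy is selected with a label on the namespace. Treat this fragment as a hedged sketch: the `security.deckhouse.io/pod-policy` label name and its accepted values should be verified against the admission-policy-engine module documentation.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    # Label consumed by admission-policy-engine (verify the exact
    # name and values in the module documentation).
    security.deckhouse.io/pod-policy: restricted
```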
Checks disabled for system namespaces
The following checks are disabled for the `kube-system` and `d8-*` namespaces (all Deckhouse namespaces start with the `d8-` prefix).
These checks are fully enforced for user namespaces and help identify insecure configurations.
Why do system components need privileges?
A Kubernetes cluster requires components that work at a low level: network management, storage, host resource monitoring. These components by their nature cannot run in an isolated container without privileges.
| CIS ID | Check ID | Check | Examples in DKP |
|---|---|---|---|
| 5.2.2 | AVD-KSV-0017 | Minimize the admission of privileged containers | DaemonSet/d8-cni-cilium/agent (privileged: true), DaemonSet/d8-cloud-instance-manager/fencing-agent-* (privileged: true) |
| 5.2.3 | AVD-KSV-0010 | Minimize the admission of containers wishing to share the host process ID namespace | DaemonSet/d8-monitoring/node-exporter (hostPID: true), DaemonSet/d8-monitoring/ebpf-exporter (hostPID: true) |
| 5.2.5 | AVD-KSV-0009 | Minimize the admission of containers wishing to share the host network namespace | DaemonSet/d8-monitoring/node-exporter (hostNetwork: true) |
| 5.2.6 | AVD-KSV-0001 | Minimize the admission of containers with allowPrivilegeEscalation | DaemonSet/d8-cni-cilium/agent (allowPrivilegeEscalation: true), DaemonSet/d8-istio/ztunnel (allowPrivilegeEscalation: true) |
| 5.2.7 | AVD-KSV-0012 | Minimize the admission of root containers | DaemonSet/d8-istio/ztunnel (runAsUser: 0) |
| 5.2.8 | AVD-KSV-0022 | Minimize the admission of containers with the NET_RAW capability | DaemonSet/d8-istio/ztunnel (capabilities: NET_RAW) |
| 5.2.12 | AVD-KSV-0023 | Minimize the admission of HostPath volumes | DaemonSet/d8-monitoring/node-exporter (hostPath: /, /etc/containerd, /var/run/node-exporter-textfile) |
| 5.2.13 | AVD-KSV-0024 | Minimize the admission of containers which use HostPorts | DaemonSet/d8-ingress-nginx/controller-* (hostPort: 80/443 in HostPort/Failover modes) |
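Since these checks remain enforced for user namespaces, it can be useful to scan your own workloads for the same traits. A sketch that flags pods using privileged mode or host namespaces; the inline JSON is a hypothetical sample (on a real cluster, pipe `d8 k get pods -A -o json` into the same filter):

```shell
# Hypothetical sample pod list illustrating the fields the 5.2.x checks look at.
pods='{"items":[
 {"metadata":{"namespace":"app","name":"web"},
  "spec":{"containers":[{"name":"web","securityContext":{"allowPrivilegeEscalation":false}}]}},
 {"metadata":{"namespace":"app","name":"debug"},
  "spec":{"hostPID":true,
   "containers":[{"name":"sh","securityContext":{"privileged":true}}]}}]}'

# Print namespace/name of every pod with hostPID, hostNetwork, or a privileged container.
echo "$pods" | jq -r '
  .items[]
  | select((.spec.hostPID == true) or (.spec.hostNetwork == true)
           or any(.spec.containers[]; .securityContext.privileged == true))
  | "\(.metadata.namespace)/\(.metadata.name)"'
```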
Partial exclusions
Specific exclusions apply to individual resources:
CIS 5.7.4 — Default namespace (AVD-KSV-0110)
Check: The default namespace should not be used.
Exclusion: Resource default/service-kubernetes.
Reason: The `kubernetes` service in the `default` namespace is a built-in Kubernetes API server service. It is created automatically during cluster initialization and cannot be moved to another namespace; this is standard, upstream-documented Kubernetes behavior.
CIS 5.7.3 — Security Context (AVD-KSV-0020, AVD-KSV-0021)
Check: Apply Security Context to Your Pods and Containers.
Exclusion: ReplicaSets in d8-* namespaces.
Reason: The checks require runAsUser > 10000 and runAsGroup > 10000 (that is, an unprivileged UID/GID). DKP includes system workloads that run as root (runAsUser/runAsGroup = 0) or do not set runAsUser/runAsGroup explicitly. This can be required for:
- Compatibility with host file permissions
- Correct operation with volumes where permissions are already configured
- Alignment with standard system service practices
How to check which objects are “below the 5.7.3 threshold” in your cluster (readable format):
Short summary (pod-level runAsUser/runAsGroup) for system namespaces d8-*, kube-system, kube-public, kube-node-lease:
```shell
d8 k get deploy,rs,ds,sts -A -o json | jq -r '
.items[]
| select(.metadata.namespace | test("^(d8-)|^(kube-system|kube-public|kube-node-lease)$"))
| . as $o
| ($o.spec.template.spec.securityContext.runAsUser // null) as $u
| ($o.spec.template.spec.securityContext.runAsGroup // null) as $g
| select((($u|tonumber? // -1) <= 10000) or (($g|tonumber? // -1) <= 10000))
| [
  $o.kind,
  $o.metadata.namespace,
  $o.metadata.name,
  ("runAsUser=" + (($u|tostring) // "null")),
  ("runAsGroup=" + (($g|tostring) // "null"))
] | @tsv'
```

Container-level detail to see what is set at the pod level versus the container level:
```shell
d8 k get deploy,rs,ds,sts -A -o json | jq -r '
.items[]
| select(.metadata.namespace | test("^(d8-)|^(kube-system|kube-public|kube-node-lease)$"))
| . as $o
| ($o.spec.template.spec.securityContext.runAsUser // null) as $pu
| ($o.spec.template.spec.securityContext.runAsGroup // null) as $pg
| $o.spec.template.spec.containers[]?
| . as $c
| ($c.securityContext.runAsUser // null) as $cu
| ($c.securityContext.runAsGroup // null) as $cg
| select((($pu|tonumber? // 999999) <= 10000) or (($pg|tonumber? // 999999) <= 10000) or (($cu|tonumber? // 999999) <= 10000) or (($cg|tonumber? // 999999) <= 10000))
| [
  $o.kind,
  $o.metadata.namespace,
  $o.metadata.name,
  ("pod.runAsUser=" + (($pu|tostring) // "null")),
  ("pod.runAsGroup=" + (($pg|tostring) // "null")),
  ("container=" + $c.name),
  ("container.runAsUser=" + (($cu|tostring) // "null")),
  ("container.runAsGroup=" + (($cg|tostring) // "null"))
] | @tsv'
```

Checks excluded from metrics
The following checks are performed, but their results are not displayed in Prometheus metrics or on the Grafana dashboard. This is intentional so the dashboard reflects real security issues, not false positives.
Important: Results of these checks are still saved in ClusterComplianceReport and available for auditing.
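For auditing, a compact list of failing controls can be pulled straight from the report. A sketch over a hypothetical trimmed sample (the `name` field is assumed from the Trivy Operator report schema; on a live cluster, pipe `d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson` into the same filter):

```shell
# Trimmed, illustrative sample of .status.detailReport.results.
report='{"status":{"detailReport":{"results":[
 {"id":"5.1.2","name":"Minimize access to secrets","checks":[{"success":false}]},
 {"id":"5.7.4","name":"The default namespace should not be used","checks":[{"success":true}]}]}}}'

# Print id and name of every control with at least one failed check.
echo "$report" | jq -r '
  .status.detailReport.results[]
  | select([.checks[]? | select(.success | not)] | length > 0)
  | "\(.id)\t\(.name)"'
```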
CIS 5.1.2 — Access to secrets (AVD-KSV-0041)
Check: Minimize access to secrets.
What it checks: Finds all Roles/ClusterRoles with get, list, or watch permissions on the secrets resource.
Why excluded from metrics:
In the original CIS Benchmark, this check is marked as type: manual and scored: false because:
- It’s impossible to automatically determine if secrets access is legitimate
- Any controller working with TLS or credentials requires such access
Examples of roles with legitimate secrets access in DKP (non-exhaustive):
- `d8:ingress-nginx:kruise-role` — managing secrets for ingress controllers
- `d8:node-manager:machine-controller-manager` — access to cloud provider credentials
- `d8:operator-prometheus` — Prometheus operator ClusterRole reads/creates secrets
- `Role/d8-log-shipper/log-shipper` — log-shipper Role reads secrets in `d8-log-shipper`
How to see the full list of roles with secrets access:
```shell
# All Roles/ClusterRoles that have secrets access (get/list/watch)
d8 k get clusterroles,roles -A -o json | jq -r '
.items[]
| select(any(.rules[]?;
    ((.resources // []) | index("secrets")) != null
    and (
      ((.verbs // []) | index("get")) != null
      or ((.verbs // []) | index("list")) != null
      or ((.verbs // []) | index("watch")) != null
    )
  ))
| "\(.kind)\t\(.metadata.namespace // "-")\t\(.metadata.name)"'

# Quick lookup of bindings by role name (example)
role_name="d8:operator-prometheus"
d8 k get clusterrolebindings,rolebindings -A | grep -F "$role_name"
```

How to analyze results:
```shell
# Get the report entries for check 5.1.2
d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson | \
jq '.status.detailReport.results | map(select(.id == "5.1.2")) | .[].checks'
```

Analyze the list and ensure that:
- All roles belong to known system components
- No user roles have excessive access
CIS 5.1.3 — Wildcard use in Roles (AVD-KSV-0044, AVD-KSV-0045, AVD-KSV-0046)
Check: Minimize wildcard use in Roles and ClusterRoles.
What it checks: Finds roles with wildcards (*) in resources, verbs, or apiGroups fields.
Why excluded from metrics:
In the original CIS Benchmark, this check is marked as type: manual and scored: false because:
- `cluster-admin` — a standard Kubernetes role that contains wildcards
- Operators often require broad access to manage diverse resources
Examples of legitimate wildcard use in DKP:
- `cluster-admin` — full access for administrators
- `deckhouse` ClusterRole — managing all Deckhouse modules
- CRD operators — access to all resources in their API group
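To cross-check the report against live RBAC objects, the same wildcard condition can be evaluated with jq, mirroring the secrets-access query shown for 5.1.2. A sketch over a hypothetical sample (on a real cluster, pipe `d8 k get clusterroles,roles -A -o json` into the filter):

```shell
# Hypothetical, trimmed role list for illustration.
roles='{"items":[
 {"kind":"ClusterRole","metadata":{"name":"cluster-admin"},
  "rules":[{"apiGroups":["*"],"resources":["*"],"verbs":["*"]}]},
 {"kind":"ClusterRole","metadata":{"name":"app-reader"},
  "rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["get","list"]}]}]}'

# Print kind and name of every role with a wildcard in resources, verbs, or apiGroups.
echo "$roles" | jq -r '
  .items[]
  | select(any(.rules[]?;
      ((.resources // []) | index("*")) != null
      or ((.verbs // []) | index("*")) != null
      or ((.apiGroups // []) | index("*")) != null))
  | "\(.kind)\t\(.metadata.name)"'
```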
How to analyze results:
```shell
# Get the report entries for check 5.1.3
d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson | \
jq '.status.detailReport.results | map(select(.id == "5.1.3")) | .[].checks'
```

Ensure wildcards are used only in:
- Kubernetes system roles (`cluster-admin`, `admin`, `edit`, `view`)
- Deckhouse component roles (`d8-*`)
- Known operator roles
Known issues
Kubelet TLS certificate check (CIS 4.2.10)
Checks AVD-KCV-0088 and AVD-KCV-0089 may show FAIL status. This is expected behavior: Deckhouse uses automatic kubelet certificate rotation (RotateKubeletServerCertificate) instead of static --tls-cert-file and --tls-private-key-file files.
See control-plane-manager module FAQ for details.
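To confirm that server certificate rotation is actually in effect on a node, the kubelet's effective configuration can be inspected via the node proxy API, for example `d8 k get --raw "/api/v1/nodes/<node-name>/proxy/configz"` (substitute a real node name). A sketch of filtering the relevant fields from such a response, using a hypothetical trimmed sample (field names follow the upstream KubeletConfiguration type):

```shell
# Trimmed, illustrative /configz response.
cfg='{"kubeletconfig":{"serverTLSBootstrap":true,
 "featureGates":{"RotateKubeletServerCertificate":true}}}'

# Extract the two settings relevant to server certificate rotation.
echo "$cfg" | jq '.kubeletconfig
  | {serverTLSBootstrap,
     rotateServerCert: .featureGates.RotateKubeletServerCertificate}'
```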
Manual checks
The following checks require manual verification and are not automated. This aligns with the original CIS Benchmark specification where they are marked as Manual.
| CIS ID | Check | Recommendations |
|---|---|---|
| 3.1.1 | Client certificate authentication should not be used for users | Configure OIDC authentication using the user-authn module. |
| 3.2.1 | Ensure that a minimal audit policy is created | Deckhouse configures a baseline audit policy automatically. See audit documentation. |
| 3.2.2 | Ensure that the audit policy covers key security concerns | Review your audit policy against your organization’s security requirements. |
| 5.3.1 | Ensure that the CNI in use supports Network Policies | Deckhouse uses CNI with Network Policy support (Cilium or Flannel with Calico). |
| 5.4.1 | Prefer using secrets as files over secrets as environment variables | Application architecture recommendation. Audit your workloads. |
| 5.4.2 | Consider external secret storage | Recommended for DKP: use Stronghold (Vault-compatible secret storage) and keep secrets outside Kubernetes. See Stronghold documentation. To sync secrets into Kubernetes, use the secrets-store-integration module. |
| 5.5.1 | Configure Image Provenance using ImagePolicyWebhook | Use the admission-policy-engine module with image signature verification policies. |
| 5.7.1 | Create administrative boundaries between resources using namespaces | Review your namespace structure according to organizational requirements. |
Mapping to kube-bench
DKP uses Trivy Operator for CIS checks instead of the kube-bench utility. Check mapping:
| Check type | Identifier | Execution mechanism | kube-bench equivalent |
|---|---|---|---|
| Control Plane, Worker Nodes | AVD-KCV-* | k8s-node-collector executes commands on nodes | master/node checks |
| Security policies | AVD-KSV-* | Trivy analyzes Kubernetes objects | Not available |
Use CIS Control ID (e.g., 1.2.1, 5.2.2) to compare results.
Viewing results
Grafana
Check results are available on the Security / CIS Kubernetes Benchmark dashboard.
Command line
Get all results:
```shell
d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson | \
jq '.status.detailReport.results'
```

Get only failed checks:

```shell
d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson | \
jq '.status.detailReport.results | map(select(.checks | map(.success) | all | not))'
```

Get results for a specific check:

```shell
d8 k get clustercompliancereports.aquasecurity.github.io cis -ojson | \
jq '.status.detailReport.results | map(select(.id == "5.7.3"))'
```

Check schedule
CIS Benchmark checks run:
- every 6 hours (cron: `0 */6 * * *`);
- on operator startup.
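The effective schedule and the time of the last run can be read from the report object itself, e.g. `d8 k get clustercompliancereports.aquasecurity.github.io cis -o jsonpath='{.spec.cron} {.status.updateTimestamp}'` (the `updateTimestamp` field name is assumed from the Trivy Operator report schema). The same extraction with jq over a hypothetical trimmed sample:

```shell
# Trimmed, illustrative ClusterComplianceReport object.
report='{"spec":{"cron":"0 */6 * * *"},
 "status":{"updateTimestamp":"2025-01-01T06:00:00Z"}}'

echo "$report" | jq -r '"cron=\(.spec.cron)  lastUpdate=\(.status.updateTimestamp)"'
```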