Version 1.72
Important
- All DKP components will be restarted during the update.
- To use experimental modules in the cluster, you now need to explicitly enable the `allowExperimentalModules` parameter. By default, experimental modules are disabled. Modules that were enabled before the update will not be automatically disabled. However, if an experimental module enabled prior to the update is manually disabled during the update process, you will need to grant permission to use experimental modules again in order to re-enable it.
- If there are WireGuard interfaces on the cluster nodes, you must update the Linux kernel to version 6.8 or higher.
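For the `allowExperimentalModules` switch mentioned above, a minimal sketch might look like the following. Placing the parameter under `settings` of the `deckhouse` ModuleConfig (and the `version` value) are assumptions; check the release documentation for the exact location.

```yaml
# Hypothetical sketch: allowing experimental modules cluster-wide.
# The placement of allowExperimentalModules and the version value are assumptions.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  version: 1
  settings:
    allowExperimentalModules: true
```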
Major changes
- Added a new registry module and the ability to adjust container registry parameters without restarting all DKP components. Two modes for working with the container registry are now available in DKP: `Unmanaged` (the approach used in previous versions) and `Direct` (a new mode). In `Direct` mode, DKP creates a virtual container registry address in the cluster that all DKP components use. Changing the container registry address (for example, switching to a different registry or changing the DKP edition in the cluster) in this mode does not trigger a forced restart of all DKP components.
- Added support for recursive DNS servers (configured via the `recursiveSettings` section of the `cert-manager` module). They are used to verify the existence of a DNS record before starting the ACME DNS-01 domain ownership validation process. This is useful if the same domain is used both publicly and within the cluster, or if the domain has dedicated authoritative DNS servers.
- Introduced separation of modules into critical and functional using the `critical` flag in `module.yaml`. Critical modules are started first. Functional modules are started after the bootstrap process is complete. Their tasks run in parallel and do not block the queue in case of failure. This speeds up cluster installation and improves fault tolerance when starting modules.
- You can now enable logging of all DNS queries (the `enableLogs` parameter of the `node-local-dns` module).
- In the `cloud-provider-vcd` module, a new WithNAT layout has been added for cluster deployment. It automatically configures NAT and, if necessary, firewall rules for accessing nodes through a bastion host. It also supports both `NSX-T` and `NSX-V`. This makes it possible to deploy a cluster in VMware Cloud Director without pre-configuring the environment (unlike the `Standard` layout).
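For the DNS query logging item above, a minimal sketch of enabling `enableLogs` via a ModuleConfig; the `version` value is an assumption:

```yaml
# Sketch: enabling logging of all DNS queries in node-local-dns.
# The ModuleConfig version value is an assumption.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: node-local-dns
spec:
  enabled: true
  version: 1
  settings:
    enableLogs: true
```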
Security
- Added the fields `user-authn.deckhouse.io/name` and `user-authn.deckhouse.io/preferred_username` to Kubernetes audit log events. These fields display user claims from the OIDC provider, improving authentication monitoring and troubleshooting.
- Kubernetes versions 1.30–1.33 have been updated to the latest patch releases.
- For the AWS provider, added the ability to disable the creation of default security groups (the `disableDefaultSecurityGroup` parameter). When disabled, security groups must be created manually and explicitly specified in AWSClusterConfiguration, AWSInstanceClass, and NodeGroup. This new feature provides greater control over security settings.
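A sketch of the AWS item above. The top-level placement of `disableDefaultSecurityGroup` in AWSClusterConfiguration and the placeholder provider fields are assumptions, not a definitive layout:

```yaml
# Hypothetical sketch: disabling default security group creation for AWS.
# The exact location of disableDefaultSecurityGroup is an assumption.
apiVersion: deckhouse.io/v1
kind: AWSClusterConfiguration
layout: WithoutNAT
disableDefaultSecurityGroup: true
provider:
  providerAccessKeyId: "<ACCESS_KEY>"
  providerSecretAccessKey: "<SECRET_KEY>"
  region: eu-central-1
```

With the default groups disabled, the manually created security group IDs must then be referenced explicitly in AWSInstanceClass and NodeGroup, as the item notes.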
- Added support for password policies for local users (configured in the `passwordPolicy` section). You can now enforce a minimum password complexity, set password expiration, require password rotation, prevent reuse of old passwords, and lock accounts after a specified number of failed login attempts. These changes allow administrators to centrally enforce password requirements and improve cluster security.
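As a sketch of the password policy item above: the `passwordPolicy` section presumably lives in the `user-authn` module settings, but every field name below is an illustrative assumption mirroring the capabilities listed (complexity, expiration, reuse prevention, lockout), not the documented schema:

```yaml
# Hypothetical sketch: a password policy for local users.
# All field names under passwordPolicy are illustrative assumptions.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: user-authn
spec:
  version: 2
  settings:
    passwordPolicy:
      minLength: 12            # minimum password complexity (assumed field)
      expirationDays: 90       # password expiration / rotation (assumed field)
      reusePreventionCount: 5  # prevent reuse of old passwords (assumed field)
      lockoutFailedAttempts: 5 # lock account after failed logins (assumed field)
```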
Component version updates
The following DKP components have been updated:
- Kubernetes control plane: 1.30.14, 1.31.11, 1.32.7, 1.33.3
- cloud-provider-huaweicloud cloud-data-discoverer: v0.6.0
- node-manager capi-controller-manager: 1.10.4
Version 1.71
Important
- Prometheus has been replaced with Deckhouse Prom++. If you want to keep using Prometheus, disable the `prompp` module manually before upgrading DKP by running the command `d8 system module disable prompp`.
- Support for Kubernetes 1.33 has been added, while support for Kubernetes 1.28 has been discontinued. In future DKP releases, support for Kubernetes 1.29 will be removed. The default Kubernetes version (used when the `kubernetesVersion` parameter is set to `Automatic`) has been changed to 1.31.
- Upgrading the cluster to Kubernetes 1.31 requires a sequential update of all nodes, with each node drained. You can control how node updates requiring workload disruptions are applied using the `disruptions` parameter section.
- The built-in `snapshot-controller` and `static-routing-manager` modules will now be replaced with their external counterparts of the same name, sourced via the `deckhouse` ModuleSource.
- The new version of Cilium requires nodes to run Linux kernel version 5.8 or newer. If any node in the cluster has a kernel older than 5.8, the Deckhouse Kubernetes Platform upgrade will be blocked. Cilium Pods will be restarted.
- All DKP components will be restarted during the update.
Major changes
- You can now enforce two-factor authentication for static users. This is configured via the `staticUsers2FA` parameter section of the `user-authn` module.
- Added support for GPUs on nodes. Three GPU resource sharing modes are now available: Exclusive (no sharing), TimeSlicing (time-based sharing), and MIG (a single GPU split into multiple instances). The NodeGroup `spec.gpu` parameter section is used to configure the GPU resource sharing mode. Using a GPU on a node requires installing the NVIDIA Container Toolkit and the GPU driver.
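A sketch of the GPU item above. The `spec.gpu` section and the mode names come from the release note; the inner `sharing` key is an assumption about how the mode is selected:

```yaml
# Hypothetical sketch: selecting a GPU sharing mode for a NodeGroup.
# The "sharing" key name inside spec.gpu is an assumption.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: gpu-workers
spec:
  nodeType: CloudEphemeral
  gpu:
    sharing: TimeSlicing  # one of: Exclusive | TimeSlicing | MIG
```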
- When enabling a module (with `d8 system module enable`) or editing a ModuleConfig resource, a warning is now displayed if multiple module sources are found. In such a case, explicitly specify the module source using the `source` parameter in the module's configuration.
- Improved error handling for module configuration. Module-related errors no longer block DKP operations. Instead, they are now displayed in the status fields of Module and ModuleRelease objects.
- Improved virtualization support:
  - Added a provider for integration with Deckhouse Virtualization Platform (DVP), enabling deployment of DKP clusters on top of DVP.
  - Added support for nested virtualization on nodes in the `cni-cilium` module.
- The `node-manager` module now includes several enhancements for improved node reliability and manageability:
  - You can now prevent a node from restarting if it still hosts critical Pods (labeled with `pod.deckhouse.io/inhibit-node-shutdown`). This can be necessary for workloads with stateful components, such as long-running data migrations.
  - Introduced API version `v1alpha2` for the SSHCredential resource, where the `sudoPasswordEncoded` parameter allows specifying the `sudo` password in Base64 format.
  - The `capiEmergencyBrake` parameter allows you to disable Cluster API (CAPI) in emergency scenarios, preventing potentially destructive changes. Its behavior is similar to the existing `mcmEmergencyBrake` setting.
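For the node-shutdown inhibition above, a sketch of labeling a critical Pod; the label key comes from the release note, while the `"true"` value is an assumption:

```yaml
# Sketch: marking a Pod so its node is not restarted while the Pod runs.
# The label value "true" is an assumption; only the key is documented here.
apiVersion: v1
kind: Pod
metadata:
  name: data-migration
  labels:
    pod.deckhouse.io/inhibit-node-shutdown: "true"
spec:
  containers:
    - name: migrate
      image: registry.example.com/migrations:latest
```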
- Added a pre-installation check to verify connectivity to the DKP container image registry.
- Improved the log file rotation mechanism when using short-term log storage (via the `loki` module). Added the `LokiInsufficientDiskForRetention` alert to warn about insufficient disk space for log retention.
- The documentation now includes a reference for the Deckhouse CLI (`d8` utility) commands and parameters.
- When using CEF encoding for collecting logs from Apache Kafka or socket sources, you can now configure auxiliary CEF fields such as Device Product, Device Vendor, and Device ID.
- The `passwordHash` field in the NodeUser resource is no longer required. This allows you to create users without passwords, for example, in clusters that use external authentication systems (such as PAM or LDAP).
- Added support for CRI Containerd v2 with CgroupsV2. The new version introduces a different configuration format and includes a mechanism to migrate between Containerd v1 and v2. You can change the CRI type used on nodes via the `cri.type` parameter and configure it using `cri.containerdV2`.
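A sketch of the Containerd v2 item above. The `ContainerdV2` value for `cri.type` is an assumption inferred from the parameter names in the note:

```yaml
# Hypothetical sketch: switching a NodeGroup to Containerd v2.
# The ContainerdV2 value is an assumption; cri.containerdV2 would hold
# the version-specific settings mentioned in the release note.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  cri:
    type: ContainerdV2
```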
Security
- Container image signature verification is now available in DKP SE+. The feature is now supported in both the SE+ and EE editions.
- The `log-shipper`, `deckhouse-controller`, and Istio (version 1.21) modules have been migrated to distroless builds. This improves security and ensures a more transparent and controlled build process.
- New audit rules have been added to track interactions with containerd. The following are now monitored: access to the `/run/containerd/containerd.sock` socket, modifications to the `/etc/containerd` and `/var/lib/containerd` directories, and changes to the `/opt/deckhouse/bin/containerd` file.
- Known vulnerabilities have been fixed in the following modules: `loki`, `extended-monitoring`, `operator-prometheus`, `prometheus`, `prometheus-metrics-adapter`, `user-authn`, and `cloud-provider-zvirt`.
Network
- Added support for Istio version 1.25.2, which uses the Sail operator instead of the deprecated Istio Operator. Also added support for Kiali version 2.7, without Ambient Mesh support. Istio version 1.19 is now considered deprecated.
- Added support for encrypting traffic between nodes and Pods using the WireGuard protocol (via the `encryption.mode` parameter).
- Fixed the logic for determining service readiness in the ServiceWithHealthcheck resource. Previously, Pods without an IP address (for example, in the `Pending` state) could be mistakenly included in the load balancing list.
- Added support for the least-conn load balancing algorithm. This algorithm directs traffic to the service backend with the fewest active connections, improving performance for connection-heavy applications (such as WebSocket services). To use this algorithm, enable the `extraLoadBalancerAlgorithmsEnabled` parameter in the `cni-cilium` module settings and set the `service.cilium.io/lb-algorithm` annotation on the service to a supported value: `random`, `maglev`, or `least-conn`.
- Fixed an issue in the Cilium 1.17 `cilium-operator` where IP addresses were not reused after a `CiliumEndpoint` was deleted. The issue was caused by improper cleanup of priority filters, which could lead to IP pool exhaustion in large clusters.
- Refined the list of ports used for networking:
  - Added and updated:
    - `4287/UDP`: WireGuard port used for CNI Cilium traffic encryption.
    - `4295-4297/UDP`: Used by the `cni-cilium` module for VXLAN encapsulation of inter-pod traffic in multiple nested virtualization, that is, when DKP with the `virtualization` module enabled is deployed inside virtual machines that are themselves created in DKP with the `virtualization` module enabled.
    - `4298/UDP`: Used by the `cni-cilium` module for VXLAN encapsulation of traffic between pods if the cluster was deployed on DKP 1.71 or later (for clusters deployed on DKP versions up to 1.71, see the note for ports `4299/UDP`, `8469/UDP`, and `8472/UDP`).
    - `4299/UDP`: Port for clusters deployed on DKP versions 1.64–1.70. Used by the `cni-cilium` module for VXLAN encapsulation of traffic between pods. Updating DKP to newer versions will not change the port used unless the `virtualization` module is enabled.
    - `8469/UDP`: Port for clusters deployed on DKP 1.63 and below with the `virtualization` module enabled prior to DKP 1.63. Used by the `cni-cilium` module for VXLAN encapsulation of traffic between pods. Updating DKP to newer versions will not change the occupied port.
    - `8472/UDP`: Port for clusters deployed on DKP 1.63 and below. Used by the `cni-cilium` module for VXLAN encapsulation of traffic between pods. Updating DKP to newer versions will not change the occupied port if the `virtualization` module is not enabled. Note that in such clusters, enabling the `virtualization` module on DKP before version 1.70 changes the port:
      - Enabling the `virtualization` module on DKP 1.63 and below will change it to `8469/UDP`; the port will not change with subsequent DKP updates.
      - Enabling the `virtualization` module on DKP 1.64 or later will change it to `4298/UDP`; the port will not change with subsequent DKP updates.
  - Removed:
    - `49152/TCP`, `49153/TCP`: Previously used for live migration of virtual machines (in the `virtualization` module). Migration now occurs over the Pod network.
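Putting the least-conn pieces together, a sketch might look like the following; the ModuleConfig `version` value and the example Service fields are assumptions:

```yaml
# Sketch: enable extra load balancing algorithms in cni-cilium
# (the version value is an assumption).
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: cni-cilium
spec:
  version: 1
  settings:
    extraLoadBalancerAlgorithmsEnabled: true
---
# Sketch: request the least-conn algorithm for a connection-heavy service.
apiVersion: v1
kind: Service
metadata:
  name: websocket-backend
  annotations:
    service.cilium.io/lb-algorithm: least-conn
spec:
  selector:
    app: websocket-backend
  ports:
    - port: 80
      targetPort: 8080
```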
Component version updates
The following DKP components have been updated:
- cilium: 1.17.4
- golang.org/x/net: v0.40.0
- etcd: v3.6.1
- terraform-provider-azure: 3.117.1
- Deckhouse CLI: 0.13.2
- Falco: 0.41.1
- falco-ctl: 0.11.2
- gcpaudit: v0.6.0
- Grafana: 10.4.19
- Vertical pod autoscaler: 1.4.1
- dhctl-kube-client: v1.3.1
- cloud-provider-dynamix dynamix-common: v0.5.0
- cloud-provider-dynamix capd-controller-manager: v0.5.0
- cloud-provider-dynamix cloud-controller-manager: v0.4.0
- cloud-provider-dynamix cloud-data-discoverer: v0.6.0
- cloud-provider-huaweicloud huaweicloud-common: v0.5.0
- cloud-provider-huaweicloud caphc-controller-manager: v0.3.0
- cloud-provider-huaweicloud cloud-data-discoverer: v0.5.0
- registry-packages-containerdv2: 2.1.3
- registry-packages-containerdv2-runc: 1.3.0
- cilium envoy-bazel: 6.5.0
- cilium cni-plugins: 1.7.1
- cilium protoc: 30.2
- cilium grpc-go: 1.5.1
- cilium protobuf-go: 1.36.6
- cilium protoc-gen-go-json: 1.5.0
- cilium gops: 0.3.27
- cilium llvm: 18.1.8
- cilium llvm-build-cache: llvmorg-18.1.8-alt-p11-gcc11-v2-180225
- User-authn basic-auth-proxy go: 1.23.0
- Prometheus alerts-reciever go: 1.23.0
- Prometheus memcached_exporter: 0.15.3
- Prometheus mimir: 2.14.3
- Prometheus promxy: 0.0.93
- Extended-monitoring k8s-image-availability-exporter: 0.13.0
- Extended-monitoring x509-certificate-exporter: 3.19.1
- Cilium-hubble hubble-ui: 0.13.2
- Cilium-hubble hubble-ui-frontend-assets: 0.13.2
Version 1.70
Important
- The `ceph-csi` module has been removed. Use the `csi-ceph` module instead. Deckhouse will not be updated as long as `ceph-csi` is enabled in the cluster. For `csi-ceph` migration instructions, refer to the module documentation.
- Version 1.12 of the NGINX Ingress Controller has been added. The default controller version has been changed to 1.10. All Ingress controllers that do not have an explicitly specified version (via the `controllerVersion` parameter in the IngressNginxController resource or the `defaultControllerVersion` parameter in the `ingress-nginx` module) will be restarted.
- The `falco_events` metric (from the `runtime-audit-engine` module) has been removed. The `falco_events` metric had been considered deprecated since DKP 1.68. Use the `falcosecurity_falcosidekick_falco_events_total` metric instead. Dashboards and alerts based on the `falco_events` metric may stop working.
- All DKP components will be restarted during the update.
Major changes
- In the `Auto` update mode, patch version updates (for example, from `v1.70.1` to `v1.70.2`) are now applied taking into account the update windows, if they are set. Previously, in this update mode, only minor version updates (for example, from `v1.69.x` to `v1.70.x`) were applied with consideration for update windows, while patch version updates were applied as soon as they appeared on a release channel.
- A node can now be rebooted if the corresponding Node object has the `update.node.deckhouse.io/reboot` annotation set.
- When cleaning up a static node, any local users created by Deckhouse Kubernetes Platform are now also removed.
- Added synchronization monitoring for Istio in multi-cluster configurations. A new alert `D8IstioRemoteClusterNotSynced` has been introduced and triggers in the following cases:
  - The remote cluster is offline.
  - The remote API endpoint is not reachable.
  - The remote `ServiceAccount` token is invalid or expired.
  - There is a TLS or certificate issue between the clusters.
- The `deckhouse-controller collect-debug-info` command now also collects debug information for Istio, including:
  - Resources in the `d8-istio` namespace.
  - CRDs from the `istio.io` and `gateway.networking.k8s.io` groups.
  - Istio logs.
  - Sidecar logs of a single randomly selected user application.
- A new monitoring dashboard has been added to display OpenVPN certificate status. Upon expiration, server certificates will now be reissued, and client certificates will be removed. The following alerts have been added:
  - `OpenVPNClientCertificateExpired`: Warns about expired client certificates.
  - `OpenVPNServerCACertificateExpired`: Warns about an expired OpenVPN CA certificate.
  - `OpenVPNServerCACertificateExpiringSoon` and `OpenVPNServerCACertificateExpiringInAWeek`: Warn when an OpenVPN CA certificate is expiring in less than 30 or 7 days, respectively.
  - `OpenVPNServerCertificateExpired`: Warns about an expired OpenVPN server certificate.
  - `OpenVPNServerCertificateExpiringSoon` and `OpenVPNServerCertificateExpiringInAWeek`: Warn when an OpenVPN server certificate is expiring in less than 30 or 7 days, respectively.
- Monitoring dashboards have been renamed and updated:
  - “L2LoadBalancer” renamed to “MetalLB L2”; pool and column filtering added.
  - “Metallb” renamed to “MetalLB BGP”; pool and column filtering added. The ARP request panel has been removed.
  - “L2LoadBalancer / Pools” renamed to “MetalLB / Pools”.
- The `upmeter` module's PVC size has been increased to accommodate data retention for 13 months. In some cases, the previous PVC size was insufficient.
- The ModuleSource resource status now includes information about module versions in the source.
- The Module resource status now includes information about the module's lifecycle stage. A module can move through the following stages in its lifecycle: Experimental, Preview, General Availability, and Deprecated. For details on module lifecycle stages and how to evaluate a module's stability, refer to the corresponding section in the documentation.
- It is now possible to use stronger or more modern encryption algorithms (such as `RSA-3072`, `RSA-4096`, or `ECDSA-P256`) for control plane cluster certificates instead of the default `RSA-2048`. You can use the `encryptionAlgorithm` parameter in the ClusterConfiguration resource to configure this.
- The `descheduler` module can now be configured to evict pods that are using local storage. Use the `evictLocalStoragePods` parameter in the module configuration to adjust this.
- You can now adjust the logging level of the Ingress controller using the `controllerLogLevel` parameter in the IngressNginxController resource. The default log level is `Info`. Controlling the logging level can help prevent log collector overload during Ingress controller restarts.
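A sketch of the logging-level item above. The `Error` value and the `ingressClass`/`inlet` fields shown for context are assumptions; only `controllerLogLevel` and the `Info` default come from the release note:

```yaml
# Hypothetical sketch: quieting Ingress controller logs during restarts.
# The Error value is an assumption; the default level is Info.
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  controllerLogLevel: Error
```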
Security
- The severity level of alerts indicating security policy violations has been raised from 7 to 3.
- The configuration for Yandex Cloud, Zvirt, and Dynamix providers now uses OpenTofu instead of Terraform. This enables easier provider updates, such as applying fixes for known vulnerabilities (CVEs).
- CVE vulnerabilities have been fixed in the following modules: `chrony`, `descheduler`, `dhctl`, `node-manager`, `registry-packages-proxy`, `falco`, `cni-cilium`, and `vertical-pod-autoscaler`.
Component version updates
The following DKP components have been updated:
- containerd: 1.7.27
- runc: 1.2.5
- go: 1.24.2, 1.23.8
- golang.org/x/net: v0.38.0
- mcm: v0.36.0-flant.23
- ingress-nginx: 1.12.1
- terraform-provider-aws: 5.83.1
- Deckhouse CLI: 0.12.1
- etcd: v3.5.21
Version 1.69
Important
- Support for Kubernetes 1.32 has been added, while support for Kubernetes 1.27 has been discontinued. The default Kubernetes version has been changed to 1.30. In future DKP releases, support for Kubernetes 1.28 will be removed.
- All DKP components will be restarted during the update.
Major changes
- The `ceph-csi` module is now deprecated. Plan to migrate to the `csi-ceph` module instead. For details, refer to the Ceph documentation.
- You can now grant access to Deckhouse web interfaces using user names via the `auth.allowedUserEmails` field. Access restriction is configured together with the `auth.allowedUserGroups` parameter in the configuration of the following modules with web interfaces: `cilium-hubble`, `dashboard`, `deckhouse-tools`, `documentation`, `istio`, `openvpn`, `prometheus`, and `upmeter` (see the example for `prometheus`).
- A new dashboard, Cilium Nodes Connectivity Status & Latency, has been added to Grafana in the `cni-cilium` module. It helps monitor node network connectivity issues. The dashboard displays a connectivity matrix similar to the `cilium-health status` command, using metrics that are already available in Prometheus.
- A new `D8KubernetesStaleTokensDetected` alert has been added in the `control-plane-manager` module that is triggered when stale service account tokens are detected in the cluster.
- You can now create a Project from an existing namespace and adopt existing objects into it. To do this, annotate the namespace and its resources with `projects.deckhouse.io/adopt`. This lets you switch to using Projects without recreating cluster resources.
- A `Terminating` status has been added to ModuleSource and ModuleRelease resources. The new status will be displayed when an attempt to delete one of them fails.
- The installer container now automatically configures cluster access after a successful bootstrap. A `kubeconfig` file is generated in `~/.kube/config`, and a local TCP proxy is set up through an SSH tunnel. This allows you to use kubectl locally right away without manually connecting to the control-plane node via SSH.
- Changes to Kubernetes resources in multi-cluster and federation setups are now tracked directly via the Kubernetes API. This enables faster synchronization between clusters and eliminates the use of outdated certificates. In addition, mounting of ConfigMap and Secret resources into Pods has been removed to eliminate file system compromise risks.
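For the `auth.allowedUserEmails` item above, a sketch using the `prometheus` module; the ModuleConfig `version` value and the example addresses are assumptions:

```yaml
# Sketch: restricting access to the Prometheus web interface by user email
# and group. The version value and example values are assumptions.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: prometheus
spec:
  version: 2
  settings:
    auth:
      allowedUserEmails:
        - admin@example.com
      allowedUserGroups:
        - monitoring-admins
```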
- A new dynamicforward plugin has been added to CoreDNS, improving DNS query processing in the cluster. It integrates with `node-local-dns`, continuously monitors `kube-dns` endpoints, and automatically updates the list of DNS forwarders. If the control-plane node is unavailable, DNS queries are still forwarded to available endpoints, improving cluster stability.
- A new log rotation approach has been introduced in the `loki` module. Old logs are now automatically removed when disk usage exceeds a threshold: either 95% of the PVC size, or the PVC size minus the space required to store two minutes of log data at the configured ingestion rate (`ingestionRateMB`). The `retentionPeriodHours` parameter no longer controls data retention and is used for monitoring alerts only. If `loki` begins removing old logs before the set period is reached, a `LokiRetentionPeriodViolation` alert will be triggered, informing the user that they must reduce the value of `retentionPeriodHours` or increase the PVC size.
- A new `nodeDrainTimeoutSecond` parameter lets you set the maximum timeout (in seconds) for draining a node, configurable for each NodeGroup resource. Previously, you could only use the default value (10 minutes) or reduce it to 5 minutes using the `quickShutdown` parameter, which is now deprecated.
- The `openvpn` module now includes a `defaultClientCertExpirationDays` parameter, allowing you to define the lifetime of client certificates.
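A sketch of the drain-timeout item above; placing `nodeDrainTimeoutSecond` at the top level of `spec` is an assumption:

```yaml
# Hypothetical sketch: a 5-minute drain timeout for a NodeGroup.
# The top-level placement of nodeDrainTimeoutSecond is an assumption.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: Static
  nodeDrainTimeoutSecond: 300
```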
Security
Known vulnerabilities have been addressed in the following modules: `ingress-nginx`, `istio`, `prometheus`, and `local-path-provisioner`.
Component version updates
The following DKP components have been updated:
- cert-manager: 1.17.1
- dashboard: 1.6.1
- dex: 2.42.0
- go-vcloud-director: 2.26.1
- Grafana: 10.4.15
- Kubernetes control plane: 1.29.14, 1.30.1, 1.31.6, 1.32.2
- kube-state-metrics (monitoring-kubernetes): 2.15.0
- local-path-provisioner: 0.0.31
- machine-controller-manager: v0.36.0-flant.19
- pod-reloader: 1.2.1
- prometheus: 2.55.1
- Terraform providers:
- OpenStack: 1.54.1
- vCD: 3.14.1
Version 1.68
Important
- After the update, the UID will change for all Grafana data sources created using the GrafanaAdditionalDatasource resource. If a data source was referenced by UID, that reference will no longer be valid.
Major changes
- A new parameter, `iamNodeRole`, has been introduced for the AWS provider. It lets you specify the name of the IAM role to bind to all AWS instances of cluster nodes. This can come in handy if you need to grant additional permissions (for example, access to ECR).
- Creating nodes of the CloudPermanent type now takes less time: CloudPermanent nodes are now created in parallel across all groups. Previously, they were created in parallel only within a single group.
- Monitoring changes:
  - Support for monitoring certificates in secrets of the `Opaque` type has been added.
  - Support for monitoring images in Amazon ECR has been added.
  - A bug that could cause partial loss of metrics when Prometheus instances were restarted has been fixed.
- When using a multi-cluster Istio configuration or federation, you can now explicitly specify the list of addresses used for inter-cluster requests. Previously, these addresses were determined automatically; however, in some configurations, they could not be resolved.
- The DexAuthenticator resource now has a `highAvailability` parameter that controls high availability mode. In high availability mode, multiple replicas of the authenticator are launched. Previously, the high availability mode of all authenticators was determined by a global parameter or by the `user-authn` module. All authenticators deployed by DKP now inherit the high availability mode of the corresponding module.
- Node labels can now be added, removed, or modified using files stored on the node in the `/var/lib/node_labels` directory and its subdirectories. The full set of applied labels is stored in the `node.deckhouse.io/last-applied-local-labels` annotation.
- Support for the Huawei Cloud provider has been added.
- The new `keepDeletedFilesOpenedFor` parameter in the `log-shipper` module allows you to configure the period during which deleted log files are kept open. This way, you can continue reading logs from deleted pods for some time if log storage is temporarily unavailable.
- TLS encryption for log collectors (Elasticsearch, Vector, Loki, Splunk, Logstash, Socket, Kafka) can now be configured using secrets, rather than by storing certificates in the ClusterLogDestination resources. The secret must reside in the `d8-log-shipper` namespace and have the `log-shipper.deckhouse.io/watch-secret: true` label.
- In the project status under the `resources` section, you can now see which project resources have been installed. Those resources are marked with `installed: true`.
- A new parameter, `--tf-resource-management-timeout`, has been added to the installer. It controls the resource creation timeout in cloud environments. By default, the timeout is set to 10 minutes. This parameter applies only to the following clouds: AWS, Azure, GCP, OpenStack.
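For the `keepDeletedFilesOpenedFor` item above, a sketch via the `log-shipper` ModuleConfig; the duration format (`1h`) and the `version` value are assumptions:

```yaml
# Hypothetical sketch: keep deleted log files open for one hour so their
# logs can still be shipped if storage is temporarily unavailable.
# The duration format and version value are assumptions.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: log-shipper
spec:
  version: 1
  settings:
    keepDeletedFilesOpenedFor: 1h
```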
Security
Known vulnerabilities have been addressed in the following modules: `admission-policy-engine`, `chrony`, `cloud-provider-azure`, `cloud-provider-gcp`, `cloud-provider-openstack`, `cloud-provider-yandex`, `cloud-provider-zvirt`, `cni-cilium`, `control-plane-manager`, `extended-monitoring`, `descheduler`, `documentation`, `ingress-nginx`, `istio`, `loki`, `metallb`, `monitoring-kubernetes`, `monitoring-ping`, `node-manager`, `operator-trivy`, `pod-reloader`, `prometheus`, `prometheus-metrics-adapter`, `registrypackages`, `runtime-audit-engine`, `terraform-manager`, `user-authn`, `vertical-pod-autoscaler`, and `static-routing-manager`.
Component version updates
The following DKP components have been updated:
- Kubernetes Control Plane: 1.29.14, 1.30.10, 1.31.6
- aws-node-termination-handler: 1.22.1
- capcd-controller-manager: 1.3.2
- cert-manager: 1.16.2
- chrony: 4.6.1
- cni-flannel: 0.26.2
- docker_auth: 1.13.0
- flannel-cni: 1.6.0-flannel1
- gatekeeper: 3.18.1
- jq: 1.7.1
- kubernetes-cni: 1.6.2
- kube-state-metrics: 2.14.0
- vector (log-shipper): 0.44.0
- prometheus: 2.55.1
- snapshot-controller: 8.2.0
- yq4: 3.45.1
Mandatory component restart
The following components will be restarted after updating DKP to 1.68:
- Kubernetes Control Plane
- Ingress controller
- Prometheus, Grafana
- admission-policy-engine
- chrony
- cloud-provider-azure
- cloud-provider-gcp
- cloud-provider-openstack
- cloud-provider-yandex
- cloud-provider-zvirt
- cni-cilium
- control-plane-manager
- descheduler
- documentation
- extended-monitoring
- ingress-nginx
- istio
- kube-state-metrics
- log-shipper
- loki
- metallb
- monitoring-kubernetes
- monitoring-ping
- node-manager
- openvpn
- operator-trivy
- prometheus
- prometheus-metrics-adapter
- pod-reloader
- registrypackages
- runtime-audit-engine
- service-with-healthchecks
- static-routing-manager
- terraform-manager
- user-authn
- vertical-pod-autoscaler