The module lifecycle stage: Preview
The module has installation requirements.
Kafka
Kafka is the primary resource users work with. It is a namespaced resource that lets you create and manage Kafka broker instances in Deckhouse Kubernetes Platform and bind them to a specific KafkaClass.
The class defines the allowed sizing options, which configuration parameters can be overridden, and the validation rules that apply.
A Kafka resource has two main sections in its spec:
- `instance` — compute and storage requirements for the broker pod (CPU, memory, persistent volume). Must satisfy the sizing policies of the referenced class.
- `configuration` — broker-level settings you want to tune. Only parameters listed in the class `overridableConfiguration` can be set here; all other settings are governed by the class defaults.
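Taken together, a minimal manifest touching both sections might look like this (the class name and all values are illustrative, not defaults of the module):

```yaml
apiVersion: managed-services.deckhouse.io/v1alpha1
kind: Kafka
metadata:
  name: my-kafka              # illustrative name
spec:
  kafkaClassName: default     # must reference an existing KafkaClass
  instance:
    memory:
      size: 1Gi
    cpu:
      cores: 1
      coreFraction: "50%"
    persistentVolumeClaim:
      size: 10Gi
  configuration:
    logRetentionHours: 168    # only fields from the class overridableConfiguration
```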
KafkaClassName
The name of the KafkaClass to associate with this instance.
The service cannot be deployed without an existing `KafkaClass`.
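You can check which classes exist in the cluster before creating the resource (the `kafkaclass` resource name here is an assumption about how the CRD is registered):

```shell
kubectl get kafkaclass
```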
```yaml
spec:
  kafkaClassName: default
```

Instance
Specifies the compute and storage resources for the broker pod.
Must pass validation according to the `sizingPolicy` of the corresponding class:

```yaml
spec:
  instance:
    memory:
      size: 1Gi
    cpu:
      cores: 1
      coreFraction: "50%"
    persistentVolumeClaim:
      size: 10Gi
      storageClassName: default
```

Configuration
Optional broker settings to override the class defaults.
Only parameters listed in the class overridableConfiguration are accepted here. All values are validated against the class validations before being applied.
```yaml
spec:
  configuration:
    logRetentionHours: 168
    logRetentionBytes: -1
    messageMaxBytes: "1Mi"
    compressionType: producer
    autoCreateTopicsEnable: false
```

Configuration Fields
logRetentionHours
Type: integer | Kafka param: log.retention.hours | Min: 1 | Example: 168
Number of hours to keep a log file before deleting it.
Set only one of `logRetentionHours` and `logRetentionMs`. If both are set, `logRetentionMs` takes priority.
logRetentionMs
Type: integer | Kafka param: log.retention.ms | Min: -1 | Example: 604800000
Millisecond-precision log retention period. Takes priority over logRetentionHours and is applied dynamically without a broker restart.
Set to -1 to disable time-based retention entirely. Mutually exclusive with logRetentionHours.
logRetentionBytes
Type: int-or-string | Kafka param: log.retention.bytes | Example: 1Gi
Maximum total size of the log per partition before old segments are deleted.
Use -1 to disable size-based retention.
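For instance, to keep data for at most seven days or 5 GiB per partition, whichever limit is reached first (the values are illustrative):

```yaml
spec:
  configuration:
    logRetentionMs: 604800000    # 7 days; takes priority over logRetentionHours
    logRetentionBytes: "5Gi"     # per-partition size cap
```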
logSegmentBytes
Type: int-or-string | Kafka param: log.segment.bytes | Example: 512Mi
Maximum size of a single log segment file. Once reached, Kafka opens a new segment.
A message must fit within a single segment (messageMaxBytes ≤ logSegmentBytes).
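This constraint can be sanity-checked ahead of time. Below is a minimal sketch that parses Kubernetes-style binary quantities and compares them; it is simplified (real quantities also allow decimal suffixes) and the helper names are illustrative:

```python
# Illustrative check, not part of the module: parse Kubernetes-style
# binary quantities ("1Mi", "512Mi", ...) and verify that
# messageMaxBytes <= logSegmentBytes.
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_quantity(value):
    """Convert '1Mi' / '512Mi' / plain integers to a byte count."""
    s = str(value)
    for suffix, factor in UNITS.items():
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * factor
    return int(s)

def fits_in_segment(message_max, segment_max):
    """A message must fit within a single log segment."""
    return parse_quantity(message_max) <= parse_quantity(segment_max)

print(fits_in_segment("1Mi", "512Mi"))    # True: 1 MiB fits a 512 MiB segment
print(fits_in_segment("600Mi", "512Mi"))  # False: would be rejected
```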
messageMaxBytes
Type: int-or-string | Kafka param: message.max.bytes | Example: 1Mi
Maximum size of a single message the broker will accept from producers.
Must not exceed logSegmentBytes. Must not exceed socketRequestMaxBytes defined in the class.
compressionType
Type: string | Kafka param: compression.type
Allowed values: producer, uncompressed, gzip, snappy, lz4, zstd
Broker-level compression applied to topic messages.
producer (recommended) retains whatever compression the producer used, avoiding re-compression CPU cost.
autoCreateTopicsEnable
Type: boolean | Kafka param: auto.create.topics.enable
Controls whether topics are automatically created when first accessed by a producer or consumer.
Disable in production to keep explicit control over topic lifecycle.
logCleanupPolicy
Type: string | Kafka param: log.cleanup.policy
Allowed values: `delete`, `compact`, `delete,compact`
Cleanup policy applied to log segments when retention limits are exceeded.
delete removes old segments; compact retains only the latest value per key; delete,compact applies both.
Note: Only fields listed in the `overridableConfiguration` of the associated `KafkaClass` can be set here. Attempting to set a field not in that list will fail validation.
Supported Kafka Versions
The only supported Kafka version is 3.9.0.
The images used to run Kafka containers are built on distroless base images.
Status
The status of the Managed Kafka service is reflected in the Kafka resource.
The `conditions` structure reflects the current state of the service.

Significant condition types:

- `LastValidConfigurationApplied` — an aggregating type that shows whether the last valid configuration has been successfully applied at least once.
- `ConfigurationValid` — shows whether the configuration has passed all validations of the associated `KafkaClass`.
- `ScaledToLastValidConfiguration` — shows whether the number of running replicas matches the specified configuration.
- `Available` — shows whether the broker is running and accepting connections.
```yaml
conditions:
  - lastTransitionTime: '2025-09-22T23:20:36Z'
    observedGeneration: 2
    status: 'True'
    type: Available
  - lastTransitionTime: '2025-09-22T14:38:04Z'
    observedGeneration: 2
    status: 'True'
    type: ConfigurationValid
  - lastTransitionTime: '2025-09-22T14:38:47Z'
    observedGeneration: 2
    status: 'True'
    type: LastValidConfigurationApplied
  - lastTransitionTime: '2025-09-22T23:20:36Z'
    observedGeneration: 2
    status: 'True'
    type: ScaledToLastValidConfiguration
```

A `False` status indicates a problem at one of the stages or incomplete state synchronization. For such a state, a `reason` and a `message` describing the problem are specified.
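The aggregate readiness check these conditions support can be sketched in a few lines (plain Python dicts stand in for the resource status; the function name is illustrative):

```python
# Illustrative: derive overall readiness from a Kafka resource's
# conditions list, as returned by the Kubernetes API.
REQUIRED = {
    "Available",
    "ConfigurationValid",
    "LastValidConfigurationApplied",
    "ScaledToLastValidConfiguration",
}

def is_ready(conditions):
    """True when every significant condition type reports status 'True'."""
    status = {c["type"]: c["status"] for c in conditions}
    return all(status.get(t) == "True" for t in REQUIRED)

conditions = [
    {"type": "Available", "status": "True"},
    {"type": "ConfigurationValid", "status": "True"},
    {"type": "LastValidConfigurationApplied", "status": "True"},
    {"type": "ScaledToLastValidConfiguration", "status": "False",
     "reason": "ScalingInProgress"},
]
print(is_ready(conditions))  # False: still scaling
```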
```yaml
- lastTransitionTime: '2025-09-23T14:53:33Z'
  message: Syncing
  observedGeneration: 1
  reason: Syncing
  status: 'False'
  type: LastValidConfigurationApplied
- lastTransitionTime: '2025-09-23T14:54:58Z'
  message: Not all the instances are running still waiting for 1 to become ready
  observedGeneration: 1
  reason: ScalingInProgress
  status: 'False'
  type: ScaledToLastValidConfiguration
```

Usage Examples
Basic Usage
Standard Kafka broker with persistent storage and class default configuration.
- Create a namespace named `kafka`.
- Create a `Kafka` resource:

  ```shell
  kubectl apply -f managed-services_v1alpha1_kafka.yaml -n kafka
  ```

  ```yaml
  apiVersion: managed-services.deckhouse.io/v1alpha1
  kind: Kafka
  metadata:
    name: kafka-sample
  spec:
    kafkaClassName: default
    instance:
      memory:
        size: "1Gi"
      cpu:
        cores: 2
        coreFraction: "50%"
      persistentVolumeClaim:
        size: "10Gi"
  ```

- Wait until the broker is ready and all conditions are `True`:

  ```shell
  kubectl get kafka kafka-sample -n kafka -o wide -w
  ```

- Connect to the broker using the `d8ms-kfk-kafka-sample` service on port `9092`:

  ```shell
  --bootstrap-server d8ms-kfk-kafka-sample:9092
  ```

Short-Term Retention
Kafka broker with reduced log retention — suitable for development or testing environments.
- Create a namespace named `kafka`.
- Create a `Kafka` resource:

  ```shell
  kubectl apply -f managed-services_v1alpha1_kafka.yaml -n kafka
  ```

  ```yaml
  apiVersion: managed-services.deckhouse.io/v1alpha1
  kind: Kafka
  metadata:
    name: kafka-dev
  spec:
    kafkaClassName: default
    configuration:
      logRetentionHours: 24
      autoCreateTopicsEnable: true
    instance:
      memory:
        size: "1Gi"
      cpu:
        cores: 2
        coreFraction: "25%"
      persistentVolumeClaim:
        size: "5Gi"
  ```

- Wait until the broker is ready and all conditions are `True`:

  ```shell
  kubectl get kafka kafka-dev -n kafka -o wide -w
  ```

- Connect to the broker using the `d8ms-kfk-kafka-dev` service on port `9092`:

  ```shell
  --bootstrap-server d8ms-kfk-kafka-dev:9092
  ```

Custom Compression and Message Size
Kafka broker configured with gzip compression and an increased maximum message size.
- Create a namespace named `kafka`.
- Create a `Kafka` resource:

  ```shell
  kubectl apply -f managed-services_v1alpha1_kafka.yaml -n kafka
  ```

  ```yaml
  apiVersion: managed-services.deckhouse.io/v1alpha1
  kind: Kafka
  metadata:
    name: kafka-compressed
  spec:
    kafkaClassName: default
    configuration:
      compressionType: gzip
      messageMaxBytes: "10Mi"
      autoCreateTopicsEnable: false
    instance:
      memory:
        size: "2Gi"
      cpu:
        cores: 2
        coreFraction: "50%"
      persistentVolumeClaim:
        size: "20Gi"
  ```

- Wait until the broker is ready and all conditions are `True`:

  ```shell
  kubectl get kafka kafka-compressed -n kafka -o wide -w
  ```

- Connect to the broker using the `d8ms-kfk-kafka-compressed` service on port `9092`:

  ```shell
  --bootstrap-server d8ms-kfk-kafka-compressed:9092
  ```
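To illustrate how the `--bootstrap-server` value is used, here is a sketch with the stock Kafka console tools, assuming they are available in a pod inside the cluster (the topic name is illustrative):

```shell
# Produce one message to a demo topic via the in-cluster service.
echo "hello" | kafka-console-producer.sh \
  --bootstrap-server d8ms-kfk-kafka-compressed:9092 \
  --topic demo

# Read it back from the beginning of the topic.
kafka-console-consumer.sh \
  --bootstrap-server d8ms-kfk-kafka-compressed:9092 \
  --topic demo --from-beginning --max-messages 1
```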