The module is in the Preview lifecycle stage.
Managed Memcached Operator - User Manual
This guide is intended for Deckhouse Kubernetes Platform cluster users who want to deploy and use Memcached for their applications.
The primary tool for users is the Memcached resource. This Custom Resource allows you to create and manage Memcached instances in your namespaces. All instances are created based on classes (MemcachedClass) defined by the cluster administrator. Classes specify available configurations, resource constraints, and placement rules.
API Version: The module currently uses the v1alpha2 API version. The previous v1alpha1 version is still supported through automatic conversion webhooks: existing resources created with v1alpha1 continue to work without any changes, and the operator converts them automatically when needed. For new resources, we recommend using v1alpha2.
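To confirm which API versions the cluster currently serves for this group, you can list them; both versions should appear while the conversion webhooks are in place:
# List served API versions for the managed-services group
kubectl api-versions | grep managed-services.deckhouse.io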
Quick Start
After the operator is installed, you can immediately start creating Memcached instances. The module comes with a ready-to-use default class named default that includes sensible configuration defaults and validation rules. This class is production-ready and suitable for most common use cases.
You can use this default class right away or ask your cluster administrator to create custom classes for specific requirements.
1. Check Available Classes
First, check what MemcachedClasses are available (created by your cluster administrator):
# List available classes
kubectl get memcachedclass
# View class details
kubectl get memcachedclass default -o yaml
The class defines resource limits, allowed configurations, and validation rules. Your cluster administrator can create additional classes for different environments (dev, staging, production).
Important: If you specify a class name that doesn’t exist in the cluster, your Memcached resource will be created, but the service will not be deployed until the class appears. Once your administrator creates the class, all validations will run automatically, and any errors will appear in status.conditions. You can check them with:
kubectl describe memcached <name>
kubectl get memcached <name> -o yaml | grep -A 10 conditions
2. Deploy a Memcached Instance
Create a simple standalone Memcached instance using the default class:
apiVersion: managed-services.deckhouse.io/v1alpha2
kind: Memcached
metadata:
  name: my-memcached
  namespace: default
spec:
  memcachedClassName: default # Use the default class
  type: Standalone
  instance:
    memory:
      size: "256Mi"
    cpu:
      cores: 1
      coreFraction: "5%"
Apply the configuration:
kubectl apply -f memcached.yaml
3. Check Status
Monitor the deployment status:
# Check the Memcached instance
kubectl get memcached my-memcached
# Get detailed status
kubectl describe memcached my-memcached
# View complete configuration
kubectl get memcached my-memcached -o yaml
Wait for the Available condition to become True.
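If you prefer to block until the instance is ready, kubectl can wait on the condition directly (assuming your kubectl version supports waiting on custom resource conditions):
# Wait up to 5 minutes for the Available condition to become True
kubectl wait --for=condition=Available memcached/my-memcached --timeout=300s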
4. Get Connection String
The operator creates a Headless Service for your application to connect to Memcached. The DNS name format is: d8ms-mc-<name>-<index>.<namespace>.svc.cluster.local
For Standalone instances:
d8ms-mc-my-memcached-0.default.svc.cluster.local:11211
For Group instances (e.g., 3 replicas):
d8ms-mc-my-memcached-0.default.svc.cluster.local:11211
d8ms-mc-my-memcached-1.default.svc.cluster.local:11211
d8ms-mc-my-memcached-2.default.svc.cluster.local:11211
Your application should connect to all instances for distributed caching.
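To verify from inside the cluster that these DNS names resolve, a throwaway pod is enough. This is an illustrative check using a busybox image:
# Resolve the first instance's DNS name from inside the cluster
kubectl run -it --rm dns-check --image=busybox --restart=Never -- \
  nslookup d8ms-mc-my-memcached-0.default.svc.cluster.local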
Instance Configuration
Basic Configuration
Every Memcached instance requires the following basic configuration:
apiVersion: managed-services.deckhouse.io/v1alpha2
kind: Memcached
metadata:
  name: my-memcached
  namespace: default
spec:
  memcachedClassName: default # Required: reference to MemcachedClass
  type: Standalone # Required: Standalone or Group
  instance: # Required: resource allocation
    memory:
      size: "1Gi"
    cpu:
      cores: 2
      coreFraction: "10%"
Key Fields:
- memcachedClassName: Name of the MemcachedClass to use (ask your cluster administrator)
- type: Deployment type, either Standalone (single instance) or Group (multiple instances)
- instance: Resource allocation (must match the class's sizing policies)
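You can also inspect the field documentation for these settings directly from the cluster. This assumes the Memcached CRD publishes its schema descriptions, so kubectl explain can read them:
# Show the documented fields of the Memcached spec
kubectl explain memcached.spec

# Drill down into nested fields, for example resource allocation
kubectl explain memcached.spec.instance.memory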
Standalone vs Group Deployment
Standalone Deployment
Single instance, suitable for development or non-critical workloads:
spec:
  type: Standalone
  instance:
    memory:
      size: "512Mi"
    cpu:
      cores: 1
      coreFraction: "5%"
Group Deployment
Multiple instances for high availability and load distribution:
spec:
  type: Group
  group:
    size: 3 # Number of instances
    topology: TransZonal # Zonal, TransZonal, or Ignored
  instance:
    memory:
      size: "1Gi"
    cpu:
      cores: 2
      coreFraction: "10%"
Topology Options:
- Zonal: All instances in the same availability zone
- TransZonal: Instances distributed across different zones
- Ignored: Let the Kubernetes scheduler decide
Note: Available topologies are defined by the MemcachedClass. Check with your cluster administrator.
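If you have read access to the class, you can also check its allowed topologies yourself, using the same jsonpath shown later in the Troubleshooting section:
# Check which topologies the class allows
kubectl get memcachedclass default -o jsonpath='{.spec.topology.allowedTopologies}'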
Resource Configuration
Memory and CPU
Resources must fall within the ranges defined by the class’s sizing policies:
instance:
  memory:
    size: "512Mi" # Must be within class limits and match step if defined
  cpu:
    cores: 2 # Must be within class limits
    coreFraction: "10%" # Must be in the allowed list if defined
Common coreFraction values:
"5%": 5% of CPU core (minimal load)"10%": 10% of CPU core (light load)"25%": 25% of CPU core (moderate load)"50%": 50% of CPU core (high load)"100%": 100% of CPU core (maximum performance)
If your configuration doesn’t match the class policies, you’ll get a validation error:
haven't found matching size policy in class
Contact your administrator to adjust the class or choose valid values.
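Before contacting your administrator, you can inspect the sizing policies of the class you are using (the same check appears in the Troubleshooting section):
# View the class's sizing policies
kubectl get memcachedclass default -o yaml | grep -A 20 sizingPolicies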
Custom Configuration
You can override certain Memcached configuration parameters if the class allows it (check overridableConfiguration in the class):
spec:
  configuration:
    maxItemSize: "2Mi" # Maximum size of a cached item
    slabMinSize: "Medium" # Slab page size: Short, Medium, or Long
    lockMemory: false # Lock memory to prevent swapping
Configuration Parameters:
| Parameter | Description | Values | Typical Use |
|---|---|---|---|
| maxItemSize | Maximum size of a single cached item | e.g., "512k", "1Mi", "4Mi" | Set based on your largest objects |
| slabMinSize | Minimum slab page size | Short (50 bytes), Medium (100 bytes), Long (200 bytes) | Match your typical object sizes |
| lockMemory | Use mlockall() to lock memory | true, false | Enable for production to prevent swapping |
Note: Only parameters listed in the class’s overridableConfiguration can be changed. Attempting to override other parameters will result in a validation error.
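You can check which parameters your class allows you to override before applying changes. The exact field path is an assumption here (the list is presumed to be published under spec.overridableConfiguration):
# List the configuration parameters the class allows you to override
kubectl get memcachedclass default -o jsonpath='{.spec.overridableConfiguration}'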
Examples
Example 1: Development Instance
Small standalone instance for development:
apiVersion: managed-services.deckhouse.io/v1alpha2
kind: Memcached
metadata:
  name: dev-cache
  namespace: development
spec:
  memcachedClassName: default
  type: Standalone
  instance:
    memory:
      size: "256Mi"
    cpu:
      cores: 1
      coreFraction: "5%"
Example 2: Production Group
High-availability group with custom configuration:
apiVersion: managed-services.deckhouse.io/v1alpha2
kind: Memcached
metadata:
  name: prod-cache
  namespace: production
spec:
  memcachedClassName: production # Use production-specific class
  type: Group
  group:
    size: 3
    topology: TransZonal
  instance:
    memory:
      size: "2Gi"
    cpu:
      cores: 2
      coreFraction: "50%"
  configuration:
    maxItemSize: "4Mi"
    slabMinSize: "Medium"
Example 3: Large Standalone Instance
Single instance with more resources:
apiVersion: managed-services.deckhouse.io/v1alpha2
kind: Memcached
metadata:
  name: large-cache
  namespace: default
spec:
  memcachedClassName: default
  type: Standalone
  instance:
    memory:
      size: "4Gi"
    cpu:
      cores: 4
      coreFraction: "25%"
  configuration:
    maxItemSize: "8Mi"
For more examples and use cases, see the Examples document.
Troubleshooting
Common Issues
1. MemcachedClass Not Found
Error: MemcachedClass <name> not found
Solution: Check available classes and use a valid name:
kubectl get memcachedclass
Contact your cluster administrator if you need a specific class.
2. Configuration Validation Failed
Error: Configuration validation failed: <details>
Solution: Check the validation error message in the status:
kubectl describe memcached <name>
kubectl get memcached <name> -o yaml | grep -A 10 conditions
Common validation errors:
- Configuration parameter not in overridableConfiguration
- CEL validation rule failed (e.g., maxItemSize too large)
- Invalid value for a parameter
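To read the exact validation message without scanning the whole status, you can filter for the ConfigurationValid condition described in Status Conditions below:
kubectl get memcached <name> -o jsonpath='{.status.conditions[?(@.type=="ConfigurationValid")].message}'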
3. Sizing Policy Mismatch
Error: haven't found matching size policy in class
Solution: Your CPU/memory configuration doesn’t match any sizing policy in the class:
# Check class sizing policies
kubectl get memcachedclass <class-name> -o yaml | grep -A 20 sizingPolicies
Adjust your configuration to match one of the policies or contact your administrator.
4. Topology Not Allowed
Error: topology <type> not allowed by class
Solution: The requested topology isn’t in the class’s allowed topologies:
# Check allowed topologies
kubectl get memcachedclass <class-name> -o jsonpath='{.spec.topology.allowedTopologies}'
Choose an allowed topology or omit the topology field to use the default.
Status Conditions
Monitor these conditions in status.conditions:
kubectl get memcached <name> -o json | jq '.status.conditions'
Conditions:
| Condition | Description | Meaning |
|---|---|---|
| ConfigurationValid | Configuration validation status | True: configuration is valid. False: validation failed |
| Available | Service availability | True: service is available (>50% of pods ready). False: service unavailable |
| LastValidConfigurationApplied | Last valid configuration applied | True: configuration successfully applied to all instances |
| ScaledToLastValidConfiguration | Scaling status | True: all instances running with current configuration |
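To print all conditions as a compact table without jq, a jsonpath range expression works:
kubectl get memcached <name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'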
Debugging Commands
# Get instance status
kubectl get memcached <name>
kubectl describe memcached <name>
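Beyond the resource itself, the usual Kubernetes debugging steps apply. The pod naming below is an assumption based on the d8ms-mc-<name> prefix used for the DNS names earlier; adjust it if your cluster uses different names:
# Events related to the instance (scheduling issues, warnings, restarts)
kubectl get events -n <namespace> --field-selector involvedObject.name=<name>

# Pods backing the instance (assumes the d8ms-mc-<name> naming seen in the Service DNS names)
kubectl get pods -n <namespace> | grep d8ms-mc-<name>

# Logs of a specific instance pod
kubectl logs -n <namespace> d8ms-mc-<name>-0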
Best Practices
Resource Planning
- Memory Allocation
  - Allocate 70-80% of instance memory to Memcached data
  - Account for connection overhead and metadata
  - Use memory monitoring to right-size instances
- CPU Allocation
  - Start with lower coreFraction values and increase if needed
  - Monitor CPU usage to optimize coreFraction
  - Use higher coreFraction for write-heavy workloads
- Instance Sizing
  - Start small and scale up based on monitoring
  - Multiple smaller instances (Group) are often better than one large instance
  - Consider network bandwidth in your sizing
Configuration
- maxItemSize
  - Set based on your largest cached objects
  - Common values: 1Mi for small objects, 4Mi for medium, 8Mi+ for large
  - Remember: must be smaller than half of instance memory
- slabMinSize
  - Short (50 bytes): most objects are smaller than 50 bytes
  - Medium (100 bytes): most objects are smaller than 100 bytes
  - Long (200 bytes): larger objects
  - Choose based on your typical object sizes
- lockMemory
  - true: Recommended for production (prevents swapping)
  - false: Acceptable for development/testing
  - Requires OS configuration
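As an illustration only (the values here are assumptions, not recommendations shipped with the module), a production-leaning fragment that follows these guidelines might look like this, provided the class allows overriding these parameters:
spec:
  instance:
    memory:
      size: "2Gi"
  configuration:
    maxItemSize: "4Mi" # well below half of the 2Gi instance memory
    slabMinSize: "Medium" # typical objects around 100 bytes
    lockMemory: true # prevents swapping; requires OS configuration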
High Availability
- Use Group Type
  - Always use Group for production workloads
  - Minimum 3 instances for fault tolerance
- Topology Selection
  - TransZonal: Best for production (distributes across zones)
  - Zonal: Use when all traffic is in one zone
  - Ignored: Development/testing only
- Client Configuration
  - Configure clients to connect to all instances
  - Implement distributed caching in your application
  - Handle instance failures gracefully
- Monitoring
  - Monitor hit/miss ratios (see the stats check after this list)
  - Track memory usage and evictions
  - Set up alerts for pod restarts
  - Monitor response times
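The hit/miss ratios and evictions mentioned above can be checked ad hoc with Memcached's stats command. This is an illustrative one-off check using a temporary busybox pod and the DNS name from the Quick Start:
# Query raw Memcached statistics (look for get_hits, get_misses, and evictions)
kubectl run -it --rm mc-stats --image=busybox --restart=Never -- sh -c \
  "printf 'stats\r\nquit\r\n' | nc d8ms-mc-my-memcached-0.default.svc.cluster.local 11211"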
For more advanced topics and administrator information, see the Administrator Guide.