Cluster
Scope: Namespaced
Version: v1beta1
Cluster is the Schema for the clusters API.
- apiVersion
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
- kind
Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- metadata
- spec
ClusterSpec defines the desired state of Cluster.
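For orientation, a minimal Cluster manifest using the top-level fields above might look like the following sketch; the metadata name and namespace are placeholders.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster      # placeholder name
  namespace: default    # Cluster is a namespaced resource
spec: {}                # desired state; see the fields described below
```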
- spec.clusterNetwork
Cluster network configuration.
- spec.clusterNetwork.apiServerPort
APIServerPort specifies the port the API Server should bind to. Defaults to 6443.
- spec.clusterNetwork.pods
The network ranges from which Pod networks are allocated.
- spec.clusterNetwork.pods.cidrBlocks
Required value
- spec.clusterNetwork.serviceDomain
Domain name for services.
- spec.clusterNetwork.services
The network ranges from which service VIPs are allocated.
- spec.clusterNetwork.services.cidrBlocks
Required value
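A sketch of how the clusterNetwork fields fit together; the CIDR ranges and service domain below are illustrative values, not defaults (only apiServerPort has a documented default of 6443).

```yaml
spec:
  clusterNetwork:
    apiServerPort: 6443           # defaults to 6443 if omitted
    serviceDomain: cluster.local  # example service domain
    pods:
      cidrBlocks:                 # required when pods is set
        - 192.168.0.0/16          # example Pod CIDR
    services:
      cidrBlocks:                 # required when services is set
        - 10.96.0.0/12            # example Service CIDR
```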
- spec.controlPlaneEndpoint
ControlPlaneEndpoint represents the endpoint used to communicate with the control plane.
- spec.controlPlaneEndpoint.host
Required value
The hostname on which the API server is serving.
- spec.controlPlaneEndpoint.port
Required value
The port on which the API server is serving.
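For example, a controlPlaneEndpoint fragment (typically populated by the infrastructure provider rather than written by hand; host and port are placeholders):

```yaml
spec:
  controlPlaneEndpoint:
    host: cp.example.com  # placeholder hostname or IP where the API server is serving
    port: 6443            # placeholder port where the API server is serving
```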
- spec.controlPlaneRef
ControlPlaneRef is an optional reference to a provider-specific resource that holds the details for provisioning the Control Plane for a Cluster.
- spec.controlPlaneRef.apiVersion
API version of the referent.
- spec.controlPlaneRef.fieldPath
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: “spec.containers{name}” (where “name” refers to the name of the container that triggered the event) or if no container name is specified “spec.containers[2]” (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
- spec.controlPlaneRef.kind
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- spec.controlPlaneRef.name
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
- spec.controlPlaneRef.namespace
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
- spec.controlPlaneRef.resourceVersion
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
- spec.controlPlaneRef.uid
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
- spec.infrastructureRef
InfrastructureRef is a reference to a provider-specific resource that holds the details for provisioning infrastructure for a cluster in said provider.
- spec.infrastructureRef.apiVersion
API version of the referent.
- spec.infrastructureRef.fieldPath
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: “spec.containers{name}” (where “name” refers to the name of the container that triggered the event) or if no container name is specified “spec.containers[2]” (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
- spec.infrastructureRef.kind
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- spec.infrastructureRef.name
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
- spec.infrastructureRef.namespace
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
- spec.infrastructureRef.resourceVersion
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
- spec.infrastructureRef.uid
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
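A sketch combining controlPlaneRef and infrastructureRef; the KubeadmControlPlane and DockerCluster kinds are just one possible provider pairing, and the names are placeholders.

```yaml
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane        # example control plane provider kind
    name: my-cluster-control-plane   # placeholder name
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster              # example infrastructure provider kind
    name: my-cluster                 # placeholder name
    namespace: default
```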
- spec.paused
Paused can be used to prevent controllers from processing the Cluster and all its associated objects.
- spec.topology
This encapsulates the topology for the cluster. NOTE: It is required to enable the ClusterTopology feature gate flag to activate managed topologies support; this feature is highly experimental, and parts of it might still not be implemented.
- spec.topology.class
Required value
The name of the ClusterClass object to create the topology.
- spec.topology.controlPlane
ControlPlane describes the cluster control plane.
- spec.topology.controlPlane.machineHealthCheck
MachineHealthCheck allows enabling, disabling, and overriding the MachineHealthCheck configuration in the ClusterClass for this control plane.
- spec.topology.controlPlane.machineHealthCheck.enable
Enable controls if a MachineHealthCheck should be created for the target machines. If false: no MachineHealthCheck will be created. If not set (default): a MachineHealthCheck will be created if it is defined here or in the associated ClusterClass; if no MachineHealthCheck is defined then none will be created. If true: a MachineHealthCheck is guaranteed to be created, and Cluster validation will block if enable is true and no MachineHealthCheck definition is available.
- spec.topology.controlPlane.machineHealthCheck.maxUnhealthy
Any further remediation is only allowed if at most “MaxUnhealthy” machines selected by “selector” are not healthy.
- spec.topology.controlPlane.machineHealthCheck.nodeStartupTimeout
Machines older than this duration without a node will be considered to have failed and will be remediated. If you wish to disable this feature, set the value explicitly to 0.
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate
RemediationTemplate is a reference to a remediation template provided by an infrastructure provider. This field is completely optional; when filled, the MachineHealthCheck controller creates a new object from the template referenced and hands off remediation of the machine to a controller that lives outside of Cluster API.
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.apiVersion
API version of the referent.
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.fieldPath
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: “spec.containers{name}” (where “name” refers to the name of the container that triggered the event) or if no container name is specified “spec.containers[2]” (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.kind
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.name
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.namespace
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.resourceVersion
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
- spec.topology.controlPlane.machineHealthCheck.remediationTemplate.uid
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
- spec.topology.controlPlane.machineHealthCheck.unhealthyConditions
UnhealthyConditions contains a list of the conditions that determine whether a node is considered unhealthy. The conditions are combined in a logical OR, i.e. if any of the conditions is met, the node is unhealthy.
UnhealthyCondition represents a Node condition type and value with a timeout specified as a duration. When the named condition has been in the given status for at least the timeout value, a node is considered unhealthy.
- spec.topology.controlPlane.machineHealthCheck.unhealthyConditions.status
Required value
- spec.topology.controlPlane.machineHealthCheck.unhealthyConditions.timeout
Required value
- spec.topology.controlPlane.machineHealthCheck.unhealthyConditions.type
Required value
- spec.topology.controlPlane.machineHealthCheck.unhealthyRange
Any further remediation is only allowed if the number of machines selected by “selector” that are not healthy is within the range given by “UnhealthyRange”. Takes precedence over MaxUnhealthy. E.g. “[3-5]” means remediation is allowed only when (a) there are at least 3 unhealthy machines and (b) there are at most 5 unhealthy machines.
Pattern: ^\[[0-9]+-[0-9]+\]$
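Putting the control plane MachineHealthCheck fields together, a hedged fragment (the condition type, timeouts, and unhealthy range are illustrative, not defaults):

```yaml
spec:
  topology:
    controlPlane:
      machineHealthCheck:
        enable: true              # force creation; validation requires a definition to exist
        nodeStartupTimeout: 10m   # example: machines without a Node after 10m are remediated
        unhealthyRange: "[1-3]"   # example: remediate only while 1..3 machines are unhealthy
        unhealthyConditions:
          - type: Ready           # Node condition type (required)
            status: "False"       # condition status to match (required)
            timeout: 300s         # how long the condition may persist before the node is unhealthy (required)
```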
- spec.topology.controlPlane.metadata
Metadata is the metadata applied to the ControlPlane and the Machines of the ControlPlane if the ControlPlaneTemplate referenced by the ClusterClass is machine based. If not, it is applied only to the ControlPlane. At runtime this metadata is merged with the corresponding metadata from the ClusterClass.
- spec.topology.controlPlane.metadata.annotations
Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
- spec.topology.controlPlane.metadata.labels
Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
- spec.topology.controlPlane.nodeDeletionTimeout
NodeDeletionTimeout defines how long the controller will attempt to delete the Node that the Machine hosts after the Machine is marked for deletion. A duration of 0 will retry deletion indefinitely. Defaults to 10 seconds.
- spec.topology.controlPlane.nodeDrainTimeout
NodeDrainTimeout is the total amount of time that the controller will spend on draining a node. The default value is 0, meaning that the node can be drained without any time limitations. NOTE: NodeDrainTimeout is different from kubectl drain --timeout.
- spec.topology.controlPlane.nodeVolumeDetachTimeout
NodeVolumeDetachTimeout is the total amount of time that the controller will spend on waiting for all volumes to be detached. The default value is 0, meaning that the volumes can be detached without any time limitations.
- spec.topology.controlPlane.replicas
Replicas is the number of control plane nodes. If the value is nil, the ControlPlane object is created without the number of Replicas and it’s assumed that the control plane controller does not implement support for this field. When specified against a control plane provider that lacks support for this field, this value will be ignored.
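A fragment sketching the remaining controlPlane topology fields; the label, replica count, and timeouts below are placeholders.

```yaml
spec:
  topology:
    controlPlane:
      replicas: 3                     # example control plane size
      metadata:
        labels:
          environment: dev            # placeholder label, merged with ClusterClass metadata at runtime
        annotations:
          note: managed-by-topology   # placeholder annotation
      nodeDrainTimeout: 5m            # example; 0 (default) means no drain time limit
      nodeVolumeDetachTimeout: 5m     # example; 0 (default) means no volume detach time limit
      nodeDeletionTimeout: 10s        # default is 10 seconds
```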
- spec.topology.rolloutAfter
RolloutAfter performs a rollout of the entire cluster one component at a time, control plane first and then machine deployments. Deprecated: This field has no function and is going to be removed in the next apiVersion.
- spec.topology.variables
Variables can be used to customize the Cluster through patches. They must comply to the corresponding VariableClasses defined in the ClusterClass.
ClusterVariable can be used to customize the Cluster through patches. Each ClusterVariable is associated with a Variable definition in the ClusterClass status variables.
- spec.topology.variables.definitionFrom
DefinitionFrom specifies where the definition of this Variable is from. DefinitionFrom is “inline” when the definition is from ClusterClass.spec.variables, or the name of a patch defined in ClusterClass.spec.patches where the patch is external and provides external variables. This field is mandatory if the variable has DefinitionsConflict: true in ClusterClass status.variables[].
- spec.topology.variables.name
Required value
Name of the variable.
- spec.topology.variables.value
Required value
Value of the variable. Note: the value will be validated against the schema of the corresponding ClusterClassVariable from the ClusterClass. Note: We have to use apiextensionsv1.JSON instead of a custom JSON type, because controller-tools has a hard-coded schema for apiextensionsv1.JSON which cannot be produced by another type via controller-tools, i.e. it is not possible to have no type field. Ref: https://github.com/kubernetes-sigs/controller-tools/blob/d0e03a142d0ecdd5491593e941ee1d6b5d91dba6/pkg/crd/known_types.go#L106-L111
- spec.topology.version
Required value
The Kubernetes version of the cluster.
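Tying the required topology fields and variables together, a minimal fragment; the ClusterClass name, Kubernetes version, and variable are assumptions for illustration only.

```yaml
spec:
  topology:
    class: my-cluster-class           # placeholder ClusterClass name (required)
    version: v1.28.0                  # example Kubernetes version (required)
    variables:
      - name: imageRepository         # hypothetical variable defined in the ClusterClass
        value: registry.example.com   # validated against the corresponding ClusterClassVariable schema
```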
- spec.topology.workers
Workers encapsulates the different constructs that form the worker nodes for the cluster.
- spec.topology.workers.machineDeployments
MachineDeployments is a list of machine deployments in the cluster.
MachineDeploymentTopology specifies the different parameters for a set of worker nodes in the topology. This set of nodes is managed by a MachineDeployment object whose lifecycle is managed by the Cluster controller.
- spec.topology.workers.machineDeployments.class
Required value
Class is the name of the MachineDeploymentClass used to create the set of worker nodes. This should match one of the deployment classes defined in the ClusterClass object mentioned in the Cluster.Spec.Class field.
- spec.topology.workers.machineDeployments.failureDomain
FailureDomain is the failure domain the machines will be created in. Must match a key in the FailureDomains map stored on the cluster object.
- spec.topology.workers.machineDeployments.machineHealthCheck
MachineHealthCheck allows enabling, disabling, and overriding the MachineHealthCheck configuration in the ClusterClass for this MachineDeployment.
- spec.topology.workers.machineDeployments.machineHealthCheck.enable
Enable controls if a MachineHealthCheck should be created for the target machines. If false: no MachineHealthCheck will be created. If not set (default): a MachineHealthCheck will be created if it is defined here or in the associated ClusterClass; if no MachineHealthCheck is defined then none will be created. If true: a MachineHealthCheck is guaranteed to be created, and Cluster validation will block if enable is true and no MachineHealthCheck definition is available.
- spec.topology.workers.machineDeployments.machineHealthCheck.maxUnhealthy
Any further remediation is only allowed if at most “MaxUnhealthy” machines selected by “selector” are not healthy.
- spec.topology.workers.machineDeployments.machineHealthCheck.nodeStartupTimeout
Machines older than this duration without a node will be considered to have failed and will be remediated. If you wish to disable this feature, set the value explicitly to 0.
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate
RemediationTemplate is a reference to a remediation template provided by an infrastructure provider. This field is completely optional; when filled, the MachineHealthCheck controller creates a new object from the template referenced and hands off remediation of the machine to a controller that lives outside of Cluster API.
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.apiVersion
API version of the referent.
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.fieldPath
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: “spec.containers{name}” (where “name” refers to the name of the container that triggered the event) or if no container name is specified “spec.containers[2]” (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.kind
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.name
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.namespace
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.resourceVersion
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
- spec.topology.workers.machineDeployments.machineHealthCheck.remediationTemplate.uid
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
- spec.topology.workers.machineDeployments.machineHealthCheck.unhealthyConditions
UnhealthyConditions contains a list of the conditions that determine whether a node is considered unhealthy. The conditions are combined in a logical OR, i.e. if any of the conditions is met, the node is unhealthy.
UnhealthyCondition represents a Node condition type and value with a timeout specified as a duration. When the named condition has been in the given status for at least the timeout value, a node is considered unhealthy.
- spec.topology.workers.machineDeployments.machineHealthCheck.unhealthyConditions.status
Required value
- spec.topology.workers.machineDeployments.machineHealthCheck.unhealthyConditions.timeout
Required value
- spec.topology.workers.machineDeployments.machineHealthCheck.unhealthyConditions.type
Required value
- spec.topology.workers.machineDeployments.machineHealthCheck.unhealthyRange
Any further remediation is only allowed if the number of machines selected by “selector” that are not healthy is within the range given by “UnhealthyRange”. Takes precedence over MaxUnhealthy. E.g. “[3-5]” means remediation is allowed only when (a) there are at least 3 unhealthy machines and (b) there are at most 5 unhealthy machines.
Pattern: ^\[[0-9]+-[0-9]+\]$
- spec.topology.workers.machineDeployments.metadata
Metadata is the metadata applied to the MachineDeployment and the machines of the MachineDeployment. At runtime this metadata is merged with the corresponding metadata from the ClusterClass.
- spec.topology.workers.machineDeployments.metadata.annotations
Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
- spec.topology.workers.machineDeployments.metadata.labels
Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
- spec.topology.workers.machineDeployments.minReadySeconds
Minimum number of seconds for which a newly created machine should be ready. Defaults to 0 (the machine will be considered available as soon as it is ready).
- spec.topology.workers.machineDeployments.name
Required value
Name is the unique identifier for this MachineDeploymentTopology. The value is used with other unique identifiers to create a MachineDeployment’s Name (e.g. cluster’s name, etc). In case the name is greater than the allowed maximum length, the values are hashed together.
- spec.topology.workers.machineDeployments.nodeDeletionTimeout
NodeDeletionTimeout defines how long the controller will attempt to delete the Node that the Machine hosts after the Machine is marked for deletion. A duration of 0 will retry deletion indefinitely. Defaults to 10 seconds.
- spec.topology.workers.machineDeployments.nodeDrainTimeout
NodeDrainTimeout is the total amount of time that the controller will spend on draining a node. The default value is 0, meaning that the node can be drained without any time limitations. NOTE: NodeDrainTimeout is different from kubectl drain --timeout.
- spec.topology.workers.machineDeployments.nodeVolumeDetachTimeout
NodeVolumeDetachTimeout is the total amount of time that the controller will spend on waiting for all volumes to be detached. The default value is 0, meaning that the volumes can be detached without any time limitations.
- spec.topology.workers.machineDeployments.replicas
Replicas is the number of worker nodes belonging to this set. If the value is nil, the MachineDeployment is created without the number of Replicas (defaulting to 1) and it’s assumed that an external entity (like cluster autoscaler) is responsible for the management of this value.
- spec.topology.workers.machineDeployments.strategy
The deployment strategy to use to replace existing machines with new ones.
- spec.topology.workers.machineDeployments.strategy.rollingUpdate
Rolling update config params. Present only if MachineDeploymentStrategyType = RollingUpdate.
- spec.topology.workers.machineDeployments.strategy.rollingUpdate.deletePolicy
DeletePolicy defines the policy used by the MachineDeployment to identify nodes to delete when downscaling. Valid values are “Random”, “Newest”, and “Oldest”. When no value is supplied, the default DeletePolicy of the MachineSet is used.
Allowed values: Random, Newest, Oldest
- spec.topology.workers.machineDeployments.strategy.rollingUpdate.maxSurge
The maximum number of machines that can be scheduled above the desired number of machines. Value can be an absolute number (ex: 5) or a percentage of desired machines (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 1. Example: when this is set to 30%, the new MachineSet can be scaled up immediately when the rolling update starts, such that the total number of old and new machines does not exceed 130% of desired machines. Once old machines have been killed, the new MachineSet can be scaled up further, ensuring that the total number of machines running at any time during the update is at most 130% of desired machines.
- spec.topology.workers.machineDeployments.strategy.rollingUpdate.maxUnavailable
The maximum number of machines that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired machines (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. Defaults to 0. Example: when this is set to 30%, the old MachineSet can be scaled down to 70% of desired machines immediately when the rolling update starts. Once new machines are ready, the old MachineSet can be scaled down further, followed by scaling up the new MachineSet, ensuring that the total number of machines available at all times during the update is at least 70% of desired machines.
- spec.topology.workers.machineDeployments.strategy.type
Type of deployment. Default is RollingUpdate.
Allowed values: RollingUpdate, OnDelete
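A fragment sketching a rolling update strategy for one machineDeployments entry; the class and name are placeholders, and the surge and unavailability figures are illustrative.

```yaml
spec:
  topology:
    workers:
      machineDeployments:
        - class: default-worker     # placeholder MachineDeploymentClass name
          name: md-0                # placeholder topology name
          strategy:
            type: RollingUpdate     # default strategy type
            rollingUpdate:
              maxSurge: 1           # at most one extra machine above desired (default 1)
              maxUnavailable: 0     # no machines unavailable during the update (default 0)
              deletePolicy: Oldest  # example; Random, Newest, or Oldest
```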
- spec.topology.workers.machineDeployments.variables
Variables can be used to customize the MachineDeployment through patches.
- spec.topology.workers.machineDeployments.variables.overrides
Overrides can be used to override Cluster level variables.
ClusterVariable can be used to customize the Cluster through patches. Each ClusterVariable is associated with a Variable definition in the ClusterClass status variables.
- spec.topology.workers.machineDeployments.variables.overrides.definitionFrom
DefinitionFrom specifies where the definition of this Variable is from. DefinitionFrom is “inline” when the definition is from ClusterClass.spec.variables, or the name of a patch defined in ClusterClass.spec.patches where the patch is external and provides external variables. This field is mandatory if the variable has DefinitionsConflict: true in ClusterClass status.variables[].
- spec.topology.workers.machineDeployments.variables.overrides.name
Required value
Name of the variable.
- spec.topology.workers.machineDeployments.variables.overrides.value
Required value
Value of the variable. Note: the value will be validated against the schema of the corresponding ClusterClassVariable from the ClusterClass. Note: We have to use apiextensionsv1.JSON instead of a custom JSON type, because controller-tools has a hard-coded schema for apiextensionsv1.JSON which cannot be produced by another type via controller-tools, i.e. it is not possible to have no type field. Ref: https://github.com/kubernetes-sigs/controller-tools/blob/d0e03a142d0ecdd5491593e941ee1d6b5d91dba6/pkg/crd/known_types.go#L106-L111
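Finally, a hedged end-to-end fragment of a machineDeployments topology entry with a variable override; the class name, failure domain, and variable are placeholders consistent with the earlier sketches.

```yaml
spec:
  topology:
    workers:
      machineDeployments:
        - class: default-worker      # must match a class in the referenced ClusterClass
          name: md-0                 # unique identifier within this Cluster topology
          replicas: 3                # omit to hand replica management to an external entity
          failureDomain: fd-1        # placeholder; must be a key in the cluster's FailureDomains
          minReadySeconds: 30        # example; default 0
          variables:
            overrides:
              - name: imageRepository                 # hypothetical variable, overriding the Cluster-level value
                value: registry.example.com/workers   # validated against the ClusterClassVariable schema
```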