The module lifecycle stage: General Availability.
The module has installation requirements.

Security

How do we secure communication between components?

We use TLS encryption for all internal communication between GitLab services within the Kubernetes cluster. Any external access is secured with HTTPS or another secure protocol.

How do we secure metrics scraping?

We secure metrics scraping with a kube-rbac-proxy sidecar container combined with Kubernetes RBAC. kube-rbac-proxy acts as an authentication and authorization layer, ensuring that only requests with valid permissions can reach the metrics endpoints.
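For orientation only (the image tag and ports below are illustrative, not the module's actual manifest), a kube-rbac-proxy sidecar typically fronts the metrics port like this:

```yaml
containers:
  - name: kube-rbac-proxy
    image: quay.io/brancz/kube-rbac-proxy:v0.18.0  # illustrative tag
    args:
      - --secure-listen-address=0.0.0.0:8443       # TLS endpoint scraped by Prometheus
      - --upstream=http://127.0.0.1:9090/          # plain-HTTP metrics, reachable only inside the pod
    ports:
      - containerPort: 8443
        name: https-metrics
```

Prometheus scrapes port 8443 with its ServiceAccount bearer token; kube-rbac-proxy validates the token (TokenReview) and its permissions (SubjectAccessReview) before proxying the request to the local metrics endpoint.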

What TLS encryption is supported?

  • TLS 1.2 or higher is required for all incoming and outgoing TLS connections.
  • TLS certificates must provide at least 112 bits of security: RSA, DSA, and DH keys shorter than 2048 bits, and ECC keys shorter than 224 bits, are considered insecure and are prohibited.
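To verify that a key meets the minimum size, you can inspect it with openssl. A self-contained sketch (it generates a throwaway 2048-bit RSA key just to have something to inspect; point `openssl pkey` at your real key file instead):

```shell
# Generate a throwaway RSA key at the minimum allowed size (2048 bits)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out /tmp/test-key.pem 2>/dev/null

# Print the key size; a line containing "2048 bit" (or larger) is acceptable
openssl pkey -in /tmp/test-key.pem -noout -text | grep "Private-Key"
```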

How to pass a secret to a CodeInstance property?

Some properties support the template secret/<secret-name>/<key-name> for fetching secret values. The operator automatically locates the referenced Secret resource in the d8-code namespace and reads the data from the specified key.
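As an illustration (the SMTP example below is hypothetical; apply the template to any property documented to support it), a property value of secret/smtp-credentials/password would resolve against a Secret like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smtp-credentials
  namespace: d8-code  # the operator only looks in this namespace
stringData:
  password: s3cr3t    # referenced as secret/smtp-credentials/password
```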

Why is the service so privileged?

Gitaly uses the cgroup mechanism to control resource consumption during operations on Git repositories. For this to work in Kubernetes, the pod must be allowed to write to its cgroup on the host path /sys/fs/cgroup.

Capabilities:

  • SETUID, SETGID, CHOWN - used by the init container to set the git:git owner on the pod cgroup
  • FOWNER - to bypass permission checks when the Git data directory has an owner mismatch
  • SYS_ADMIN - required for mounting cgroupfs and working with cgroup hierarchies
  • SYS_RESOURCE - required to work with cgroup resource limits (CPU, memory, IO)
  • SYS_PTRACE - to make syscalls such as process_vm_readv / process_vm_writev
  • KILL - required for terminating or signaling processes in managed cgroups
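Put together, the requirements above correspond to a pod spec roughly like this sketch (field values are illustrative; the operator renders the real manifest):

```yaml
securityContext:
  capabilities:
    add: ["SETUID", "SETGID", "CHOWN", "FOWNER", "SYS_ADMIN", "SYS_RESOURCE", "SYS_PTRACE", "KILL"]
volumeMounts:
  - name: cgroup
    mountPath: /sys/fs/cgroup  # writable so Gitaly can manage its own cgroup hierarchy
volumes:
  - name: cgroup
    hostPath:
      path: /sys/fs/cgroup
```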

How to refresh Gitaly replica?

Cases:

  • Refill data on a Gitaly node after PV recreation
  • Manually update a node when its data is out of date

To refresh specific Gitaly node run:

kubectl exec -i -t -n d8-code praefect-0 -c praefect -- praefect -config /etc/gitaly/config.toml verify --virtual-storage <virtual_storage> --storage <gitaly_pod_name>

All repository data on <gitaly_pod_name> will be marked as unverified to prioritize reverification. Reverification runs asynchronously in the background.

Repository exists but its files are not found

If a repository is visible in the Code UI but operations on it fail with file access errors, follow the steps below to diagnose the issue.

  1. Make sure the praefect and gitaly pods are running and in Ready state:

    kubectl -n d8-code get pods -l app.kubernetes.io/component=praefect
    kubectl -n d8-code get pods -l app.kubernetes.io/component=gitaly
  2. Check the Praefect database state and Gitaly node availability:

    kubectl -n d8-code exec -it sts/praefect -- praefect --config /etc/gitaly/config.toml check

    The output should contain no errors. If errors are present, contact support.

  3. Check whether the repository metadata exists in the Praefect database. The project ID can be found in the Code UI under Settings > General:

    kubectl -n d8-code exec -it sts/praefect -- praefect --config /etc/gitaly/config.toml metadata --repository-id <project-id>

    Example of a healthy output:

    Repository ID: 2238
    Virtual Storage: "default"
    Relative Path: "@hashed/45/1b/451b...git"
    Replica Path: "@cluster/repositories/7a/98/2238"
    Primary: "gitaly-default-0"
    Generation: 0
    Replicas:
    - Storage: "gitaly-default-0"
      Assigned: true
      Generation: 0, fully up to date
      Healthy: true
      Valid Primary: true
      Verified At: 2026-03-16 13:43:34 +0000 UTC

    How to interpret the result:

    • The Replicas section contains an entry with Healthy: true — Praefect metadata is intact. Praefect knows which Gitaly node holds the repository and considers it healthy. In this case the problem is on the Gitaly side: repository files are missing from disk — deleted or lost during migration. Restore the repository from a backup.

    • The output is empty or the Replicas section is missing — the Praefect database has no record of this repository. This can be caused by manual changes to the database, an error during migration, or running the migration against a non-empty Praefect database (data already existed in the database before the migration started — this must be ruled out during migration preparation). In all these cases, restore the Praefect database from a backup.

Network configuration

How to use ownLoadBalancer with a reserved public IPv4 address in Yandex Cloud

To assign a reserved public IPv4 address to your ownLoadBalancer, specify the following annotation when creating or updating the resource. Example:

network:
  ownLoadBalancer:
    annotations:
      yandex.cpi.flant.com/listener-address-ipv4: xx.xx.xx.xx
    enabled: true

Important: The specified IPv4 address must be pre-reserved in your Yandex Cloud project via the Console or CLI.

How to set LoadBalancer class for service when ownLoadBalancer is used

If your cluster setup does not have a default LoadBalancer class, use the ownLoadBalancer.loadBalancerClass option to set the required one.

Example:

network:
  ownLoadBalancer:
    enabled: true
    loadBalancerClass: my-lb-class

Important: the loadBalancerClass field is immutable. To change it, you must re-create the custom resource.

How to configure web UI with custom certificate

Create a TLS Secret in the d8-code namespace and reference it in spec.network.web.https:

spec:
  network:
    certificates:
      customCAs:
        - secret: code-web-tls
          keys:
            - tls.crt
    web:
      hostname: code.example.com
      https:
        mode: CustomCertificate
        customCertificate:
          secretName: code-web-tls

For detailed step-by-step instructions and verification checklist, see Network documentation.
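The referenced Secret is a standard kubernetes.io/tls Secret; a sketch of its shape (certificate data elided):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: code-web-tls
  namespace: d8-code
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```

Equivalently, kubectl -n d8-code create secret tls code-web-tls --cert=tls.crt --key=tls.key produces the same object from files on disk.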


Migration from/to Omnibus

Cannot open GitLab Web IDE

If you encounter the error Could not find a callback URL, update the application's callback URL in Admin > Applications to the actual one.

Components

How to enable or disable the toolbox component

toolbox is an optional component and is enabled by default.

  • To explicitly enable toolbox:

    spec:
      features:
        toolbox:
          enabled: true # default

  • To disable toolbox:

    spec:
      features:
        toolbox:
          enabled: false

If you use procedures that rely on the Toolbox Pod (for example, Rails console or some backup/restore workflows), make sure toolbox is enabled.

How to disable Code Operator?

To pause resource management by the operator without disabling the module entirely, set spec.maintenance to NoResourceReconciliation in the ModuleConfig:

apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: code
spec:
  enabled: true
  version: 1
  maintenance: NoResourceReconciliation

After applying, scale down the operator pod:

kubectl -n d8-code scale --replicas=0 deploy/code-operator

Before applying this setting, make sure you understand its implications — see the ModuleConfig documentation.

How to restore the operator after disabling it?

  1. Remove spec.maintenance from ModuleConfig:

    apiVersion: deckhouse.io/v1alpha1
    kind: ModuleConfig
    metadata:
      name: code
    spec:
      enabled: true
      version: 1
  2. Scale the operator pod back up:

    kubectl -n d8-code scale --replicas=1 deploy/code-operator
  3. Wait for the operator pod to reach Running state and resume resource management.

Backup S3 bucket grows fast

Use the backup.keepLast property to configure backup retention. See the backup documentation for details.
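A sketch of the setting (the retention count is illustrative; check your CodeInstance schema for the exact location of backup in the spec):

```yaml
spec:
  backup:
    keepLast: 7  # keep only the 7 most recent backups; older ones are pruned
```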


Migrate to a registry with a metadata database (v2)

Migration steps


Module deletion

You can fully remove the module from the cluster as follows:

  • Disable it following the same steps as for any other module in Deckhouse Kubernetes Platform:
    • annotate the ModuleConfig with modules.deckhouse.io/allow-disable: "true" to bypass deckhouse-controller errors;
    • change the enabled flag in the ModuleConfig from true to false.
  • Delete the namespace, since some secrets and configmaps may remain: kubectl delete ns d8-code

Please keep in mind to save the secrets/rails-secret Secret prior to deletion; otherwise you will be unable to fully restore from existing backups in the future.
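The save-then-delete sequence can be sketched with kubectl (the backup file name is illustrative):

```shell
# Save the Rails secret first -- required to restore from existing backups later
kubectl -n d8-code get secret rails-secret -o yaml > rails-secret-backup.yaml

# Allow disabling the module and turn it off
kubectl annotate moduleconfig code modules.deckhouse.io/allow-disable="true"
kubectl patch moduleconfig code --type=merge -p '{"spec":{"enabled":false}}'

# Remove leftovers (secrets/configmaps) kept in the namespace
kubectl delete ns d8-code
```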


Does Code support incremental backups?

No, Code does not support incremental backups.