The module lifecycle stage: General Availability

Migration from Omnibus to Code

Notice: Until step 11 is completed, the operator may be in a failure state.

What backup archive does not include

  • Omnibus backup archives do not include /etc/gitlab/gitlab.rb or /etc/gitlab/gitlab-secrets.json.
  • Omnibus backup archives do not include SSH and TLS host keys.
  • For Omnibus, object storage data is not included in the archive automatically.

Back up and migrate these data types separately (steps below already include this flow for object storage, secrets, and SSH keys).

  1. If GitLab uses local file storage for the Docker Registry, migrate the data to an S3 bucket connected to Code, according to this guide:

    aws --endpoint-url https://your-object-storage-backend.com s3 sync registry s3://mybucket
    • If you are using the s3cmd utility, run the command s3cmd --configure to set up the connection configuration for S3. Then, use the following command to transfer data to the S3 bucket that is connected to Code:
      s3cmd sync registry s3://mybucket
      where registry is the path to the Docker Registry storage folder in the file system.
  2. Blob objects stored in S3 Object Storage should be migrated to the S3 buckets connected to the Code Operator, according to this guide:

    • Configure two connections in rclone: old and new, where:
      • old is the connection to the S3 Storage linked to GitLab.
      • new is the connection to the S3 Storage bucket in the Code Operator.
    • Run the command:
      rclone sync -P old:BUCKET_NAME new:BUCKET_NAME
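The `old` and `new` remotes can be declared in `~/.config/rclone/rclone.conf`; all endpoint URLs and keys below are placeholders to replace with your own values:

```ini
[old]
type = s3
provider = Other
access_key_id = <old-access-key>
secret_access_key = <old-secret-key>
endpoint = https://old-object-storage.example.com

[new]
type = s3
provider = Other
access_key_id = <new-access-key>
secret_access_key = <new-secret-key>
endpoint = https://new-object-storage.example.com
```

After syncing, `rclone check old:BUCKET_NAME new:BUCKET_NAME` compares the two buckets and reports any differences.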
  3. Create a backup archive according to this guide. If objects were migrated in step 2, add the SKIP option for those components. If Object Storage is not used, SKIP can be omitted.

    Example:

    sudo gitlab-backup create
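For instance, if registry data and LFS objects were already moved with rclone in step 2, they can be excluded from the archive (`registry` and `lfs` are standard component names accepted by the Omnibus `SKIP` variable):

```shell
sudo gitlab-backup create SKIP=registry,lfs
```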
  4. Upload the created backup archive to the administrator’s computer, then upload it to the S3 storage for backups specified in the CodeInstance for Code, under the bucketName field:

    backup:
      enabled: true
      s3:
        bucketName: d8-code-backup
        tmpBucketName: d8-code-backup-tmp

    The S3 bucket d8-code-backup should contain the backup archive, for example: 1742909494_2025_03_25_17.8.1_gitlab_backup.tar. tmpBucketName is used as temporary storage during backup and restore operations and should be configured as a separate bucket with a short retention policy.

  5. Create the required secrets in the d8-code namespace and place the corresponding values into them.
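A sketch of creating such a secret with kubectl; the secret name and key names here are hypothetical — use the names that your CodeInstance configuration actually references:

```shell
kubectl -n d8-code create secret generic backup-s3-credentials \
  --from-literal=accessKey='<S3-access-key>' \
  --from-literal=secretKey='<S3-secret-key>'
```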

  6. Generate the CodeInstance manifest and configure the backup block:

    backup:
      enabled: true
      s3:
        bucketName: code-backup
        tmpBucketName: code-backup-tmp
        external:
          accessKey: S3AccessKey
          provider: YCloud | Generic | AWS | AzureRM
          secretKey: S3SecretKey
        mode: External

Restore preconditions

  • Source backup and target GitLab deployment must use the same GitLab version.
  • toolbox must be enabled and running.
  • Target DB clients must be stopped before restore (next step scales workloads to zero).
  • Backup archive name must stay in <backup_id>_gitlab_backup.tar format.
  • If the backup was created in Omnibus, you may see errors about creating or using an old PostgreSQL role from the backup, and/or about creating PostgreSQL extensions. These messages are usually non-critical and do not break the restore or post-restore operation.

Backup/restore command matrix

  • Create backup from Toolbox: backup-utility (optionally with --skip ... selected manually).
  • Restore by backup ID from backup bucket: backup-utility --restore -t <backup_id>.
  • Restore by URL or local file path: backup-utility --restore -f <URL|file:///path/to/<backup_id>_gitlab_backup.tar>.
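The `<backup_id>` used by these commands is simply the archive file name with the fixed `_gitlab_backup.tar` suffix removed; a runnable illustration using the example archive name from this guide:

```shell
# Derive the backup ID from an archive name by stripping the fixed suffix.
archive="1742909494_2025_03_25_17.8.1_gitlab_backup.tar"
backup_id="${archive%_gitlab_backup.tar}"
echo "$backup_id"
```

This prints `1742909494_2025_03_25_17.8.1` — the value passed to `backup-utility --restore -t` below.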
  7. Scale all deployments down to 0:

    kubectl -n d8-code scale --replicas=0 deploy/sidekiq-default
    kubectl -n d8-code scale --replicas=0 deploy/webservice-default
  8. Connect to the toolbox pod on the master host of the Deckhouse cluster. Make sure toolbox is enabled. Example:

    /opt/deckhouse/bin/kubectl -n d8-code exec -it -c toolbox toolbox-64d4fb84cf-b7dwb -- bash
  9. Inside the toolbox, run:

    backup-utility --restore -t <backup_timestamp>

    For example:

    backup-utility --restore -t 1742909494_2025_03_25_17.8.1
  10. To apply migrations related to Code itself, run:

    cd /srv/gitlab
    gitlab-rake db:migrate
  11. Create a service account for the operator. On your workstation, run:

    SERVICE_ACCOUNT_PAT_TOKEN=$(kubectl -n d8-code get secret code-service-account -o jsonpath='{ .data.api-token }' | base64 -d)

    Inside the toolbox, run:

    cd /srv/gitlab
    export SERVICE_ACCOUNT_PAT_TOKEN=<token>
    gitlab-rake gitlab:generate:service_account_with_token
  12. Scale all deployments back up:

    kubectl -n d8-code scale --replicas=1 deploy/sidekiq-default
    kubectl -n d8-code scale --replicas=1 deploy/webservice-default
  13. Wait for the operator to restart the pods.

  14. Verify that the restore and migrations completed successfully:

    kubectl -n d8-code get pods
    kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- gitlab-rake db:migrate:status
    kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- gitlab-rake gitlab:check SANITIZE=true
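Pending Rails migrations appear with status `down` in the `db:migrate:status` output; a quick filter that should print nothing on a fully migrated instance (a sketch using the same toolbox deployment):

```shell
kubectl -n d8-code exec -c toolbox deploy/toolbox -- gitlab-rake db:migrate:status | awk '$1 == "down"'
```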

Restore is considered successful when all pods are Ready, there are no pending migrations, and gitlab:check finishes without critical errors.


Migration from Code to Omnibus

  1. Scale all deployments down to 0:

    kubectl -n d8-code scale --replicas=0 deploy/sidekiq-default
    kubectl -n d8-code scale --replicas=0 deploy/webservice-default
  2. Roll back Code migrations from the database. Enter the toolbox pod and run the following command from the /srv/gitlab directory.

    Note: toolbox is an optional component (enabled by default). If the toolbox Pod is missing, make sure it is enabled in CodeInstance (spec.features.toolbox.enabled: true) and wait for reconciliation.

    bundle exec rake fe:db:migrations:rollback
  3. Use the toolbox pod and the built-in backup-utility tool to create a backup archive:

    kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- backup-utility

    If the backup is triggered manually, choose the --skip arguments yourself when some data classes are already migrated or handled outside the archive.
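A sketch, assuming registry data and LFS objects were moved separately; the exact `--skip` syntax can vary between versions, so verify it with `backup-utility --help` inside the toolbox first:

```shell
kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- backup-utility --skip registry --skip lfs
```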

  4. Wait for the backup creation process to complete. Upon completion, the backup will be saved in the S3 storage specified in the CodeInstance under the backup section:

    backup:
      enabled: true
      s3:
        bucketName: code-backup
        tmpBucketName: code-backup-tmp
        external:
          accessKey: S3AccessKey
          provider: YCloud | Generic | AWS | AzureRM
          secretKey: S3SecretKey
        mode: External

    The name of the created archive will follow the format <timestamp>_gitlab_backup.tar. Example:

    1742909494_2025_03_25_17.8.1_gitlab_backup.tar

  5. Stop the puma and sidekiq services:

    gitlab-ctl stop puma
    gitlab-ctl stop sidekiq

    If GitLab is deployed in Docker, run these commands inside the container.

  6. Download and transfer the backup archive to the directory where GitLab backups are stored:

    • By default, this is /var/opt/gitlab/backups.
    • If a different directory is used, move the backup there.
    • If S3 Object Storage is used for storing backups, upload the archive there.
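For example, assuming the aws CLI, the code-backup bucket from the previous steps, and the default backup directory (the endpoint URL is a placeholder; the chown follows standard Omnibus practice, where the archive must be readable by the git user):

```shell
aws --endpoint-url <s3-endpoint> s3 cp s3://code-backup/<backup_id>_gitlab_backup.tar /var/opt/gitlab/backups/
chown git:git /var/opt/gitlab/backups/<backup_id>_gitlab_backup.tar
```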
  7. Restore the gitlab-secrets.json file:

    • Create a copy of the current gitlab-secrets.json file from your Linux/Docker installation. To do this, copy the file /etc/gitlab/gitlab-secrets.json to the administrator’s host.

    • Retrieve rails-secrets.json from Code. Run the following command on the host where the kubectl command is executed for the Deckhouse cluster:

      kubectl -n d8-code get secret rails-secret-v1 -ojsonpath='{.data.secrets\.yml}' | yq '@base64d | from_yaml | .production' -o json > rails-secrets.json
    • Move the created rails-secrets.json file to the same directory as the gitlab-secrets.json file, and run the command:

      yq eval-all 'select(filename == "gitlab-secrets.json").gitlab_rails = select(filename == "rails-secrets.json") | select(filename == "gitlab-secrets.json")' -ojson gitlab-secrets.json rails-secrets.json > gitlab-secrets-updated.json
    • Replace the current gitlab-secrets.json file with the generated gitlab-secrets-updated.json. To do this, copy gitlab-secrets-updated.json to the host/container where GitLab is running, overwriting the file /etc/gitlab/gitlab-secrets.json:

      cp gitlab-secrets-updated.json /etc/gitlab/gitlab-secrets.json
  8. Restore the SSH host keys:

    • On the host where the kubectl command is executed for the Deckhouse cluster, run the following commands:

      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ecdsa_key}' | base64 -d > ssh_host_ecdsa_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ecdsa_key\.pub}' | base64 -d > ssh_host_ecdsa_key.pub
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ed25519_key}' | base64 -d > ssh_host_ed25519_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ed25519_key\.pub}' | base64 -d > ssh_host_ed25519_key.pub
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_rsa_key}' | base64 -d > ssh_host_rsa_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_rsa_key\.pub}' | base64 -d > ssh_host_rsa_key.pub
    • Copy the generated files to the host with your Linux/Docker installation into the /etc/gitlab directory:

      cp ssh_host_* /etc/gitlab/
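sshd requires restrictive permissions on private host keys; a quick way to tighten them after the copy (assuming the file names generated above):

```shell
chmod 600 /etc/gitlab/ssh_host_ecdsa_key /etc/gitlab/ssh_host_ed25519_key /etc/gitlab/ssh_host_rsa_key
chmod 644 /etc/gitlab/ssh_host_*.pub
```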
  9. After replacing the gitlab-secrets.json file, run the command:

    gitlab-ctl reconfigure
  10. Once the command completes, start the restore process:

    gitlab-backup restore BACKUP=<timestamp>

    For example:

    gitlab-backup restore BACKUP=1742909494_2025_03_25_17.8.1
  11. After restoring, restart GitLab and check its status:

    gitlab-ctl restart
    gitlab-rake gitlab:check SANITIZE=true
  12. Verify there are no pending migrations:

    gitlab-rake db:migrate:status