The module lifecycle stage: General Availability
## Migration from Omnibus to Code
Notice: until step 11 is completed, the operator may be in a failure state.
What the backup archive does not include:

- Omnibus backup archives do not include `/etc/gitlab/gitlab.rb` and `/etc/gitlab/gitlab-secrets.json`.
- Omnibus backup archives do not include SSH and TLS host keys.
- For Omnibus, object storage data is not included in the archive automatically.

Back up and migrate these data types separately (the steps below already include this flow for object storage, secrets, and SSH keys).
1. If GitLab uses local file storage for the Docker Registry, migrate the data to an S3 bucket that connects to Code, according to this guide:

   - If you are using the AWS CLI, run:

     ```shell
     aws --endpoint-url <https://your-object-storage-backend.com> s3 sync registry s3://mybucket
     ```

   - If you are using the `s3cmd` utility, run `s3cmd --configure` to set up the connection configuration for S3. Then use the following command to transfer data to the S3 bucket connected to Code:

     ```shell
     s3cmd sync registry s3://mybucket
     ```

   Here `registry` is the path to the Docker Registry storage folder in the file system.
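Before running the transfer for real, it can help to assemble the command and review it first. A minimal sketch; `build_sync_cmd` is a hypothetical helper, not part of the AWS CLI, and all values are examples:

```shell
#!/bin/sh
# Hypothetical helper: build the "aws s3 sync" command used above so it
# can be printed and reviewed before execution. All values are examples.
build_sync_cmd() {
  endpoint="$1"; src="$2"; bucket="$3"
  printf 'aws --endpoint-url %s s3 sync %s s3://%s\n' "$endpoint" "$src" "$bucket"
}

# Print the command for review; run it manually once it looks right.
build_sync_cmd "https://your-object-storage-backend.com" "registry" "mybucket"
```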
2. Migrate blob objects that use S3 Object Storage to the S3 buckets connected to the Code operator, according to this guide:

   - Configure two connections in `rclone`, `old` and `new`, where:
     - `old` is the connection to the S3 storage linked to GitLab.
     - `new` is the connection to the S3 storage bucket in the Code operator.
   - Run the command:

     ```shell
     rclone sync -P old:BUCKET_NAME new:BUCKET_NAME
     ```
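When several buckets need to be moved, the same `rclone sync` invocation can be wrapped in a loop. A sketch under the assumption that the `old`/`new` remotes from the step above are already configured; the bucket names are illustrative, and the commands are only printed here so the sketch is safe to run as-is:

```shell
#!/bin/sh
# Sketch: print one "rclone sync" command per bucket for review.
# Bucket names are examples; adjust them to the instance's real buckets.
sync_commands() {
  for bucket in "$@"; do
    printf 'rclone sync -P old:%s new:%s\n' "$bucket" "$bucket"
  done
}

sync_commands artifacts lfs-objects uploads
```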
3. Create a backup archive according to this guide. If objects were migrated in step 2, add the `SKIP` option for those components. If Object Storage is not used, `SKIP` can be omitted. Example:

   ```shell
   sudo gitlab-backup create
   ```
4. Upload the created backup archive to the administrator's computer, then upload it to the S3 storage for backups specified in the `CodeInstance` for Code, under the `bucketName` field:

   ```yaml
   backup:
     enabled: true
     s3:
       bucketName: d8-code-backup
       tmpBucketName: d8-code-backup-tmp
   ```

   The S3 bucket `d8-code-backup` should contain the backup archive, for example: `1742909494_2025_03_25_17.8.1_gitlab_backup.tar`. `tmpBucketName` is used as temporary storage during backup and restore operations and should be configured as a separate bucket with a short retention policy.
5. Create secrets in the `d8-code` namespace and place the corresponding values into these secrets.
6. Generate the `CodeInstance` manifest and configure the `backup` block:

   ```yaml
   backup:
     enabled: true
     s3:
       bucketName: code-backup
       tmpBucketName: code-backup-tmp
       external:
         accessKey: S3AccessKey
         provider: YCloud | Generic | AWS | AzureRM
         secretKey: S3SecretKey
       mode: External
   ```
### Restore preconditions
- The source backup and the target GitLab deployment must use the same GitLab version.
- `toolbox` must be enabled and running.
- Target database clients must be stopped before the restore (the next step scales workloads to zero).
- The backup archive name must stay in the `<backup_id>_gitlab_backup.tar` format.
- If the backup was created in Omnibus, you may see errors about creating or using an old PostgreSQL role from the backup and/or creating PostgreSQL extensions. These messages are usually non-critical and do not break the restore or post-restore operation.
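The archive-name precondition can be checked mechanically before starting a restore. A small sketch, not part of `backup-utility` or any official tooling, that validates the `<backup_id>_gitlab_backup.tar` format:

```shell
#!/bin/sh
# Sketch: return 0 when the file name ends with the required suffix and
# has a non-empty <backup_id> prefix, 1 otherwise.
is_valid_backup_name() {
  case "$1" in
    _gitlab_backup.tar)  return 1 ;;   # empty <backup_id>
    *_gitlab_backup.tar) return 0 ;;
    *)                   return 1 ;;
  esac
}

is_valid_backup_name "1742909494_2025_03_25_17.8.1_gitlab_backup.tar" && echo "valid"
```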
### Backup/restore command matrix
- Create a backup from Toolbox: `backup-utility` (optionally with `--skip ...` selected manually).
- Restore by backup ID from the backup bucket: `backup-utility --restore -t <backup_id>`.
- Restore by URL or local file path: `backup-utility --restore -f <URL|file:///path/to/<backup_id>_gitlab_backup.tar>`.
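The matrix above can be folded into a single wrapper that picks the right flag. A sketch; `restore_cmd` is a hypothetical helper that only prints the command it would run, so nothing is executed against a real instance:

```shell
#!/bin/sh
# Sketch: choose -f for URLs/local file paths and -t for bare backup IDs,
# mirroring the command matrix above. Prints instead of executing.
restore_cmd() {
  case "$1" in
    http://*|https://*|file://*) printf 'backup-utility --restore -f %s\n' "$1" ;;
    *)                           printf 'backup-utility --restore -t %s\n' "$1" ;;
  esac
}

restore_cmd "1742909494_2025_03_25_17.8.1"
restore_cmd "file:///backups/1742909494_2025_03_25_17.8.1_gitlab_backup.tar"
```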
7. Scale all deployments down to 0:

   ```shell
   kubectl -n d8-code scale --replicas=0 deploy/sidekiq-default
   kubectl -n d8-code scale --replicas=0 deploy/webservice-default
   ```
8. Connect to the `toolbox` pod on the master host of the Deckhouse cluster. Make sure `toolbox` is enabled. Example:

   ```shell
   /opt/deckhouse/bin/kubectl -n d8-code exec -it -c toolbox toolbox-64d4fb84cf-b7dwb -- bash
   ```
9. Inside the `toolbox`, run:

   ```shell
   backup-utility --restore -t <backup_timestamp>
   ```

   For example:

   ```shell
   backup-utility --restore -t 1742909494_2025_03_25_17.8.1
   ```
10. To apply migrations related to Code itself, run:

    ```shell
    cd /srv/gitlab
    gitlab-rake db:migrate
    ```

11. Create a service account for the operator. On your workstation, run:

    ```shell
    SERVICE_ACCOUNT_PAT_TOKEN=$(kubectl -n d8-code get secret code-service-account -o jsonpath='{ .data.api-token }' | base64 -d)
    ```

    Inside `toolbox`, run:

    ```shell
    cd /srv/gitlab
    export SERVICE_ACCOUNT_PAT_TOKEN=<token>
    gitlab-rake gitlab:generate:service_account_with_token
    ```

12. Scale all deployments up:

    ```shell
    kubectl -n d8-code scale --replicas=1 deploy/sidekiq-default
    kubectl -n d8-code scale --replicas=1 deploy/webservice-default
    ```
13. Wait for the operator to restart the pods.
14. Verify that the restore and migrations completed successfully:

    ```shell
    kubectl -n d8-code get pods
    kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- gitlab-rake db:migrate:status
    kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- gitlab-rake gitlab:check SANITIZE=true
    ```

    The restore is considered successful when all pods are Ready, there are no pending migrations, and `gitlab:check` finishes without critical errors.
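`db:migrate:status` prints one row per migration whose status column is `up` or `down`; "no pending migrations" means no `down` rows. A sketch of that check against sample output (the sample rows below are illustrative, not captured from a real instance):

```shell
#!/bin/sh
# Sketch: count "down" (pending) rows in db:migrate:status output read
# from stdin. "|| true" keeps the exit status 0 when the count is zero.
pending_migrations() {
  grep -c '^[[:space:]]*down' || true
}

# Illustrative sample of the status table:
printf '   up     20250101120000  Add index\n  down    20250202130000  New table\n' | pending_migrations  # prints "1"
```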
## Migration from Code to Omnibus
1. Scale all deployments down to 0:

   ```shell
   kubectl -n d8-code scale --replicas=0 deploy/sidekiq-default
   kubectl -n d8-code scale --replicas=0 deploy/webservice-default
   ```
2. Roll back Code migrations from the database. Enter the `toolbox` pod and run the following command from the `/srv/gitlab` directory:

   ```shell
   bundle exec rake fe:db:migrations:rollback
   ```

   Note: `toolbox` is an optional component (enabled by default). If the `toolbox` pod is missing, make sure it is enabled in `CodeInstance` (`spec.features.toolbox.enabled: true`) and wait for reconciliation.
3. Use the `toolbox` pod and the built-in `backup-utility` tool to create a backup archive:

   ```shell
   kubectl -n d8-code exec -it -c toolbox deploy/toolbox -- backup-utility
   ```

   If the backup is triggered manually, select the `--skip` arguments yourself when some data classes are already migrated or handled outside the archive.
4. Wait for the backup creation process to complete. Upon completion, the backup is saved in the S3 storage specified in the `CodeInstance` under the `backup` section:

   ```yaml
   backup:
     enabled: true
     s3:
       bucketName: code-backup
       tmpBucketName: code-backup-tmp
       external:
         accessKey: S3AccessKey
         provider: YCloud | Generic | AWS | AzureRM
         secretKey: S3SecretKey
       mode: External
   ```

   The name of the created archive follows the format `<timestamp>_gitlab_backup.tar`. Example: `1742909494_2025_03_25_17.8.1_gitlab_backup.tar`.
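Since `gitlab-backup restore` (used later in this procedure) expects `BACKUP=<timestamp>` without the fixed suffix, the value can be derived from the archive name. A sketch using plain parameter expansion; the helper name is illustrative:

```shell
#!/bin/sh
# Sketch: strip the fixed "_gitlab_backup.tar" suffix to obtain the value
# for "gitlab-backup restore BACKUP=<timestamp>".
backup_id_from_archive() {
  printf '%s\n' "${1%_gitlab_backup.tar}"
}

backup_id_from_archive "1742909494_2025_03_25_17.8.1_gitlab_backup.tar"  # prints "1742909494_2025_03_25_17.8.1"
```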
5. Stop the `puma` and `sidekiq` services:

   ```shell
   gitlab-ctl stop puma
   gitlab-ctl stop sidekiq
   ```

   If GitLab is deployed in Docker, run these commands inside the container.
6. Download and transfer the backup archive to the directory where GitLab backups are stored:

   - By default, this is `/var/opt/gitlab/backups`.
   - If a different directory is used, move the backup there.
   - If S3 Object Storage is used for storing backups, upload the archive there.
7. Restore the `gitlab-secrets.json` file:

   1. Create a copy of the current `gitlab-secrets.json` file from your Linux/Docker installation. To do this, copy the file `/etc/gitlab/gitlab-secrets.json` to the administrator's host.

   2. Retrieve `rails-secrets.json` from Code. Run the following command on the host where `kubectl` is executed for the Deckhouse cluster:

      ```shell
      kubectl -n d8-code get secret rails-secret-v1 -ojsonpath='{.data.secrets\.yml}' | yq '@base64d | from_yaml | .production' -o json > rails-secrets.json
      ```

   3. Move the created `rails-secrets.json` file to the same directory as the `gitlab-secrets.json` file, and run:

      ```shell
      yq eval-all 'select(filename == "gitlab-secrets.json").gitlab_rails = select(filename == "rails-secrets.json") | select(filename == "gitlab-secrets.json")' -ojson gitlab-secrets.json rails-secrets.json > gitlab-secrets-updated.json
      ```

   4. Replace the current `gitlab-secrets.json` file with the generated `gitlab-secrets-updated.json`. To do this, copy `gitlab-secrets-updated.json` to the host/container where GitLab is running, to the path `/etc/gitlab/gitlab-secrets.json`:

      ```shell
      cp gitlab-secrets-updated.json /etc/gitlab/gitlab-secrets.json
      ```
8. Restore the SSH host keys:

   1. On the host where `kubectl` is executed for the Deckhouse cluster, run the following commands (note the `\.` escaping of the dot in the `.pub` key names):

      ```shell
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ecdsa_key}' | base64 -d > ssh_host_ecdsa_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ecdsa_key\.pub}' | base64 -d > ssh_host_ecdsa_key.pub
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ed25519_key}' | base64 -d > ssh_host_ed25519_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_ed25519_key\.pub}' | base64 -d > ssh_host_ed25519_key.pub
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_rsa_key}' | base64 -d > ssh_host_rsa_key
      kubectl -n d8-code get secrets shell-host-keys -ojsonpath='{.data.ssh_host_rsa_key\.pub}' | base64 -d > ssh_host_rsa_key.pub
      ```

   2. Copy the generated files to the host with your Linux/Docker installation into the `/etc/gitlab` directory:

      ```shell
      cp ssh_host_* /etc/gitlab/
      ```
9. After replacing the `gitlab-secrets.json` file, run:

   ```shell
   gitlab-ctl reconfigure
   ```
10. Once the command completes, start the restore process:

    ```shell
    gitlab-backup restore BACKUP=<timestamp>
    ```

    For example:

    ```shell
    gitlab-backup restore BACKUP=1742909494_2025_03_25_17.8.1
    ```
11. After restoring, restart GitLab and check its status:

    ```shell
    gitlab-ctl restart
    gitlab-rake gitlab:check SANITIZE=true
    ```
12. Verify there are no pending migrations:

    ```shell
    gitlab-rake db:migrate:status
    ```