This is a preliminary version. The functionality may change, but the basic features will be preserved. Compatibility with future versions is ensured but may require additional migration actions.
Security
How do we secure communication between components?
We use TLS encryption for all internal communications between GitLab services within the Kubernetes cluster. We also ensure that any external access is secured with HTTPS or another secure protocol.
How do we secure metrics scraping?
We secure metrics scraping with a kube-rbac-proxy sidecar container alongside Kubernetes RBAC. kube-rbac-proxy acts as an authentication and authorization layer, ensuring that only requests with valid permissions can access the metrics endpoints.
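For illustration, with kube-rbac-proxy a scraper commonly needs RBAC permission on the `/metrics` non-resource URL. Below is a minimal sketch, assuming a Prometheus ServiceAccount named `prometheus` in a `monitoring` namespace (both names are placeholders, not the module's actual objects):

```yaml
# Minimal RBAC sketch for a scraper talking to kube-rbac-proxy; names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
  - kind: ServiceAccount
    name: prometheus        # placeholder scraper identity
    namespace: monitoring   # placeholder namespace
```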
What TLS encryption is supported?
- TLS 1.2 or higher is required for all incoming and outgoing TLS connections.
- TLS certificates must provide at least 112 bits of security. RSA, DSA, and DH keys shorter than 2048 bits, and ECC keys shorter than 224 bits, are considered insecure and are prohibited (a quick check is sketched below).
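To spot-check an endpoint against these requirements, you can, for example, force a TLS 1.2 handshake and inspect the certificate key size with openssl; the hostname below is a placeholder:

```shell
# Force TLS 1.2 and print the public key size and signature algorithm.
openssl s_client -connect gitlab.example.com:443 -tls1_2 < /dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -E 'Public-Key|Signature Algorithm'
```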
Update policy
- Every major version change of the module can change the GitLab major version (e.g., 17 -> 18).
- Every minor version change of the module can change the GitLab minor or patch version (e.g., 17.3 -> 17.4, 17.3.0 -> 17.3.6).
- You can see the full correspondence between module versions and GitLab versions in the Description section.
Gitaly-related topics
Why is the service so privileged?
Gitaly uses the cgroup mechanism to control resource consumption during operations on Git repositories. For this to work in Kubernetes, the pod must be allowed to write to its cgroup on the host path `/sys/fs/cgroup` (a pod spec sketch follows the capability list below).
Capabilities:
- `SETUID`, `SETGID`, `CHOWN`: used by the init container to set the user `git:git` in the pod cgroup
- `FOWNER`: to bypass checks when the Git data directory has an owner mismatch
- `SYS_ADMIN`: required for mounting cgroupfs and working with cgroup hierarchies
- `SYS_RESOURCE`: required to work with cgroup resource limits (CPU, memory, IO)
- `SYS_PTRACE`: to make syscalls like `process_vm_readv`/`process_vm_writev`
- `KILL`: required for terminating or signaling processes in managed cgroups
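For orientation, here is a minimal sketch of how these capabilities and the cgroup host path could be wired into a pod spec; the fragment is illustrative and not the module's exact manifest:

```yaml
# Illustrative pod spec fragment; not the module's actual deployment.
apiVersion: v1
kind: Pod
metadata:
  name: gitaly-example
spec:
  containers:
    - name: gitaly
      image: gitaly:example        # placeholder image
      securityContext:
        capabilities:
          add:
            - SETUID
            - SETGID
            - CHOWN
            - FOWNER
            - SYS_ADMIN
            - SYS_RESOURCE
            - SYS_PTRACE
            - KILL
      volumeMounts:
        - name: cgroup
          mountPath: /sys/fs/cgroup   # writable so Gitaly can manage its cgroups
  volumes:
    - name: cgroup
      hostPath:
        path: /sys/fs/cgroup
        type: Directory
```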
How to refresh a Gitaly replica?
Cases:
- Refill data on a Gitaly node after PV recreation
- Manually update a node when its data is out of date
To refresh a specific Gitaly node, run:
```shell
kubectl exec -i -t -n d8-code praefect-0 -c praefect -- \
  praefect -config /etc/gitaly/config.toml verify \
  --virtual-storage <virtual_storage> --storage <gitaly_pod_name>
```
All repository data on `<gitaly_pod_name>` will be marked as unverified to prioritize reverification. Reverification runs asynchronously in the background.
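After triggering reverification, you can inspect the state of the virtual storage with Praefect's `dataloss` subcommand; this is a sketch under the same config path assumption as above:

```shell
# Report unavailable or out-of-date repositories on the virtual storage.
kubectl exec -i -t -n d8-code praefect-0 -c praefect -- \
  praefect -config /etc/gitaly/config.toml dataloss --virtual-storage <virtual_storage>
```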
Network configuration
How to use `ownLoadBalancer` with a reserved public IPv4 address in Yandex Cloud
To assign a reserved public IPv4 address to your `ownLoadBalancer`, specify the following annotation when creating or updating your resource.
Example:
```yaml
network:
  ownLoadBalancer:
    annotations:
      yandex.cpi.flant.com/listener-address-ipv4: xx.xx.xx.xx
    enabled: true
```
Important: The specified IPv4 address must be pre-reserved in your Yandex Cloud project via the Console or CLI.
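For example, a static address can be reserved in advance with the Yandex Cloud CLI; the name and zone below are placeholders, and the exact flags should be verified against `yc vpc address create --help`:

```shell
# Reserve a static external IPv4 address to use in the annotation above.
yc vpc address create --name code-lb-ip \
  --external-ipv4 zone=ru-central1-a
```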
Module deletion
You can fully clean up the cluster from the module in two steps:
- Disable it following the same steps as for any other module in the Deckhouse Kubernetes Platform:
  - annotate the ModuleConfig with `modules.deckhouse.io/allow-disable: "true"` to bypass deckhouse-controller errors;
  - change the `enabled` flag in the ModuleConfig from `true` to `false`.
- Delete the namespace, as it may have some secrets/configmaps left:

  ```shell
  kubectl delete ns d8-code
  ```
Keep in mind to save `secrets/rails-secret` prior to deletion; otherwise, you will be unable to fully restore from existing backups in the future.
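A hedged end-to-end sketch of the procedure above. The ModuleConfig name `code` is an assumption inferred from the `d8-code` namespace; verify it with `kubectl get moduleconfigs` before running:

```shell
# Save the Rails secret first; without it, existing backups cannot be fully restored.
kubectl -n d8-code get secret rails-secret -o yaml > rails-secret-backup.yaml

# Allow disabling the module (bypasses deckhouse-controller errors).
kubectl annotate moduleconfig code modules.deckhouse.io/allow-disable="true"

# Disable the module by setting spec.enabled to false.
kubectl patch moduleconfig code --type merge -p '{"spec":{"enabled":false}}'

# Remove the namespace with any leftover secrets/configmaps.
kubectl delete ns d8-code
```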