Preliminary version. Functionality may change, but the core features will be preserved. Compatibility with future versions is maintained, although upgrading may require additional migration steps.

Scaling is driven by the CodeInstance parameter `scaling.targetUserCount`: based on its value, the module adjusts the HPA, PDB, and CPU/memory requests and limits.
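
For reference, a minimal sketch of how this parameter might appear in a CodeInstance manifest. The `apiVersion` and `metadata` values below are placeholders, not taken from this document:

```yaml
# Minimal sketch of a CodeInstance manifest; apiVersion and
# metadata are placeholders, adjust them to your installation.
apiVersion: example.com/v1  # placeholder
kind: CodeInstance
metadata:
  name: code
spec:
  scaling:
    targetUserCount: 1000  # drives HPA, PDB, and CPU/memory requests/limits
```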

Warning! "Burstable" instance types in cloud infrastructure are not recommended due to their inconsistent performance.

Architecture reference

10 Users

A single non-HA installation intended for demo purposes only. Load is not measured at this size, and no reliability or redundancy is guaranteed.

Resource capacity table

All calculations are based on the `scaling.targetUserCount` parameter.

The table below lists the resources needed for a single replica of each component. For HA mode, provision at least twice this capacity.

| Users/Component | 100 | 300 | 500 | 1000 | 3000 | 5000 |
| --- | --- | --- | --- | --- | --- | --- |
| Webservice default | 2CPU / 4Gb<br>2 worker, 8 thread | 3CPU / 6Gb<br>3 worker, 8 thread | 3CPU / 6Gb<br>3 worker, 8 thread | 4CPU / 8Gb<br>4 worker, 8 thread | 6CPU / 12Gb<br>6 worker, 8 thread | 8CPU / 16Gb<br>8 worker, 8 thread |
| Webservice internal | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread | 3CPU / 6Gb<br>3 worker, 8 thread | 4CPU / 8Gb<br>4 worker, 8 thread |
| Sidekiq | 1CPU / 1.5Gb<br>1.5CPU / 3Gb | 1CPU / 1.5Gb<br>1.5CPU / 3Gb | 1CPU / 1.5Gb<br>1.5CPU / 3Gb | 1CPU / 1.5Gb<br>1.5CPU / 3Gb | 1CPU / 1.5Gb<br>1.5CPU / 3Gb | 1CPU / 1.5Gb<br>1.5CPU / 3Gb |
| Shell | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb |
| Toolbox | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb |
| Praefect | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.6CPU / 1200Mb | 0.1CPU / 128Mb<br>2CPU / 2Gb |
| Gitaly | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 2CPU / 4Gb<br>2CPU / 4Gb | 6CPU / 16Gb<br>6CPU / 16Gb | 8CPU / 30Gb<br>8CPU / 30Gb |
| MRA | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb |
| Code-operator | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb |
| Registry* | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 2CPU / 4Gb<br>4CPU / 8Gb |
| Pages* | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 2Gb<br>2CPU / 4Gb |
| Mailroom* | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb |
| HAProxy* | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb |
| Total (min components) | 7CPU / 13Gb<br>11CPU / 17Gb | 8CPU / 15.5Gb<br>12CPU / 19.5Gb | 8CPU / 15.5Gb<br>12CPU / 19.5Gb | 9.5CPU / 17.5Gb<br>13CPU / 21.5Gb | 16CPU / 36.5Gb<br>20.5CPU / 42Gb | 18.5CPU / 56Gb<br>21.5CPU / 63Gb |
| Total (all components) | 9CPU / 15Gb<br>14CPU / 22Gb | 10CPU / 17.5Gb<br>15CPU / 24Gb | 10CPU / 17.5Gb<br>15CPU / 24Gb | 11.5CPU / 19.5Gb<br>16CPU / 26Gb | 18.5CPU / 39Gb<br>24.5CPU / 47Gb | 25CPU / 62.5Gb<br>33.5CPU / 75Gb |

Optional components are marked with *. The first line in each cell shows Kubernetes requests, the second shows limits (for the Webservice rows, the second line shows the worker/thread configuration).

Autoscaling

Most components support horizontal autoscaling when HA mode is enabled. Every component runs at least 2 replicas; the maximum replica counts are described in the table below:

| Users/Component | 1000 | 3000 | 5000 |
| --- | --- | --- | --- |
| Webservice default | 3 | 4 | 6 |
| Webservice internal | 3 | 3 | 3 |
| Sidekiq | 4 | 8 | 12 |
| Shell | 4 | 6 | 12 |
| Registry* | 10 | 10 | 10 |
| Pages* | 3 | 6 | 6 |

Optional components are marked with *

Tuning Git storage

If you see a non-empty "OOM events" table in Grafana or a firing GitalyCgroupMemoryOOM alert in Prometheus, you likely need to increase the memory and CPU resources for Gitaly. Raise `spec.gitData.resources` in your configuration (e.g., set memory to 16Gi and CPU to 4). After updating, apply the changes and monitor Gitaly for improvements.
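
A sketch of the override described in the paragraph above, showing only the relevant fragment of the CodeInstance spec (surrounding fields omitted):

```yaml
# Fragment of the CodeInstance spec; only the Gitaly
# resource override from the example above is shown.
spec:
  gitData:
    resources:
      cpu: 4
      memory: 16Gi
```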

When scaling Git storage, the following parameter precedence applies:

  1. `spec.gitData.resources`
  2. `spec.scaling.targetUserCount`

Example 1:

`spec.gitData.resources.cpu`: 1  
`spec.gitData.resources.memory`: 1Gi  
`spec.scaling.targetUserCount`: 3000
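
In manifest form, these settings would look roughly like this (spec fragment only):

```yaml
# Example 1 as a spec fragment; both gitData resources
# are set explicitly, so they override the scaling table.
spec:
  gitData:
    resources:
      cpu: 1
      memory: 1Gi
  scaling:
    targetUserCount: 3000
```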

Based on the resource table described above, we would expect:

  • memory: 4Gi
  • cpu: 2

However, due to parameter precedence, the actual result will be:

  • memory: 1Gi
  • cpu: 1

💡 Note: When a parameter is explicitly set in `spec.gitData.resources`, it always takes precedence over the automatic calculation.


Example 2:

`spec.gitData.resources.cpu`: 5  
`spec.gitData.resources.memory`: ***intentionally omitted***  
`spec.scaling.targetUserCount`: 3000
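
The corresponding spec fragment, with memory deliberately left out:

```yaml
# Example 2 as a spec fragment; memory is intentionally
# omitted, so the value from the scaling table (4Gi for
# 3000 users) is applied, while cpu: 5 takes precedence.
spec:
  gitData:
    resources:
      cpu: 5
  scaling:
    targetUserCount: 3000
```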

Based on the resource table described above, we would expect:

  • memory: 4Gi
  • cpu: 2

However, because of parameter precedence, and since `spec.gitData.resources.memory` was not specified, the actual result will be:

  • memory: 4Gi (from the scaling table)
  • cpu: 5 (from spec.gitData.resources.cpu)

💡 Important: If any resource (CPU or memory) is not specified in `spec.gitData.resources`, the value from the scaling table is applied.