Preliminary version. Functionality may change, but the basic features will be preserved. Compatibility with future versions is ensured but may require additional migration steps.

Scaling is driven by the CodeInstance parameter scaling.targetUserCount: based on its value, the module adjusts the HPA, the PDB, and the CPU and memory requests/limits of each component.
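For example, a CodeInstance targeting 500 users might be configured as in the sketch below. Only the CodeInstance kind and the scaling.targetUserCount parameter come from this document; the apiVersion, metadata, and the exact nesting under spec are illustrative assumptions.

```yaml
# Illustrative CodeInstance excerpt. Only the CodeInstance kind and
# scaling.targetUserCount are documented here; the apiVersion, metadata,
# and nesting under spec are assumptions.
apiVersion: code.example.com/v1   # assumed API group/version
kind: CodeInstance
metadata:
  name: code
spec:
  scaling:
    # The module derives HPA, PDB, and per-component CPU/memory
    # requests and limits from this value (see the capacity table below).
    targetUserCount: 500
```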

Warning! “Burstable” cloud instance types are not recommended due to their inconsistent performance.

Architecture reference

10 Users

A single, non-HA installation intended for demo purposes only. Load is not measured for this configuration, and no reliability or redundancy guarantees apply.

Resource capacity table

All calculations are based on the scaling.targetUserCount parameter.

The table below lists the resources needed for a single replica of each component. For HA mode, plan for at least twice this capacity; for example, at 500 users the minimal component set in HA mode needs at least 2 × 8 CPU / 15.5 Gb of requests.

| Users / Component | 100 | 300 | 500 | 1000 |
| --- | --- | --- | --- | --- |
| Webservice default | 2CPU / 4Gb<br>2 worker, 8 thread | 3CPU / 6Gb<br>3 worker, 8 thread | 3CPU / 6Gb<br>3 worker, 8 thread | 4CPU / 8Gb<br>4 worker, 8 thread |
| Webservice internal | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread | 2CPU / 4Gb<br>2 worker, 8 thread |
| Sidekiq | 0.25CPU / 1.5Gb<br>1.5CPU / 3Gb | 0.25CPU / 1.5Gb<br>1.5CPU / 3Gb | 0.25CPU / 1.5Gb<br>1.5CPU / 3Gb | 0.25CPU / 1.5Gb<br>1.5CPU / 3Gb |
| Shell | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb | 0.01CPU / 24Mb<br>0.5CPU / 600Mb |
| Toolbox | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb | 0.05CPU / 350Mb<br>1CPU / 2Gb |
| Praefect | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb | 0.1CPU / 128Mb<br>0.3CPU / 600Mb |
| Gitaly | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 1.5CPU / 2Gb<br>1.5CPU / 2Gb | 2CPU / 4Gb<br>2CPU / 4Gb |
| MRA | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb | 0.5CPU / 250Mb<br>1CPU / 500Mb |
| Code-operator | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb | 0.02CPU / 128Mb<br>1CPU / 256Mb |
| Registry* | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb | 1CPU / 1Gb<br>1.5CPU / 2Gb |
| Pages* | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb | 0.9CPU / 1Gb<br>1.5CPU / 2Gb |
| Mailroom* | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb | 0.05CPU / 150Mb<br>0.25CPU / 0.5Gb |
| HAProxy* | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb | 0.25CPU / 128Mb<br>0.5CPU / 256Mb |
| Total (Min components) | 7CPU / 13Gb<br>11CPU / 17Gb | 8CPU / 15.5Gb<br>12CPU / 19.5Gb | 8CPU / 15.5Gb<br>12CPU / 19.5Gb | 9.5CPU / 17.5Gb<br>13CPU / 21.5Gb |
| Total (All components) | 9CPU / 15Gb<br>14CPU / 22Gb | 10CPU / 17.5Gb<br>15CPU / 24Gb | 10CPU / 17.5Gb<br>15CPU / 24Gb | 11.5CPU / 19.5Gb<br>16CPU / 26Gb |

Optional components are marked with *. The first line in each cell is the Kubernetes resource request; the second line is the limit. For the Webservice rows, the second line shows the worker/thread configuration instead.
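To relate these figures to Kubernetes objects: the two values in each cell correspond to a container's resources.requests and resources.limits. A minimal sketch for the Sidekiq figures from the table (the surrounding container definition is omitted, and the table's "Gb" is interpreted here as GiB):

```yaml
# Sidekiq values from the table expressed as a standard Kubernetes
# container resources block.
resources:
  requests:
    cpu: 250m        # 0.25 CPU
    memory: 1536Mi   # 1.5 Gb
  limits:
    cpu: 1500m       # 1.5 CPU
    memory: 3Gi      # 3 Gb
```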

Tuning Git storage

If you see a non-empty ‘OOM events’ table in Grafana or a firing GitalyCgroupMemoryOOM alert in Prometheus, you likely need to adjust the memory and CPU resources for Gitaly. Increase the instanceSpec.gitData.resources in your configuration (e.g., set memory to 16Gi and CPU to 4). After updating, apply the changes and monitor Gitaly for improvements.
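A minimal sketch of such an override is shown below. Only the instanceSpec.gitData.resources path and the 16Gi / 4 CPU values come from this section; whether the value populates the requests, the limits, or both is an assumption of this sketch.

```yaml
# Hypothetical excerpt raising Gitaly resources. Only the
# instanceSpec.gitData.resources path and the 16Gi / 4 CPU values come
# from this guide; splitting them into requests and limits is an assumption.
instanceSpec:
  gitData:
    resources:
      requests:
        cpu: "4"
        memory: 16Gi
      limits:
        cpu: "4"
        memory: 16Gi
```

After applying the change, keep watching the Grafana "OOM events" table and the GitalyCgroupMemoryOOM alert to confirm that the OOM events stop.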