The module is at the Preview lifecycle stage and has requirements for installation.

TrinoClass

TrinoClass is a cluster-wide resource that prevents invalid configurations from being created and allows pre-defining certain values. Every Trino resource must be associated with an existing TrinoClass. Before a service is deployed, the entire configuration is validated against the corresponding TrinoClass.

Sizing Policies

Sizing policies let you define a set of allowed resource configurations for associated Trino instances. This helps avoid uneven CPU and memory distribution across cluster nodes. The policy is selected by finding the one whose cores range contains the requested core count. Once a policy is matched, all other fields are validated against it.

spec:
  sizingPolicies:
    - cores:
        min: 1
        max: 4
      memory:
        min: 100Mi
        max: 1Gi
        step: 1Mi
      coreFractions: ["10%", "30%", "50%"]
    - cores:
        min: 5
        max: 10
      memory:
        min: 500Mi
        max: 2Gi
      coreFractions: ["50%", "70%", "100%"]
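The matching logic can be sketched in Python (a minimal sketch with hypothetical helper names, not the operator's actual code; the step semantics — allowed sizes are min, min+step, min+2·step, and so on — and the default step when it is omitted are assumptions):

```python
def parse_mib(quantity):
    # Parse a Kubernetes-style quantity ("100Mi", "1Gi") into MiB (Mi/Gi only).
    if quantity.endswith("Gi"):
        return int(quantity[:-2]) * 1024
    if quantity.endswith("Mi"):
        return int(quantity[:-2])
    raise ValueError(f"unsupported quantity: {quantity}")

def select_policy(policies, cores):
    # A policy matches when the requested core count falls within its cores range.
    for policy in policies:
        if policy["cores"]["min"] <= cores <= policy["cores"]["max"]:
            return policy
    return None

def validate(policy, memory_mib, core_fraction):
    # Once a policy is matched, all other fields are validated against it.
    mem = policy["memory"]
    low, high = parse_mib(mem["min"]), parse_mib(mem["max"])
    step = parse_mib(mem.get("step", "1Mi"))   # assumed default when step is omitted
    if not (low <= memory_mib <= high):
        return False
    if (memory_mib - low) % step != 0:         # assumed: sizes advance from min by step
        return False
    return core_fraction in policy["coreFractions"]

policies = [
    {"cores": {"min": 1, "max": 4},
     "memory": {"min": "100Mi", "max": "1Gi", "step": "1Mi"},
     "coreFractions": ["10%", "30%", "50%"]},
    {"cores": {"min": 5, "max": 10},
     "memory": {"min": "500Mi", "max": "2Gi"},
     "coreFractions": ["50%", "70%", "100%"]},
]

matched = select_policy(policies, 3)    # 3 cores -> first policy (1..4 cores)
print(validate(matched, 512, "30%"))    # True
print(validate(matched, 512, "70%"))    # False: fraction not allowed by this policy
```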

Validation Rules

CEL (Common Expression Language) is used for flexible custom validation rules. The following pre-defined variables are available in rule expressions:

  • instance.memory.size (int) — requested memory size in bytes
  • instance.cpu.cores (int) — requested number of CPU cores

spec:
  validations:
    - message: "instance.memory.size should be more than 2Gi"
      rule: "instance.memory.size > 2 * 1024 * 1024 * 1024"
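CEL evaluation itself is not shown here; as an illustration, the rule above has a direct Python equivalent (variable names mirror the pre-defined CEL variables; the instance values are made up):

```python
# Illustrative instance, mirroring the pre-defined CEL variables.
instance = {"memory": {"size": 4 * 1024 ** 3},  # 4Gi in bytes
            "cpu": {"cores": 2}}

def check_rule(instance):
    # Python equivalent of the CEL rule:
    # instance.memory.size > 2 * 1024 * 1024 * 1024
    if instance["memory"]["size"] > 2 * 1024 * 1024 * 1024:
        return None  # rule passes
    return "instance.memory.size should be more than 2Gi"

print(check_rule(instance))  # None: 4Gi passes the rule
print(check_rule({"memory": {"size": 1024 ** 3}, "cpu": {"cores": 1}}))
```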

Default Values

The Trino Operator automatically computes default configuration values based on the following logic.

Memory and CPU Parameter Calculation

The operator derives key Trino tuning parameters from the pod’s resource limits (instance.memory.size and instance.cpu.cores).

Memory (instance.memory.size, default 8 GiB):

| Trino parameter | Formula | Description |
|---|---|---|
| -Xmx (JVM heap) | 80% × memory | The remaining 20% is left for the OS and JVM off-heap structures |
| memory.heap-headroom-per-node | 30% × Xmx | Reserved for internal allocations not tracked by Trino (readers, writers, etc.) |
| query.max-memory-per-node | Xmx − heap-headroom | Memory available for query execution on a single node |
| query.max-memory | = query.max-memory-per-node | In standalone mode the cluster has exactly one node, so the limit equals the per-node value |

CPU (instance.cpu.cores, default 4):

| Trino parameter | Formula | Description |
|---|---|---|
| task.concurrency | Largest power of 2 ≤ cores | Trino requires a power of 2; e.g. 6 cores → 4 |
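Put together, the derivation might look like this (a sketch following the formulas above; the exact rounding used by the operator is an assumption):

```python
def trino_params(memory_bytes, cores):
    # Derive Trino tuning parameters from pod resource limits
    # (rounding down at each step is an assumption).
    xmx_mib = int(memory_bytes * 0.8) // (1024 * 1024)   # -Xmx: 80% of memory
    headroom_mib = int(xmx_mib * 0.3)                    # 30% of Xmx
    per_node_mib = xmx_mib - headroom_mib                # Xmx - heap-headroom
    concurrency = 1
    while concurrency * 2 <= cores:                      # largest power of 2 <= cores
        concurrency *= 2
    return {
        "-Xmx": f"{xmx_mib}M",
        "memory.heap-headroom-per-node": f"{headroom_mib}MB",
        "query.max-memory-per-node": f"{per_node_mib}MB",
        "query.max-memory": f"{per_node_mib}MB",         # standalone: single node
        "task.concurrency": concurrency,
    }

# Defaults: 8 GiB of memory, 4 CPU cores.
params = trino_params(8 * 1024 ** 3, 4)
print(params["-Xmx"])               # 6553M
print(params["task.concurrency"])   # 4
```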

Fixed Configuration Parameters

config.properties — parameters that do not depend on resources:

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
discovery.uri=http://localhost:8080

node.properties:

node.environment=production
node.data-dir=/data/trino
plugin.dir=/usr/lib/trino/plugin
node.version=<version bundled in the image>

jvm.config — JVM flags (order is fixed; Trino is sensitive to flag sequence):

-server
-agentpath:/usr/lib/trino/bin/libjvmkill.so
-Xmx<calculated>M
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-XX:ReservedCodeCacheSize=512M
-XX:PerMethodRecompilationCutoff=10000
-XX:PerBytecodeRecompilationCutoff=10000
-Djdk.attach.allowAttachSelf=true
-Djdk.nio.maxCachedBufferSize=2000000
-XX:+EnableDynamicAgentLoading
-XX:+UnlockDiagnosticVMOptions
-XX:G1NumCollectionsKeepPinned=10000000
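Since only -Xmx depends on the computed heap size, rendering jvm.config can be as simple as substituting it into a fixed, ordered template (a sketch; the function name is made up, not the operator's actual code):

```python
# Ordered template for jvm.config; only the -Xmx line is parameterized.
# The order is preserved because Trino is sensitive to flag sequence.
JVM_CONFIG_TEMPLATE = """\
-server
-agentpath:/usr/lib/trino/bin/libjvmkill.so
-Xmx{xmx_mib}M
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-XX:ReservedCodeCacheSize=512M
-XX:PerMethodRecompilationCutoff=10000
-XX:PerBytecodeRecompilationCutoff=10000
-Djdk.attach.allowAttachSelf=true
-Djdk.nio.maxCachedBufferSize=2000000
-XX:+EnableDynamicAgentLoading
-XX:+UnlockDiagnosticVMOptions
-XX:G1NumCollectionsKeepPinned=10000000
"""

def render_jvm_config(xmx_mib):
    # Substitute the calculated heap size into the template.
    return JVM_CONFIG_TEMPLATE.format(xmx_mib=xmx_mib)

lines = render_jvm_config(6553).splitlines()
print(lines[2])   # -Xmx6553M
```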

Calculation Example: 16 GiB / 6 CPU

| Parameter | Value |
|---|---|
| -Xmx | 16 GiB × 0.8 = 12.8 GiB (13107 MiB) |
| memory.heap-headroom-per-node | 13107 × 0.3 ≈ 3932 MiB |
| query.max-memory-per-node | 13107 − 3932 = 9175 MiB |
| query.max-memory | 9175 MiB |
| task.concurrency | 4 (largest power of 2 ≤ 6) |

Affinity

Standard Kubernetes mechanism that constrains which nodes a pod can be scheduled on, based on node labels.

spec:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "node.deckhouse.io/group"
          operator: "In"
          values:
          - "trino"

Tolerations

Standard Kubernetes mechanism that allows pods to be scheduled on nodes with matching taints.

spec:
  tolerations:
  - key: primary-role
    operator: Equal
    value: trino
    effect: NoSchedule

Node Selector

Standard Kubernetes mechanism, the simplest way to constrain pods to nodes whose labels match.

spec:
  nodeSelector:
    "node.deckhouse.io/group": "trino"

Usage Examples

Basic Usage

apiVersion: managed-services.deckhouse.io/v1alpha1
kind: TrinoClass
metadata:
  name: trinoclass-example  # example name
spec:
  sizingPolicies:
    - cores:
        min: 2
        max: 4
      memory:
        min: 2Gi
        max: 8Gi
        step: 1Gi
      coreFractions:
        - "25%"
        - "50%"
        - "75%"
        - "100%"
    - cores:
        min: 5
        max: 8
      memory:
        min: 8Gi
        max: 16Gi
        step: 1Gi
      coreFractions:
        - "25%"
        - "50%"
        - "75%"
        - "100%"

  validations:
    - message: "instance.memory.size should be more than 2Gi"
      rule: "instance.memory.size > 2 * 1024 * 1024 * 1024"