The module is available only in Deckhouse Enterprise Edition.
## Preliminary Steps

Several preparatory steps are required before installing the Deckhouse Observability Platform.
### Creating ModuleUpdatePolicy

First, define the module update policy, which sets the parameters and sources for updating the `observability-platform` module. You can do so using the ModuleUpdatePolicy resource.

1. Create the `dop-mup.yaml` file with the following content:

   ```yaml
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleUpdatePolicy
   metadata:
     name: observability-platform
   spec:
     moduleReleaseSelector:
       labelSelector:
         matchExpressions:
           - key: module
             operator: In
             values:
               - observability-platform
           - key: source
             operator: In
             values:
               - deckhouse
     releaseChannel: Alpha # Specify the desired update channel: Alpha, Beta, or Stable.
     update:
       mode: Auto # Specify the desired update mode: Auto or Manual.
   ```
   Note that you have to:

   - insert the desired update channel in the `releaseChannel` field;
   - choose the update mode (`Auto` or `Manual`) in the `mode` field.
2. Apply the settings by running the command:

   ```shell
   kubectl apply -f dop-mup.yaml
   ```
For more information about the module update policy and available parameters, refer to the Deckhouse documentation.
### Enabling the operator-ceph Module

You must enable the `operator-ceph` module for the Deckhouse Observability Platform to operate correctly; it is a mandatory dependency. Depending on the selected settings and configuration, you may need to enable other Deckhouse Kubernetes Platform modules as well. You can learn more about this in the corresponding documentation sections.
1. Create the `dop-ceph-mc.yaml` file with the following content:

   ```yaml
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: operator-ceph
   spec:
     enabled: true
   ```

2. Apply the settings by running the command:

   ```shell
   kubectl apply -f dop-ceph-mc.yaml
   ```
### Enabling additional modules

#### operator-postgres

This module deploys PostgreSQL databases in a cluster running the Deckhouse Kubernetes Platform. Install it if you plan to use an in-cluster PostgreSQL database for the Deckhouse Observability Platform.
1. Create the `dop-postgres-mc.yaml` file with the following content:

   ```yaml
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: operator-postgres
   spec:
     enabled: true
   ```

2. Apply the settings by running the command:

   ```shell
   kubectl apply -f dop-postgres-mc.yaml
   ```
#### sds

This module is necessary for creating persistent volumes (PVs) based on physical disks; it is used to establish long-term storage with Ceph. Enable it if the Deckhouse Observability Platform is deployed on the Deckhouse Kubernetes Platform running on bare metal or on virtual machines for which cloud PVs cannot be provisioned.
1. Create the `dop-sds-mc.yaml` file with the following content:

   ```yaml
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: sds-node-configurator
   spec:
     enabled: true
     version: 1
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: sds-local-volume
   spec:
     enabled: true
     version: 1
   ```

2. Apply the settings by running the command:

   ```shell
   kubectl apply -f dop-sds-mc.yaml
   ```
## Configuring Node Groups and Labels

The components of the Deckhouse Observability Platform are automatically assigned to nodes based on the names of node groups (NodeGroups) and the labels attached to them. The selection algorithm proceeds as follows: first, node groups with matching names are selected; then, node groups with matching labels. If no suitable nodes are found, the component startup is delayed until the necessary node groups are created or the necessary labels are attached to existing groups.
### Naming Node Groups

The minimum configuration requires a node group named `observability`. All platform components will run on the nodes belonging to this group. You can create additional node groups for more granular component distribution. Possible options are:

- `observability` – for user interface (UI) components and any others for which a separate group is not created.
- `observability-ceph` – for deploying Ceph components.
- `observability-metrics` – for deploying components responsible for metrics collection and processing.
- `observability-logs` – for deploying components responsible for logs collection and processing.
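As an illustration, a minimal NodeGroup manifest for the required `observability` group might look like the sketch below. It assumes static (bare-metal) nodes; the `nodeType` value and any other fields must be adapted to your environment:

```yaml
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: observability
spec:
  nodeType: Static # Assumption for bare-metal nodes; cloud setups use a different type.
```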
### Configuring Node Groups Using Labels

You can also distribute components by using labels. Attach the `dedicated/observability: ""` label to the target node group. Possible label options are:

- `dedicated/observability: ""` – for user interface (UI) components and any others for which a separate group is not created.
- `dedicated/observability-ceph: ""` – for deploying Ceph components.
- `dedicated/observability-metrics: ""` – for deploying components responsible for metrics collection and processing.
- `dedicated/observability-logs: ""` – for deploying components responsible for logs collection and processing.
### Setting Labels on an Existing Node Group

To attach a label to an existing node group, use the following command:

```shell
kubectl patch ng worker -p '{"spec":{"nodeTemplate": {"labels": {"dedicated/observability": ""}}}}' --type=merge
```

This command adds the label that ensures the correct distribution of components across the nodes.
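Equivalently, the same label can be declared directly in the node group's manifest instead of patching it. A sketch, in which the group name `worker` is illustrative and `nodeType` is an assumption to be replaced with your group's actual type:

```yaml
---
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker # Illustrative group name.
spec:
  nodeType: Static # Assumption; keep your group's actual type.
  nodeTemplate:
    labels:
      dedicated/observability: ""
```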
## Configuring StorageClass for Stateful Components

Deckhouse Observability Platform components that operate in a stateful mode require persistent volumes (PVs). When the Deckhouse Observability Platform is deployed on clusters running on bare metal or on virtual machines with no option to use cloud PVs, it is recommended to use `LocalPathProvisioner`. It enables the local disks of cluster nodes to be used for creating PVs, providing fast and efficient management of local storage.
### Configuring LocalPathProvisioner

You have to configure `LocalPathProvisioner` for the stateful Deckhouse Observability Platform components that use persistent volumes (PVs) to work properly.
#### Preparing the Block Device

Ensure that a block device of the required size is mounted at the `/opt/local-path-provisioner` directory on all nodes of the node group where the DOP components will be deployed. This device will be used for data storage, so make sure its capacity meets your requirements.
#### Creating and Applying LocalPathProvisioner

1. Create the `local-path-provisioner.yaml` file with the following content:

   ```yaml
   ---
   apiVersion: deckhouse.io/v1alpha1
   kind: LocalPathProvisioner
   metadata:
     name: localpath-node
   spec:
     nodeGroups:
       - observability
     path: /opt/local-path-provisioner
     reclaimPolicy: Delete
   ```

2. Apply the settings by running the command:

   ```shell
   kubectl apply -f local-path-provisioner.yaml
   ```
A `LocalPathProvisioner` should be created for every node group that matches by name or label. For example, these could be `observability-metrics`, `observability-logs`, or any node groups with a `dedicated/observability.*` label. Make sure to list all the necessary node groups in the `nodeGroups` field.
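For reference, a stateful workload can then request local storage through the resulting storage class. The sketch below assumes the created StorageClass carries the name of the LocalPathProvisioner resource (`localpath-node`); check the actual name with `kubectl get sc` before using it. The claim name and size are illustrative:

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-pvc # Hypothetical name, for illustration only.
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: localpath-node # Assumed to match the LocalPathProvisioner name.
  resources:
    requests:
      storage: 5Gi # Illustrative size.
```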
### Configuring Storage Class for Long-term Storage Components

The long-term storage components of the Deckhouse Observability Platform rely on persistent volumes (PVs). When the Deckhouse Observability Platform is deployed on the Deckhouse Kubernetes Platform running on bare metal or on virtual machines without access to cloud PVs, you must use the `sds` module. This module is disabled by default and must be enabled according to the instructions in the section on enabling additional modules.
1. Attach the label to the node group intended for Ceph components by executing the command:

   ```shell
   kubectl patch ng worker -p '{"spec":{"nodeTemplate": {"labels": {"storage.deckhouse.io/sds-local-volume-node": ""}}}}' --type=merge
   ```

   This command adds the label required by `sds-local-volume`.

2. Check whether block devices are available by executing the command:

   ```shell
   kubectl get bd
   ```

   Ensure that the output contains devices with `true` in the `CONSUMABLE` column. For example:

   ```console
   NAME                                           NODE               CONSUMABLE   SIZE          PATH       AGE
   dev-44587ffa2c48e7e403db6abc699cc3b809489c1d   dop-bare-metal01   true         331093016Ki   /dev/sda   3h
   dev-5f61dadefe049b620c6fc5433046cf02a80247a0   dop-bare-metal02   true         331093016Ki   /dev/sda   3h2m
   dev-665628749db2e1c93a74f3c224bb98502111fdd6   dop-bare-metal03   true         331093016Ki   /dev/sda   175m
   ```
3. Create `LvmVolumeGroups` from the available block devices on all nodes by executing the following command:

   ```shell
   kubectl get bd -o json | jq '.items | map(select(.status.consumable == true)) | reduce .[] as $bd ({}; .[$bd.status.nodeName] += [$bd.metadata.name] | .) | to_entries | reduce .[] as {$key, $value} ({apiVersion: "v1", kind: "List", items: []}; .items += [{apiVersion: "storage.deckhouse.io/v1alpha1", kind: "LvmVolumeGroup", metadata: {name: ("dop-ceph-" + $key)}, spec: {type: "Local", actualVGNameOnTheNode: "dop-ceph", blockDeviceNames: $value}}])' | kubectl apply -f -
   ```

   Check whether the creation has been successful with:

   ```shell
   kubectl get lvg
   ```

   Sample output:

   ```console
   NAME                        HEALTH        NODE               SIZE       ALLOCATED SIZE   VG         AGE
   dop-ceph-dop-bare-metal01   Operational   dop-bare-metal01   329054Mi   0                dop-ceph   37s
   dop-ceph-dop-bare-metal02   Operational   dop-bare-metal02   329054Mi   0                dop-ceph   37s
   dop-ceph-dop-bare-metal03   Operational   dop-bare-metal03   329054Mi   0                dop-ceph   37s
   ```
4. Create a `LocalStorageClass` by executing the command:

   ```shell
   kubectl get lvg -o json | jq 'reduce .items[].metadata.name as $name ({apiVersion: "storage.deckhouse.io/v1alpha1", kind: "LocalStorageClass", metadata: {name: "dop-ceph"}, spec: {reclaimPolicy: "Delete", volumeBindingMode: "WaitForFirstConsumer", lvm: {type: "Thick"}}}; .spec.lvm.lvmVolumeGroups += [{name: $name}])' | kubectl apply -f -
   ```
5. Confirm that the StorageClass has been successfully created. Ensure that there is a StorageClass named `dop-ceph` in the output of the command:

   ```shell
   kubectl get sc
   ```
## Installation Using External Authentication

The Deckhouse Observability Platform integrates with various third-party authentication systems. Keycloak and Okta are among the systems supported out of the box. To connect other systems, such as LDAP or GitLab, use an intermediary solution called Dex, which is part of the Deckhouse Kubernetes Platform. The complete list of supported systems can be found in the documentation: List of Supported Systems.
### Connecting Dex for Authentication

To connect, for example, LDAP, you first need to create a DexProvider. This allows the system to interact correctly with the third-party authentication service. Consider the following example of setting up a connection to LDAP:
1. Create the configuration file `dop-dex-provider.yaml` with the following content. Note that the example values must be replaced with real values from your system:

   ```yaml
   apiVersion: deckhouse.io/v1
   kind: DexProvider
   metadata:
     name: dop-active-directory
   spec:
     displayName: Active Directory
     ldap:
       bindDN: cn=admin,dc=example,dc=org
       bindPW: admin
       groupSearch:
         baseDN: ou=dop,dc=example,dc=org
         filter: (objectClass=groupOfNames)
         nameAttr: cn
         userMatchers:
           - groupAttr: member
             userAttr: DN
       host: ad.example.com:389
       insecureNoSSL: true
       insecureSkipVerify: true
       startTLS: false
       userSearch:
         baseDN: ou=dop,dc=example,dc=org
         emailAttr: mail
         filter: (objectClass=inetOrgPerson)
         idAttr: uidNumber
         nameAttr: cn
         username: mail
       usernamePrompt: Email Address
     type: LDAP
   ```

2. Apply the settings by running the command:

   ```shell
   kubectl apply -f dop-dex-provider.yaml
   ```
Additional examples of provider settings can be found in the Deckhouse Observability Platform documentation.
**Important:** Currently, it is not possible to configure an explicit mapping between a DexProvider and the specific applications that use it. If DexProviders are already configured in your Deckhouse Kubernetes Platform, users of the Deckhouse Observability Platform will see them in the list of available authentication methods.
## Installation in Cloud Environments

This example uses a PostgreSQL database deployed in the cluster. Enable the `operator-postgres` module to ensure proper functionality. Follow the steps below to deploy the platform to Yandex Cloud.
### Steps for Installation in Yandex Cloud

1. Prepare the configuration file.

   Create the `dop-mc.yaml` file with the settings. Ensure that all the changeable parameters, such as the domain, resource quantities, and identifiers (`random-string`), are replaced with the actual values for your environment:

   ```yaml
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: observability-platform
   spec:
     enabled: true
     settings:
       general:
         baseDomain: dop.example.com
         clusterName: dc1
         tls:
           issuer: letsencrypt
       storage:
         ceph:
           configOverride: |
             [osd]
             osd_memory_cache_min = 1Gi
             bluestore_cache_autotune = true
             bluestore_min_alloc_size = 4096
             osd_pool_default_pg_autoscale_mode = off
           mon:
             storageClass: network-ssd-nonreplicated
             storageSize: 10Gi
           osd:
             count: 3
             storageClass: network-ssd-nonreplicated
             storageSize: 80Gi
         metrics:
           defaultStorageClass: network-ssd
           etcd:
             storageSize: 5Gi
           ingester:
             resources:
               limits:
                 memory: 4Gi
               requests:
                 cpu: 1
                 memory: 4Gi
             storageSize: 10Gi
           storeGateway:
             storageSize: 10Gi
       ui:
         auth:
           mode: default
         clusterBootstrapToken: random-string
         postgres:
           backup:
             enabled: false
           internal:
             resources:
               limits:
                 cpu: "1"
                 memory: 1Gi
               requests:
                 cpu: 500m
                 memory: 1Gi
             storage:
               class: network-ssd
               size: 10Gi
           mode: Internal
         secretKeyBase: random-string
         tenantHashSalt: random-string
     version: 1
   ```

2. Apply the configuration.

   Apply the settings to the Deckhouse Kubernetes Platform cluster:

   ```shell
   kubectl apply -f dop-mc.yaml
   ```
3. Installation outcome.

   The Deckhouse Observability Platform will be deployed in the Deckhouse Kubernetes Platform with storage enabled for metrics. All the necessary resources will be provisioned in the cloud, and the database will run in the cluster.
## Installing on Bare-metal Servers

Follow the steps below to use the platform on bare-metal servers with no option to provision cloud PVs. You will need to enable the `sds` module, configure the StorageClass, and set up LocalPathProvisioner as described in the sections above.

Follow these steps:
1. Create and configure the `dop-mc.yaml` file. Remember to insert values from your environment:

   ```yaml
   apiVersion: deckhouse.io/v1alpha1
   kind: ModuleConfig
   metadata:
     name: observability-platform
   spec:
     enabled: true
     settings:
       general:
         baseDomain: dop.example.com
         clusterName: dc1
         tls:
           issuer: letsencrypt
       storage:
         ceph:
           configOverride: |
             [osd]
             osd_memory_cache_min = 1Gi
             bluestore_cache_autotune = true
             bluestore_min_alloc_size = 4096
             osd_pool_default_pg_autoscale_mode = off
           mon:
             storageClass: dop-ceph
             storageSize: 10Gi
           osd:
             count: 3
             storageClass: dop-ceph
             storageSize: 80Gi
         metrics:
           defaultStorageClass: localpath-node
           etcd:
             storageSize: 5Gi
           ingester:
             resources:
               limits:
                 memory: 4Gi
               requests:
                 cpu: 1
                 memory: 4Gi
             storageSize: 10Gi
           storeGateway:
             storageSize: 10Gi
       ui:
         auth:
           mode: default
         clusterBootstrapToken: random-string
         postgres:
           backup:
             enabled: false
           external:
             db: dop-db
             host: db.local
             port: "5432"
             user: user
             password: password
           mode: External
         secretKeyBase: random-string
         tenantHashSalt: random-string
     version: 1
   ```

2. Apply the settings using the command:

   ```shell
   kubectl apply -f dop-mc.yaml
   ```
The platform will be deployed with storage for metrics enabled, using the storage resources configured above.
## Checking Functionality

1. Ensure that all pods in the `d8-observability-platform` namespace are in the `Running` state:

   ```shell
   kubectl -n d8-observability-platform get po
   ```

2. Check that the user interface (UI) is available at `https://dop.example.com`, where `dop.example.com` is the domain specified in the `baseDomain` parameter.

   Use these credentials to log in:

   - Username: `admin@deckhouse.ru`;
   - Password: `password`.

   Note that you will need to change the password upon the first login.