An example of the module configuration

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: prometheus
spec:
  version: 2
  enabled: true
  settings:
    auth:
      password: xxxxxx
    retentionDays: 7
    storageClass: rbd
    nodeSelector:
      node-role/example: ""
    tolerations:
    - key: dedicated
      operator: Equal
      value: example
```

Writing Prometheus data to a long-term storage

Prometheus supports remote_write of data from the local Prometheus instance to a separate long-term storage (e.g., VictoriaMetrics). In Deckhouse, this mechanism is implemented using the PrometheusRemoteWrite custom resource.

For VictoriaMetrics, detailed information on how to send data to vmagent can be found in the VictoriaMetrics documentation.

Example of a basic PrometheusRemoteWrite

```yaml
apiVersion: deckhouse.io/v1
kind: PrometheusRemoteWrite
metadata:
  name: test-remote-write
spec:
  url: https://victoriametrics-test.domain.com/api/v1/write
```

Example of an expanded PrometheusRemoteWrite

```yaml
apiVersion: deckhouse.io/v1
kind: PrometheusRemoteWrite
metadata:
  name: test-remote-write
spec:
  url: https://victoriametrics-test.domain.com/api/v1/write
  basicAuth:
    username: username
    password: password
  writeRelabelConfigs:
  - sourceLabels: [__name__]
    action: keep
    regex: prometheus_build_.*|my_cool_app_metrics_.*
  - sourceLabels: [__name__]
    action: drop
    regex: my_cool_app_metrics_with_sensitive_data
```

Connecting Prometheus to an external Grafana instance

Each ingress-nginx-controller has certificates that, when specified as client certificates, permit connecting to Prometheus. All you need to do is create an additional Ingress resource.

The example below assumes that the Secret example-com-tls already exists in the d8-monitoring namespace.
The Ingress name my-prometheus-api and the Secret name my-basic-auth-secret are used as examples; replace them with names that suit your case.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-prometheus-api
  namespace: d8-monitoring
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: my-basic-auth-secret
    nginx.ingress.kubernetes.io/app-root: /graph
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_ssl_certificate /etc/nginx/ssl/client.crt;
      proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
      proxy_ssl_protocols TLSv1.2;
      proxy_ssl_session_reuse on;
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus-api.example.com
    http:
      paths:
      - backend:
          service:
            name: prometheus
            port:
              name: https
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - prometheus-api.example.com
    secretName: example-com-tls
---
apiVersion: v1
kind: Secret
metadata:
  name: my-basic-auth-secret
  namespace: d8-monitoring
type: Opaque
data:
  # The basic-auth string is hashed using htpasswd.
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK # foo:bar
```

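The `auth` value in the Secret above is a base64-encoded htpasswd entry. As a quick sanity check, decoding the sample value reveals the underlying `user:hash` line (the `foo`/`bar` credentials are the example's own):

```shell
# Decode the sample value from the manifest above:
echo 'Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK' | base64 -d
# → foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0
```

To produce a value for your own credentials, run `htpasswd -nb <user> <password> | base64 -w0` (the htpasswd utility ships with the apache2-utils package).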
Next, you only need to add the data source to Grafana:

Set https://prometheus-api.<cluster-domain> as the URL.

- Basic authorization is not a reliable security measure. It is recommended to implement additional safety measures, e.g., attach the nginx.ingress.kubernetes.io/whitelist-source-range annotation.

- A considerable disadvantage of this method is the need to create an Ingress resource in a system namespace. Deckhouse does not guarantee that this connection scheme will keep working, since the namespace is actively and continuously updated.

- This Ingress resource can be used to access the Prometheus API not only from Grafana but also from other integrations, e.g., Prometheus federation.

Connecting an external app to Prometheus

The connection to Prometheus is protected by kube-rbac-proxy. To connect, you need to create a ServiceAccount with the necessary permissions.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app:prometheus-access
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["prometheuses/http"]
  resourceNames: ["main", "longterm"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: app:prometheus-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: app:prometheus-access
subjects:
- kind: ServiceAccount
  name: app
  namespace: default
```

Next, define the following Job containing the curl request:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: app-curl
  namespace: default
spec:
  template:
    metadata:
      name: app-curl
    spec:
      serviceAccountName: app
      containers:
      - name: app-curl
        image: curlimages/curl:7.69.1
        command: ["sh", "-c"]
        args:
        - >-
          curl -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k -f
          https://prometheus.d8-monitoring:9090/api/v1/query_range?query=up\&start=1584001500\&end=1584023100\&step=30
      restartPolicy: Never
  backoffLimit: 4
```

The Job must complete successfully.

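One way to confirm that the Job finished successfully is to wait on its `Complete` condition and then inspect the container output (a sketch; run it against your cluster):

```shell
# Wait until the Job reports the Complete condition, then print the curl output:
kubectl -n default wait --for=condition=complete job/app-curl --timeout=120s
kubectl -n default logs job/app-curl
```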
Sending alerts to Telegram

Alertmanager supports sending alerts to Telegram directly.

Create a Secret in the d8-monitoring namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: telegram-bot-secret
  namespace: d8-monitoring
stringData:
  token: "562696849:AAExcuJ8H6z4pTlPuocbrXXXXXXXXXXXx"
```

Deploy the CustomAlertmanager custom resource:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: CustomAlertmanager
metadata:
  name: telegram
spec:
  type: Internal
  internal:
    receivers:
    - name: telegram
      telegramConfigs:
      - botToken:
          name: telegram-bot-secret
          key: token
        chatID: -30490XXXXX
    route:
      groupBy:
      - job
      groupInterval: 5m
      groupWait: 30s
      receiver: telegram
      repeatInterval: 12h
```

Set your own values for the token field in the Secret and the chatID field in the CustomAlertmanager resource. Refer to the Telegram API documentation for details.

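To find the chatID, one common approach is to add the bot to the target chat, send any message there, and query the Bot API getUpdates method (`<your-bot-token>` is a placeholder for the token from the Secret):

```shell
# The chat ID appears in the response under result[].message.chat.id;
# for group chats it is a negative number.
curl -s "https://api.telegram.org/bot<your-bot-token>/getUpdates"
```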
Example of sending alerts to Slack with a filter

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: CustomAlertmanager
metadata:
  name: slack
spec:
  internal:
    receivers:
    - name: devnull
    - name: slack
      slackConfigs:
      - apiURL:
          key: apiURL
          name: slack-apiurl
        channel: {{ dig .Values.werf.env .Values.slack.channel._default .Values.slack.channel }}
        fields:
        - short: true
          title: Severity
          value: '{{ .CommonLabels.severity_level }}'
        - short: true
          title: Status
          value: '{{ .Status }}'
        - title: Summary
          value: '{{ range .Alerts }}{{ .Annotations.summary }} {{ end }}'
        - title: Description
          value: '{{ range .Alerts }}{{ .Annotations.description }} {{ end }}'
        - title: Labels
          value: '{{ range .Alerts }} {{ range .Labels.SortedPairs }}{{ printf "%s: %s\n" .Name .Value }}{{ end }}{{ end }}'
        - title: Links
          value: '{{ (index .Alerts 0).GeneratorURL }}'
        title: '{{ .CommonLabels.alertname }}'
    route:
      groupBy:
      - '...'
      receiver: devnull
      routes:
      - matchers:
        - matchType: =~
          name: severity_level
          value: "^[4-9]$"
        receiver: slack
        repeatInterval: 12h
  type: Internal
```

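The route above sends everything to the devnull receiver by default and only forwards to slack those alerts whose severity_level matches the regex ^[4-9]$. Such a matcher regex can be sanity-checked with grep -E, which accepts the same simple character-class pattern:

```shell
# Levels 1-3 stay on the devnull receiver; 4-9 are routed to slack:
for lvl in 1 3 4 9; do
  if echo "$lvl" | grep -Eq '^[4-9]$'; then
    echo "severity_level=$lvl -> slack"
  else
    echo "severity_level=$lvl -> devnull"
  fi
done
```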
Example of sending alerts to Opsgenie

```yaml
- name: opsgenie
  opsgenieConfigs:
  - apiKey:
      key: data
      name: opsgenie
    description: |
      {{ range .Alerts }}{{ .Annotations.summary }} {{ end }}
      {{ range .Alerts }}{{ .Annotations.description }} {{ end }}
    message: '{{ .CommonLabels.alertname }}'
    priority: P1
    responders:
    - id: team_id
      type: team
```