## Getting logs from all cluster Pods and sending them to Loki
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: all-logs
spec:
  type: KubernetesPods
  destinationRefs:
  - loki-storage
```
## Reading Pod logs from a specified namespace with a specified label and redirecting to Loki and Elasticsearch
Reading logs from the `whispers` namespace from Pods labeled `app=booking`:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: whispers-booking-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    namespaceSelector:
      matchNames:
      - whispers
    labelSelector:
      matchLabels:
        app: booking
  destinationRefs:
  - loki-storage
  - es-storage
```
## Creating a source in a namespace and reading logs of all Pods in that namespace, sending them to Loki
The following pipeline creates a source in the `tests-whispers` namespace and reads the logs of all Pods in it:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: PodLoggingConfig
metadata:
  name: whispers-logs
  namespace: tests-whispers
spec:
  clusterDestinationRefs:
  - loki-storage
```
## Reading only Pods in the specified namespace that have a certain label
Read logs only from Pods with the `app=booking` label:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: PodLoggingConfig
metadata:
  name: whispers-logs
  namespace: tests-whispers
spec:
  labelSelector:
    matchLabels:
      app: booking
  clusterDestinationRefs:
  - loki-storage
```
## Migration from Promtail to log-shipper
Remove the `/loki/api/v1/push` path from the previously used Loki URL. Vector will add this path automatically when working with the Loki destination.
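For example, if Promtail was pushing to `http://loki.loki:3100/loki/api/v1/push`, the destination should be configured without the path. A minimal sketch (the `loki.loki:3100` address is taken from the other examples on this page):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  type: Loki
  loki:
    # No /loki/api/v1/push suffix here; Vector appends it on its own.
    endpoint: http://loki.loki:3100
```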
## Working with Grafana Cloud
This section assumes that you have already created a Grafana Cloud API key. First, encode your Grafana Cloud access token with base64:
```bash
echo -n "<YOUR-GRAFANACLOUD-TOKEN>" | base64
```
Then you can create a `ClusterLogDestination`:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  loki:
    auth:
      password: PFlPVVItR1JBRkFOQUNMT1VELVRPS0VOPg== # the base64-encoded token
      strategy: Basic
      user: "<YOUR-GRAFANACLOUD-USER>"
    endpoint: <YOUR-GRAFANACLOUD-URL> # the Loki URL of your Grafana Cloud stack
  type: Loki
```
Now you can create a `PodLoggingConfig` or `ClusterLoggingConfig` and send logs to Grafana Cloud.
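For instance, a minimal `PodLoggingConfig` (reusing the `tests-whispers` namespace from the examples above) that ships all Pod logs of the namespace to the destination created for Grafana Cloud:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: PodLoggingConfig
metadata:
  name: whispers-logs
  namespace: tests-whispers
spec:
  clusterDestinationRefs:
  # References the Grafana Cloud ClusterLogDestination defined above.
  - loki-storage
```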
## Adding a Loki source to Deckhouse Grafana
You can work with Loki from the Grafana instance embedded in Deckhouse. Just add a `GrafanaAdditionalDatasource`:
```yaml
apiVersion: deckhouse.io/v1
kind: GrafanaAdditionalDatasource
metadata:
  name: loki
spec:
  access: Proxy
  basicAuth: false
  jsonData:
    maxLines: 5000
    timeInterval: 30s
  type: loki
  url: http://loki.loki:3100
```
## Elasticsearch < 6.X usage
For Elasticsearch < 6.0, support for `doc_type` indexing should be enabled. The config should look like this:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    docType: "myDocType" # Set any string here. It should not start with '_'.
    auth:
      strategy: Basic
      user: elastic
      password: c2VjcmV0IC1uCg==
```
## Index template for Elasticsearch
It is possible to route logs to particular indexes based on metadata using index templating:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    index: "k8s-{{ namespace }}-%F"
```
In the above example, a separate index will be created in Elasticsearch for each Kubernetes namespace.
This feature also works well in combination with `extraLabels`:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://192.168.1.1:9200
    index: "k8s-{{ service }}-{{ namespace }}-%F"
  extraLabels:
    service: "{{ service_name }}"
```
Here the value of the `service_name` field from the log message (for messages in JSON format) is exported as the `service` label, which is then substituted into the `index` template.
## Splunk integration
It is possible to send logs from Deckhouse to Splunk.
- The endpoint must point to the Splunk instance on port `8088` (the HTTP Event Collector port) with no path provided.
- The token is an HTTP Event Collector (HEC) access token.
- The index specifies which Splunk index the logs will be stored in.
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: splunk
spec:
  type: Splunk
  splunk:
    endpoint: https://prd-p-xxxxxx.splunkcloud.com:8088
    token: xxxx-xxxx-xxxx
    index: logs
    tls:
      verifyCertificate: false
      verifyHostname: false
```
The Splunk destination doesn't support pod labels for indexes. Consider exporting the necessary labels with the `extraLabels` option:
```yaml
extraLabels:
  pod_label_app: '{{ pod_labels.app }}'
```
## Simple Logstash example
To send logs to Logstash, the `tcp` input should be configured on the Logstash instance side, and logs should be encoded in `json` format.
An example of the minimal Logstash configuration:
```hcl
input {
  tcp {
    port => 12345
    codec => json
  }
}
output {
  stdout {
    codec => json
  }
}
```
An example of the `ClusterLogDestination` manifest:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: logstash
spec:
  type: Logstash
  logstash:
    endpoint: logstash.default:12345
```
## Syslog
The following example sets the severity for syslog messages and sends them over a TCP socket in syslog format:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: rsyslog
spec:
  type: Socket
  socket:
    mode: TCP
    address: 192.168.0.1:3000
    encoding:
      codec: Syslog
  extraLabels:
    syslog.severity: "alert"
    # the request_id field should be present in the log message
    syslog.message_id: "{{ request_id }}"
```
## Graylog integration
Make sure that an incoming stream is configured in Graylog to receive messages over the TCP protocol on the specified port. An example manifest for integration with Graylog:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: test-socket2-dest
spec:
  type: Socket
  socket:
    address: graylog.svc.cluster.local:9200
    mode: TCP
    encoding:
      codec: GELF
```
## Logs in CEF format
There is a way to format logs in CEF format using `codec: CEF`, with `cef.name` and `cef.severity` based on fields of the `message` (the application log) in JSON format.
In the example below, `cef.name` and `cef.severity` are filled from the `app` and `log_level` fields of the JSON log message:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: siem-kafka
spec:
  extraLabels:
    cef.name: '{{ app }}'
    cef.severity: '{{ log_level }}'
  type: Kafka
  kafka:
    bootstrapServers:
    - kafka-broker.example.com:9092 # placeholder: address of your Kafka broker
    topic: logs # placeholder: target Kafka topic
    encoding:
      codec: CEF
```
You can also manually set your own values:
```yaml
extraLabels:
  cef.name: 'TestName'
  cef.severity: '1'
```
## Collecting Kubernetes Events
Kubernetes Events can be collected by log-shipper if `events-exporter` is enabled in the `extended-monitoring` module configuration.
Enable `events-exporter` by adjusting the `extended-monitoring` module parameters:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: extended-monitoring
spec:
  version: 1
  settings:
    events:
      exporterEnabled: true
```
Apply the following `ClusterLoggingConfig` to collect messages from the `events-exporter` Pod:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: kubernetes-events
spec:
  type: KubernetesPods
  kubernetesPods:
    labelSelector:
      matchLabels:
        app: events-exporter
    namespaceSelector:
      matchNames:
      - d8-monitoring # namespace where events-exporter runs
  destinationRefs:
  - loki-storage
```
## Log filters
Users can filter logs by applying two filters:

- `labelFilter` — applies to the top-level metadata, e.g., container, namespace, or Pod name.
- `logFilter` — applies to fields of a message if it is in JSON format.
Collect only logs of the `nginx` container:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: nginx-logs
spec:
  type: KubernetesPods
  labelFilter:
  - field: container
    operator: In
    values: [nginx]
  destinationRefs:
  - loki-storage
```
Collect logs without lines containing `GET /status" 200`:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: all-logs
spec:
  type: KubernetesPods
  destinationRefs:
  - loki-storage
  labelFilter:
  - field: message
    operator: NotRegex
    values:
    - .*GET /status" 200$
```
## Audit of kubelet actions
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: kubelet-audit-logs
spec:
  type: File
  file:
    include:
    - /var/log/kube-audit/audit.log
  logFilter:
  - field: userAgent
    operator: Regex
    values: ["kubelet.*"]
  destinationRefs:
  - loki-storage
```
## Deckhouse system logs
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: system-logs
spec:
  type: File
  file:
    include:
    - /var/log/syslog
  destinationRefs:
  - loki-storage
```
If you only need logs from a single Pod or a small group of Pods, try to use the `kubernetesPods` settings to narrow down the number of sources to read. Do not use highly granular filters to read logs from a single Pod; prefer a scoped source, as in the sketch below.
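For example, a source scoped with `kubernetesPods` selectors rather than a filter over all cluster logs (the `whispers` namespace and `app: booking` label are reused from the earlier examples):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: booking-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    # Selectors limit what is read at the source instead of filtering afterwards.
    namespaceSelector:
      matchNames:
      - whispers
    labelSelector:
      matchLabels:
        app: booking
  destinationRefs:
  - loki-storage
```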
## Collecting logs from production namespaces using the namespace label selector
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: production-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    namespaceSelector:
      labelSelector:
        matchLabels:
          environment: production
  destinationRefs:
  - loki-storage
```
## Excluding Pods or namespaces with a label
There is a preconfigured label, `log-shipper.deckhouse.io/exclude=true`, to exclude particular namespaces or Pods:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
  labels:
    log-shipper.deckhouse.io/exclude: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  ...
  template:
    metadata:
      labels:
        log-shipper.deckhouse.io/exclude: "true"
```
## Enabling buffering
The log buffering configuration is essential for improving the reliability and performance of the log collection system. Buffering can be useful in the following cases:

1. **Temporary connectivity issues.** If the connection to the log storage system (e.g., Elasticsearch) is interrupted or unstable, a buffer lets logs be stored temporarily and sent once the connection is restored.
2. **Smoothing out load peaks.** During sudden spikes in log volume, a buffer smooths the peak load on the log storage system, preventing its overload and potential data loss.
3. **Performance optimization.** Buffering allows logs to be accumulated and sent in batches, which reduces the number of network requests and improves overall throughput.
Example of enabling in-memory buffering:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  buffer:
    memory:
      maxEvents: 4096
    type: Memory
  type: Loki
  loki:
    endpoint: http://loki.loki:3100
```
Example of enabling disk buffering:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  buffer:
    disk:
      maxSize: 1Gi
    type: Disk
  type: Loki
  loki:
    endpoint: http://loki.loki:3100
```
Example of defining the behavior when the buffer is full:
```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  buffer:
    disk:
      maxSize: 1Gi
    type: Disk
    whenFull: DropNewest
  type: Loki
  loki:
    endpoint: http://loki.loki:3100
```
A more detailed description of the parameters is available in the ClusterLogDestination resource.