The module deploys log collector agents on the cluster nodes. These agents perform minimal transformations and forward logs to the configured destinations. Each agent is a Vector instance running with a configuration file generated by Deckhouse.

log-shipper architecture

  1. Deckhouse watches the ClusterLoggingConfig, ClusterLogDestination, and PodLoggingConfig custom resources. The combination of a logging source and a log destination is called a pipeline.
  2. Deckhouse generates a configuration file and stores it in a Kubernetes Secret.
  3. The Secret is mounted into all log-shipper agent Pods, and the configuration is reloaded on changes by the reloader sidecar container.
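For illustration, a pipeline is defined by a pair of custom resources such as the following sketch. The resource names, namespace, and Loki endpoint are assumptions; check the ClusterLoggingConfig and ClusterLogDestination reference for the exact schema of your Deckhouse version.

```yaml
# Hypothetical pipeline: collect logs from Pods in one namespace and ship them to Loki.
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: example-pods            # illustrative name
spec:
  type: KubernetesPods          # collect container logs
  kubernetesPods:
    namespaceSelector:
      matchNames:
        - example-namespace     # illustrative namespace
  destinationRefs:
    - example-loki              # ties this source to the destination below
---
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: example-loki            # illustrative name
spec:
  type: Loki
  loki:
    endpoint: http://loki.monitoring:3100  # illustrative endpoint
```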

Deployment topologies

The module deploys only the agents on nodes; logs are expected to leave the cluster using one of the following topologies.

Distributed

Agents send logs directly to the storage, e.g., Loki or Elasticsearch.

log-shipper distributed

  • The simplest scheme to use.
  • Available out of the box without any external dependency besides the storage.
  • Complicated transformations consume more resources on the nodes.
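In this topology a ClusterLogDestination points straight at the storage. An Elasticsearch destination might look roughly like this (a sketch; the endpoint and index template are assumptions):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: example-elasticsearch   # illustrative name
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://elasticsearch.logging:9200  # illustrative storage endpoint
    index: "logs-%F"                             # illustrative index template
```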

Centralized

All logs are aggregated by one of the supported aggregation destinations, e.g., Logstash or Vector. Agents on nodes apply minimal transformations and ship logs off the nodes as quickly as possible with low resource consumption; complicated mappings are applied on the aggregator side.

log-shipper centralized

  • Fewer resources are used on worker nodes.
  • Users can configure arbitrary mappings on the aggregators and send logs to many more storage types.
  • Dedicated aggregator nodes can be scaled up and down as the load changes.
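Here agents ship logs to an aggregator rather than to the storage itself. A Logstash destination could be declared roughly as follows (a sketch; the endpoint is an assumption):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: example-logstash        # illustrative name
spec:
  type: Logstash
  logstash:
    endpoint: logstash.logging:5044  # illustrative aggregator endpoint
```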

Stream

The main goal of this topology is to push messages to a queue system as quickly as possible; other workers then read them from the queue and deliver them to long-term storage for later analysis.

log-shipper stream

  • The same pros and cons as the centralized topology, plus one more intermediate storage layer.
  • Increased durability. Suitable for infrastructures where log delivery is crucial.
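A queue such as Kafka can serve as that middle layer. A destination for it might be sketched like this; the broker address, topic, and exact field names are assumptions to verify against the ClusterLogDestination reference:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: example-kafka           # illustrative name
spec:
  type: Kafka
  kafka:
    bootstrapServers:
      - kafka-0.kafka.logging:9092  # illustrative broker
    topic: logs                     # illustrative topic
```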

Metadata

During collection, all sources enrich logs with metadata. The enrichment takes place at the Source stage of the pipeline.

Kubernetes

The following metadata fields will be exposed:

Label        Pod spec path
pod          metadata.name
namespace    metadata.namespace
pod_labels   metadata.labels
pod_ip       status.podIP
image        spec.containers[].image
container    spec.containers[].name
node         spec.nodeName
pod_owner    metadata.ownerRef[0]

Label        Node spec path
node_group   metadata.labels[].node.deckhouse.io/group

The Splunk destination does not use pod_labels, because it is a nested object containing keys and values.
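For illustration, an event enriched with this metadata might look roughly as follows; all values, as well as the message and timestamp fields, are hypothetical:

```yaml
# Hypothetical log event after Kubernetes metadata enrichment (all values are illustrative).
message: 'GET /healthz 200'
timestamp: '2024-01-01T00:00:00Z'
pod: nginx-5c9d8b7f6d-abcde
namespace: example-namespace
pod_labels:
  app: nginx
pod_ip: 10.111.0.15
image: nginx:1.25
container: nginx
node: worker-0
pod_owner: ReplicaSet/nginx-5c9d8b7f6d
node_group: worker
```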

File

The only exposed label is host, which equals the node's hostname.
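A file-based source is declared with type File, for example as in the sketch below (the include path and names are assumptions):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: example-files           # illustrative name
spec:
  type: File
  file:
    include:
      - /var/log/syslog         # illustrative path; events carry only the host label
  destinationRefs:
    - example-loki              # refers to the destination sketched earlier
```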

Log filters

There are two filters to reduce the number of lines sent to the destination: the log filter and the label filter.

log-shipper pipeline

They are executed right after lines are concatenated by the multiline log parser.

  1. label filter — rules are executed against the metadata of a message. Metadata fields (labels) come from the source, so different sources provide different fields for filtering. These rules are useful, for example, to drop messages from a particular container or from Pods with or without a certain label.
  2. log filter — rules are executed against the message itself. It is possible to drop messages based on their JSON fields or, if a message is not JSON-formatted, to exclude lines with a regex.

Both filters have the same structured configuration:

  • field — the source of data to filter (most of the time it is a value of a label or a JSON field).
  • operator — action to apply to a value of the field. Possible options are In, NotIn, Regex, NotRegex, Exists, DoesNotExist.
  • values — this option has different meanings for different operators:
    • DoesNotExist, Exists — the values option is not used;
    • In, NotIn — a value of a field must / mustn’t be in the list of provided values;
    • Regex, NotRegex — a value of a field must match any or mustn’t match all the provided regexes (values).

You can find examples in the Examples section of the documentation.
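As a rough illustration, both filter kinds plug into a logging source like this (a sketch; the container name, field names, and values are assumptions):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: example-filtered        # illustrative name
spec:
  type: KubernetesPods
  labelFilter:
    - field: container          # metadata label provided by the source
      operator: NotIn
      values:
        - istio-proxy           # drop messages produced by this container
  logFilter:
    - field: level              # JSON field inside the message itself
      operator: In
      values:
        - error
        - warn                  # pass only messages whose level is error or warn
  destinationRefs:
    - example-loki              # illustrative destination
```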

Extra labels are added at the Destination stage of the pipeline, so filter rules cannot be applied to them.
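Extra labels are configured on the destination, for example as sketched below (the label key and value are assumptions):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: example-loki            # illustrative name
spec:
  type: Loki
  loki:
    endpoint: http://loki.monitoring:3100  # illustrative endpoint
  extraLabels:
    environment: production     # added at the Destination stage; filters never see it
```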