Overview
This module deploys log collector agents on the nodes of the cluster. The agents perform minimal transformations and ship logs further down the pipeline. Each agent is a vector instance running with a configuration file generated by Deckhouse.
- Deckhouse watches `ClusterLoggingConfig`, `ClusterLogDestination`, and `PodLoggingConfig` custom resources. The combination of a logging source and a log destination is called a *pipeline* (see the sketch after this list).
- Deckhouse generates a configuration file and stores it in a Kubernetes `Secret`. The `Secret` is mounted into all log-shipper agent Pods, and the configuration is reloaded on changes by the `reloader` sidecar container.
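A minimal sketch of such a pipeline, assuming a Loki storage (the resource names and the endpoint are illustrative; check the exact fields against the module's custom resource reference):

```yaml
# Source: collect logs from all Pods in the cluster.
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: all-pod-logs
spec:
  type: KubernetesPods
  # Reference to the destination below; source + destination form a pipeline.
  destinationRefs:
    - loki-storage
---
# Destination: where the collected logs are shipped.
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  type: Loki
  loki:
    endpoint: http://loki.loki:3100  # illustrative endpoint
```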
Deployment topologies
This module deploys only the agents on nodes; it assumes that logs are shipped from the cluster using one of the following topologies.
Distributed
Agents send logs directly to the storage, e.g., Loki or Elasticsearch (see the sketch after this list).
- The simpler scheme to use.
- Available out of the box without any external dependencies besides the storage.
- Complicated transformations consume more resources.
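In this topology, a destination points straight at the storage. A sketch of such a destination, assuming an Elasticsearch storage (the endpoint and index name are illustrative):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: es-storage
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: http://elasticsearch.logging:9200  # illustrative endpoint
    index: k8s-logs                              # illustrative index name
```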
Centralized
All logs are aggregated by one of the available aggregation destinations, e.g., Logstash or Vector (see the sketch after this list). Agents on nodes perform minimal transformations and try to ship logs off the nodes quickly with low resource consumption; complicated mappings are applied on the aggregator's side.
- Fewer resources are used on worker nodes.
- Users can configure any possible mappings for aggregators and send logs to many more storages.
- Dedicated aggregator nodes can be scaled up and down as the load changes.
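A sketch of a destination pointing at an intermediate aggregator, assuming the `Vector` destination type (the aggregator address is illustrative):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: vector-aggregator
spec:
  type: Vector
  vector:
    endpoint: vector-aggregator.logging:9000  # illustrative aggregator address
```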
Log filters
There are a couple of filters to reduce the number of lines sent to the destination — the *log filter* and the *label filter*.
They are executed right after lines are concatenated together by the multiline log parser.
- *label filter* — rules are executed against the metadata of a message. Metadata fields (labels) come from a source, so different sources provide different fields to filter on. These rules are useful, for example, to drop messages from a particular container or from Pods with/without a label.
- *log filter* — rules are executed against the message itself. It is possible to drop messages based on their JSON fields or, if a message is not JSON-formatted, to use a regex to exclude lines.
Both filters have the same structured configuration (see the sketch after this list):
- `field` — the source of the data to filter (most of the time it is the value of a label or a JSON field).
- `operator` — the action to apply to the value of the field. Possible options are `In`, `NotIn`, `Regex`, `NotRegex`, `Exists`, `DoesNotExist`.
- `values` — this option has different meanings for different operators:
  - `DoesNotExist`, `Exists` — not supported;
  - `In`, `NotIn` — the value of the field must / must not be in the list of provided values;
  - `Regex`, `NotRegex` — the value of the field must match any of, or must not match all of, the provided regexes (values).
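As an illustration, both filters could be set on a single source, along the lines of the following sketch (assuming the `labelFilter` and `logFilter` fields of `ClusterLoggingConfig`; the container name and field values are illustrative):

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLoggingConfig
metadata:
  name: filtered-logs
spec:
  type: KubernetesPods
  # label filter: drop messages coming from the istio-proxy container.
  labelFilter:
    - field: container
      operator: NotIn
      values:
        - istio-proxy
  # log filter: keep only JSON messages whose "level" field equals "error".
  logFilter:
    - field: level
      operator: In
      values:
        - error
  destinationRefs:
    - loki-storage
```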
More examples can be found in the Examples section of the documentation.
NOTE: Extra labels are added at the `Destination` stage of the pipeline, so it is impossible to run filter queries against them.
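For instance, labels attached via a destination's `extraLabels` option (an assumption about the destination configuration; the label below is illustrative) are added only at shipping time, so `labelFilter` rules in a source cannot match on them:

```yaml
apiVersion: deckhouse.io/v1alpha1
kind: ClusterLogDestination
metadata:
  name: loki-storage
spec:
  type: Loki
  loki:
    endpoint: http://loki.loki:3100
  # Added at the Destination stage, after all filters have already run,
  # so label filter rules cannot reference this label.
  extraLabels:
    environment: production
```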