ClusterLogDestination
Scope: Cluster
Version: v1alpha1
Describes the settings of a log storage that you can use in many log sources.
metadata.name
— the upstream name that you reference in the ClusterLoggingConfig custom resource.
- spec
Required value
- spec.buffer
Buffer parameters.
- spec.buffer.disk
Disk buffer parameters.
- spec.buffer.disk.maxSize
The maximum size of the buffer on disk. Must be at least ~256MB (268435488 bytes).
You can express size as a plain integer or as a fixed-point number using one of these quantity suffixes: E, P, T, G, M, k, Ei, Pi, Ti, Gi, Mi, Ki.
More about resource quantity in the Kubernetes documentation.
Pattern:
^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
Examples:
maxSize: 512Mi
maxSize: 268435488
- spec.buffer.memory
- spec.buffer.memory.maxEvents
The maximum number of events allowed in the buffer.
- spec.buffer.type
Required value
The type of buffer to use.
Allowed values:
Disk, Memory
- spec.buffer.whenFull
Event handling behavior when a buffer is full.
Default:
"Block"
Allowed values:
DropNewest, Block
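Put together, the buffer fields above combine into a fragment like the following (a sketch; the size value is illustrative):

```yaml
# Fragment of a ClusterLogDestination spec: a disk-backed buffer that
# blocks the sender when full (the default behavior).
buffer:
  type: Disk
  whenFull: Block
  disk:
    maxSize: 512Mi   # must be at least ~256MB (268435488 bytes)
```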
- spec.elasticsearch
- spec.elasticsearch.auth
- spec.elasticsearch.auth.awsAccessKey
Base64-encoded AWS ACCESS_KEY.
- spec.elasticsearch.auth.awsAssumeRole
The ARN of an IAM role to assume at startup.
- spec.elasticsearch.auth.awsRegion
AWS region for authentication.
- spec.elasticsearch.auth.awsSecretKey
Base64-encoded AWS SECRET_KEY.
- spec.elasticsearch.auth.password
Base64-encoded Basic authentication password.
- spec.elasticsearch.auth.strategy
The authentication strategy to use.
Default:
"Basic"
Allowed values:
Basic, AWS
- spec.elasticsearch.auth.user
The Basic authentication user name.
- spec.elasticsearch.dataStreamEnabled
Whether to store logs in indexes or data streams (https://www.elastic.co/guide/en/elasticsearch/reference/master/data-streams.html).
Data streams are better suited for storing logs and metrics, but they only work with Elasticsearch >= 7.16.X.
Default:
false
- spec.elasticsearch.docType
The doc_type for your index data. This is only relevant for Elasticsearch <= 6.X.
- For Elasticsearch >= 7.X you do not need this option, since this version has removed doc_type mapping;
- For Elasticsearch >= 6.X the recommended value is _doc, because using it will make the upgrade to 7.X easy;
- For Elasticsearch < 6.X you cannot use a value starting with _ or an empty string. Use, for example, a value like logs.
- spec.elasticsearch.endpoint
Required value
Base URL of the Elasticsearch instance.
- spec.elasticsearch.index
Index name to write events to.
- spec.elasticsearch.pipeline
Name of the pipeline to apply.
- spec.elasticsearch.tls
Configures the TLS options for outgoing connections.
- spec.elasticsearch.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.elasticsearch.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.elasticsearch.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.elasticsearch.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.elasticsearch.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.elasticsearch.tls.verifyCertificate
Validate the TLS certificate of the remote host. Specifically the issuer is checked but not CRLs (Certificate Revocation Lists).
Default:
true
- spec.elasticsearch.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
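The Elasticsearch fields above combine into a complete manifest along these lines (a sketch: the apiVersion is assumed from the version noted at the top of this reference, and endpoint, name, and credentials are placeholders):

```yaml
apiVersion: deckhouse.io/v1alpha1   # assumed; check your installed CRD
kind: ClusterLogDestination
metadata:
  name: es-storage          # referenced from ClusterLoggingConfig.spec.destinationRefs
spec:
  type: Elasticsearch
  elasticsearch:
    endpoint: https://elasticsearch.example.com:9200
    index: logs
    dataStreamEnabled: false
    auth:
      strategy: Basic
      user: elastic
      password: cGFzc3dvcmQ=   # Base64-encoded
    tls:
      verifyHostname: true
```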
- spec.extraLabels
A set of labels that will be attached to each batch of events.
You can use simple templating here: {{ app }}.
There are some reserved keys:
- parsed_data
- pod
- pod_labels_*
- pod_ip
- namespace
- image
- container
- node
- pod_owner
Example:
extraLabels:
  forwarder: vector
  key: value
  app_info: "{{ app }}"
  array_member: "{{ array[0] }}"
  symbol_escaping_value: "{{ pay\\.day }}"
- spec.kafka
- spec.kafka.bootstrapServers
Required value
A list of host and port pairs that are the addresses of the Kafka brokers in a “bootstrap” Kafka cluster that a Kafka client connects to initially to bootstrap itself.
Default:
[]
Example:
bootstrapServers:
- 10.14.22.123:9092
- 10.14.23.332:9092
- Element of the array
Pattern:
^(.+)\:\d{1,5}$
- spec.kafka.encoding
How to encode the message.
- spec.kafka.encoding.codec
Default:
"JSON"
Allowed values:
JSON, CEF
- spec.kafka.keyField
Sets the key_field — the log event field whose value is used as the Kafka message key.
Examples:
keyField: host
keyField: node
keyField: namespace
keyField: parsed_data.app_info
- spec.kafka.sasl
Configuration for SASL authentication when interacting with Kafka.
- spec.kafka.sasl.mechanism
Required value
The SASL mechanism to use. Only PLAIN and SCRAM-based mechanisms are supported.
Allowed values:
PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
- spec.kafka.sasl.password
Required value
The SASL password.
Example:
password: qwerty
- spec.kafka.sasl.username
Required value
The SASL username.
Example:
username: username
- spec.kafka.tls
Configures the TLS options for outgoing connections.
- spec.kafka.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.kafka.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.kafka.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.kafka.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.kafka.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.kafka.tls.verifyCertificate
Validate the TLS certificate of the remote host.
Default:
true
- spec.kafka.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
- spec.kafka.topic
Required value
The Kafka topic name to write events to. This parameter supports template syntax, which enables you to use dynamic per-event values.
Examples:
topic: logs
topic: logs-{{unit}}-%Y-%m-%d
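As a sketch, the Kafka fields above combine into a destination spec like this (broker addresses and SASL credentials are illustrative placeholders):

```yaml
spec:
  type: Kafka
  kafka:
    bootstrapServers:
    - 10.14.22.123:9092
    - 10.14.23.332:9092
    topic: logs-{{unit}}-%Y-%m-%d   # template syntax for dynamic per-event topics
    encoding:
      codec: JSON
    sasl:
      mechanism: SCRAM-SHA-512
      username: vector    # placeholder
      password: qwerty    # placeholder
```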
- spec.logstash
- spec.logstash.endpoint
Required value
Base URL of the Logstash instance.
- spec.logstash.tls
Configures the TLS options for outgoing connections.
- spec.logstash.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.logstash.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.logstash.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.logstash.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.logstash.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.logstash.tls.verifyCertificate
Validate the TLS certificate of the remote host.
Default:
true
- spec.logstash.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
- spec.loki
- spec.loki.auth
- spec.loki.auth.password
Base64-encoded Basic authentication password.
- spec.loki.auth.strategy
The authentication strategy to use.
Default:
"Basic"
Allowed values:
Basic, Bearer
- spec.loki.auth.token
The token to use for Bearer authentication.
- spec.loki.auth.user
The Basic authentication user name.
- spec.loki.endpoint
Required value
Base URL of the Loki instance.
The agent automatically appends /loki/api/v1/push to the URL during data transmission.
- spec.loki.tenantID
ID of a tenant.
This option is used only for GrafanaCloud. When running Loki locally, a tenant ID is not required.
- spec.loki.tls
Configures the TLS options for outgoing connections.
- spec.loki.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.loki.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.loki.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.loki.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.loki.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.loki.tls.verifyCertificate
Validate the TLS certificate of the remote host.
If set to false, the certificate is not checked against the Certificate Revocation Lists.
Default:
true
- spec.loki.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
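A minimal Loki destination spec built from the fields above might look like this (the endpoint hostname and credentials are assumptions for illustration):

```yaml
spec:
  type: Loki
  loki:
    endpoint: http://loki.monitoring.svc:3100   # /loki/api/v1/push is appended automatically
    auth:
      strategy: Basic
      user: loki
      password: cGFzc3dvcmQ=   # Base64-encoded
```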
- spec.rateLimit
Parameter for limiting the flow of events.
- spec.rateLimit.excludes
List of exclusion rules for keyField.
Only log entries that do NOT match will be rate-limited.
Examples:
excludes:
- field: tier
  operator: Exists

excludes:
- field: foo
  operator: NotIn
  values:
  - dev
  - 42
  - 'true'
  - '3.14'

excludes:
- field: bar
  operator: Regex
  values:
  - "^abc"
  - "^\\d.+$"
- spec.rateLimit.excludes.field
Required value
Field name for filtering.
- spec.rateLimit.excludes.operator
Required value
Operator for log field comparisons:
- In — finds a substring in a string.
- NotIn — the inverse of the In operator.
- Regex — tries to match a regexp against the field; only log events with matching fields will pass.
- NotRegex — the inverse of the Regex operator; log events without the field or with non-matching fields will pass.
- Exists — drops the log event if it contains certain fields.
- DoesNotExist — drops the log event if it does not contain certain fields.
Allowed values:
In, NotIn, Regex, NotRegex, Exists, DoesNotExist
- spec.rateLimit.excludes.values
Array of values or regexes for the corresponding operations. Does not apply to the Exists and DoesNotExist operations.
Fields with float or boolean values will be converted to strings during comparison.
- spec.rateLimit.keyField
The name of the log field whose value will be hashed to determine if the event should be rate limited.
- spec.rateLimit.linesPerMinute
Required value
The number of records per minute.
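For example, the rate-limit fields above might be combined as follows (values are illustrative; entries matching the exclude rule bypass the limit):

```yaml
# Allow at most 1000 lines per minute per distinct value of the
# `namespace` field; entries whose `tier` field exists are not limited.
rateLimit:
  linesPerMinute: 1000
  keyField: namespace
  excludes:
  - field: tier
    operator: Exists
```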
- spec.socket
- spec.socket.address
Required value
Address of the socket.
Pattern:
^.*:[1-9][0-9]+$
- spec.socket.encoding
How to encode the message.
- spec.socket.encoding.codec
Default:
"JSON"
Allowed values:
Text, JSON, Syslog, CEF, GELF
- spec.socket.mode
Required value
Allowed values:
TCP, UDP
- spec.socket.tcp
- spec.socket.tcp.tls
Configures the TLS options for outgoing connections.
- spec.socket.tcp.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.socket.tcp.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.socket.tcp.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.socket.tcp.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.socket.tcp.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.socket.tcp.verifyCertificate
Validate the TLS certificate of the remote host.
If set to false, the certificate is not checked against the Certificate Revocation Lists.
Default:
true
- spec.socket.tcp.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
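A socket destination spec built from these fields could look like this (the receiver address is a hypothetical example):

```yaml
spec:
  type: Socket
  socket:
    mode: TCP
    address: syslog.example.com:514   # hypothetical TCP receiver
    encoding:
      codec: Syslog
```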
- spec.splunk
- spec.splunk.endpoint
Required value
Base URL of the Splunk instance.
Example:
endpoint: https://http-inputs-hec.splunkcloud.com
- spec.splunk.index
Index name to write events to.
- spec.splunk.tls
Configures the TLS options for outgoing connections.
- spec.splunk.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.splunk.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.splunk.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.splunk.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.splunk.tls.clientCrt.keyPass
Base64-encoded pass phrase used to unlock the encrypted key file.
- spec.splunk.tls.verifyCertificate
Validate the TLS certificate of the remote host.
Default:
true
- spec.splunk.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
- spec.splunk.token
Required value
Default Splunk HEC token. If an event has a token set in its metadata, it will have priority over the one set here.
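Combining the Splunk fields above yields a spec along these lines (the token is a placeholder):

```yaml
spec:
  type: Splunk
  splunk:
    endpoint: https://http-inputs-hec.splunkcloud.com
    index: logs
    token: 00000000-0000-0000-0000-000000000000   # placeholder HEC token
```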
- spec.type
Type of a log storage backend.
Allowed values:
Loki, Elasticsearch, Logstash, Vector, Kafka, Splunk, Socket
- spec.vector
- spec.vector.endpoint
Required value
An address of the Vector instance. API v2 must be used for communication between instances.
Pattern:
^(.+):([0-9]{1,5})$
- spec.vector.tls
Configures the TLS options for outgoing connections.
- spec.vector.tls.caFile
Base64-encoded CA certificate in PEM format.
- spec.vector.tls.clientCrt
Configures the client certificate for outgoing connections.
- spec.vector.tls.clientCrt.crtFile
Required value
Base64-encoded certificate in PEM format.
You must also set the keyFile parameter.
- spec.vector.tls.clientCrt.keyFile
Required value
Base64-encoded private key in PEM format (PKCS#8).
You must also set the crtFile parameter.
- spec.vector.tls.clientCrt.keyPass
Base64-encoded passphrase used to unlock the encrypted key file.
- spec.vector.tls.verifyCertificate
Validate the TLS certificate of the remote host.
Default:
true
- spec.vector.tls.verifyHostname
Verifies that the name of the remote host matches the name specified in the remote host’s TLS certificate.
Default:
true
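A Vector destination spec is the simplest of the backends; a sketch (the aggregator address is a hypothetical example):

```yaml
spec:
  type: Vector
  vector:
    endpoint: vector-aggregator.example.com:9000   # must serve Vector API v2
```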
ClusterLoggingConfig
Scope: Cluster
Version: v1alpha1
Describes a log source in the log pipeline.
Each ClusterLoggingConfig custom resource describes the rules for fetching logs from the cluster.
- spec
Required value
- spec.destinationRefs
Required value
Array of ClusterLogDestination custom resource names that this source will ship logs to.
- spec.file
Describes a rule for collecting logs from files on a node.
- spec.file.exclude
Array of file patterns to exclude when collecting logs.
Wildcards are supported.
Example:
exclude:
- "/var/log/nginx/error.log"
- "/var/log/audit.log"
- spec.file.include
Array of file patterns to include.
Wildcards are supported.
Example:
include:
- "/var/log/*.log"
- "/var/log/nginx/*.log"
- spec.file.lineDelimiter
String sequence used to separate one file line from another.
Example:
lineDelimiter: "\\r\\n"
- spec.kubernetesPods
Describes a rule for collecting logs from the cluster’s pods.
- spec.kubernetesPods.keepDeletedFilesOpenedFor
Specifies how long to keep deleted files open for reading. Vector will also keep the pods' metadata for this time so that logs of deleted pods can still be read. This option is useful in cases of log storage unavailability or a network partition: Vector will keep the log files open until it finally sends them to the destination.
Enabling this option may increase Vector's resource consumption and may flood the disk with deleted logs. Use it with caution.
The format is a string containing the time in hours and minutes: 30m, 1h, 2h30m, 24h.
Pattern:
^([0-9]+h([0-9]+m)?|[0-9]+m)$
- spec.kubernetesPods.labelSelector
Specifies the label selector to filter Pods with.
You can get more info on label selectors in the Kubernetes documentation.
- spec.kubernetesPods.labelSelector.matchExpressions
List of label expressions for Pods.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- key: tier
  operator: NotIn
  values:
  - production
- spec.kubernetesPods.labelSelector.matchExpressions.key
A label name.
- spec.kubernetesPods.labelSelector.matchExpressions.operator
A comparison operator.
Allowed values:
In, NotIn, Exists, DoesNotExist
- spec.kubernetesPods.labelSelector.matchExpressions.values
A label value.
- Element of the array
Pattern:
[a-z0-9]([-a-z0-9]*[a-z0-9])?
Length:
1..63
- spec.kubernetesPods.labelSelector.matchLabels
List of labels which Pod should have.
Example:
matchLabels:
  foo: bar
  baz: who
- spec.kubernetesPods.namespaceSelector
Specifies the namespace selector to filter Pods with.
The filter can use one of three available ways to set the condition: the matchNames, excludeNames, or labelSelector parameters.
- spec.kubernetesPods.namespaceSelector.excludeNames
A list of namespaces whose pods are excluded from log collection; logs are collected from all other namespaces.
- spec.kubernetesPods.namespaceSelector.labelSelector
Specifies the label selector to filter namespaces from which logs should be collected.
You can get more info on label selectors in the Kubernetes documentation.
- spec.kubernetesPods.namespaceSelector.labelSelector.matchExpressions
List of label expressions that a namespace should have to qualify for the filter condition.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- spec.kubernetesPods.namespaceSelector.labelSelector.matchExpressions.key
Required value
A label name.
- spec.kubernetesPods.namespaceSelector.labelSelector.matchExpressions.operator
Required value
A comparison operator.
Allowed values:
In, NotIn, Exists, DoesNotExist
- spec.kubernetesPods.namespaceSelector.labelSelector.matchExpressions.values
A label value.
- spec.kubernetesPods.namespaceSelector.labelSelector.matchLabels
List of labels that a namespace should have to qualify for the filter condition.
Example:
matchLabels:
  foo: bar
  baz: who
- spec.kubernetesPods.namespaceSelector.matchNames
A list of namespaces from whose pods logs should be collected.
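The three namespace-selector styles are mutually exclusive; for example (namespace names are illustrative):

```yaml
# Variant 1: collect only from the listed namespaces.
kubernetesPods:
  namespaceSelector:
    matchNames:
    - production
    - staging

# Variant 2: collect from every namespace except the listed ones.
# kubernetesPods:
#   namespaceSelector:
#     excludeNames:
#     - kube-system
```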
- spec.labelFilter
Rules to filter log lines by their metadata labels.
Example:
labelFilter:
- field: container
  operator: In
  values:
  - nginx
- field: pod_labels.tier
  operator: Regex
  values:
  - prod-.+
  - stage-.+
- spec.labelFilter.field
Required value
Label name for filtering.
Must not be empty.
Pattern:
.+
- spec.labelFilter.operator
Required value
Operator for log field comparisons:
- In — finds a substring in a string.
- NotIn — the inverse of the In operator.
- Regex — tries to match a regexp against the field; only log events with matching fields will pass.
- NotRegex — the inverse of the Regex operator; log events without the field or with non-matching fields will pass.
- Exists — drops the log event if it contains certain fields.
- DoesNotExist — drops the log event if it does not contain certain fields.
Allowed values:
In, NotIn, Regex, NotRegex, Exists, DoesNotExist
- spec.labelFilter.values
Array of values or regexes for the corresponding operations. Does not apply to the Exists and DoesNotExist operations.
Fields with float or boolean values will be converted to strings during comparison.
- spec.logFilter
A list of filters for logs that are applied to messages in JSON format.
Only matching lines will be stored to the log destination.
Example:
logFilter:
- field: tier
  operator: Exists
- field: foo
  operator: NotIn
  values:
  - dev
  - 42
  - 'true'
  - '3.14'
- field: bar
  operator: Regex
  values:
  - "^abc"
  - "^\\d.+$"
- spec.logFilter.field
Required value
Field name for filtering. It should be empty for non-JSON messages.
- spec.logFilter.operator
Required value
Operator for log field comparisons:
- In — finds a substring in a string.
- NotIn — the inverse of the In operator.
- Regex — tries to match a regexp against the field; only log events with matching fields will pass.
- NotRegex — the inverse of the Regex operator; log events without the field or with non-matching fields will pass.
- Exists — drops the log event if it contains certain fields.
- DoesNotExist — drops the log event if it does not contain certain fields.
Allowed values:
In, NotIn, Regex, NotRegex, Exists, DoesNotExist
- spec.logFilter.values
Array of values or regexes for the corresponding operations. Does not apply to the Exists and DoesNotExist operations.
Fields with float or boolean values will be converted to strings during comparison.
- spec.multilineParser
Multiline parser for different patterns.
- spec.multilineParser.custom
Multiline parser custom regex rules.
- spec.multilineParser.custom.endsWhen
A condition to distinguish the last log line of a multiline log event.
- spec.multilineParser.custom.endsWhen.notRegex
A regex string; only lines that do NOT match the regex are treated as a match.
- spec.multilineParser.custom.endsWhen.regex
A regex string; only lines that match the regex are treated as a match.
- spec.multilineParser.custom.startsWhen
A condition to distinguish the first log line of a multiline log event.
- spec.multilineParser.custom.startsWhen.notRegex
A regex string; only lines that do NOT match the regex are treated as a match.
- spec.multilineParser.custom.startsWhen.regex
A regex string; only lines that match the regex are treated as a match.
- spec.multilineParser.type
Required value
Parser types:
- None — do not parse logs.
- General — tries to match generic multiline logs that use a space or tab on continuation lines.
- Backslash — tries to match bash-style logs with a backslash on every line except the last line of the event.
- LogWithTime — tries to detect events by timestamp.
- MultilineJSON — tries to match JSON logs, assuming the event starts with the { symbol.
- Custom — tries to match logs with the user-provided regex in the spec.multilineParser.custom field.
Default:
"None"
Allowed values:
None, General, Backslash, LogWithTime, MultilineJSON, Custom
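For instance, a Custom parser that starts a new event at a timestamped line could be configured like this (the regex is an illustrative assumption, not part of the schema):

```yaml
# Custom multiline parsing: a new event begins at a line starting with
# an ISO-like timestamp, e.g. "2023-01-01T12:00:00".
multilineParser:
  type: Custom
  custom:
    startsWhen:
      regex: '^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}'
```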
- spec.type
Required value
Sets one of the possible input sources:
- KubernetesPods — reads logs from Kubernetes pods.
- File — reads local files from the node filesystem.
Allowed values:
KubernetesPods, File
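A complete ClusterLoggingConfig built from the fields above might look like this (a sketch: the apiVersion is assumed from the version noted above, and all names are illustrative):

```yaml
apiVersion: deckhouse.io/v1alpha1   # assumed; check your installed CRD
kind: ClusterLoggingConfig
metadata:
  name: nginx-logs
spec:
  type: KubernetesPods
  kubernetesPods:
    namespaceSelector:
      matchNames:
      - production
    labelSelector:
      matchLabels:
        app: nginx
  destinationRefs:
  - es-storage   # metadata.name of a ClusterLogDestination
```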
PodLoggingConfig
Scope: Namespaced
Version: v1alpha1
Custom resource for a namespaced Kubernetes log source.
Each PodLoggingConfig custom resource describes the rules for fetching logs from the specified namespace.
- spec
Required value
- spec.clusterDestinationRefs
Required value
Array of ClusterLogDestination custom resource names that this source will ship logs to.
- spec.keepDeletedFilesOpenedFor
Specifies how long to keep deleted files open for reading. Vector will also keep the pods' metadata for this time so that logs of deleted pods can still be read. This option is useful in cases of log storage unavailability or a network partition: Vector will keep the log files open until it finally sends them to the destination.
Enabling this option may increase Vector's resource consumption and may flood the disk with deleted logs. Use it with caution.
The format is a string containing the time in hours and minutes: 30m, 1h, 2h30m, 24h.
Pattern:
^([0-9]+h([0-9]+m)?|[0-9]+m)$
- spec.labelFilter
Rules to filter log lines by their metadata labels.
Example:
labelFilter:
- field: container
  operator: In
  values:
  - nginx
- field: pod_labels.tier
  operator: Regex
  values:
  - prod-.+
  - stage-.+
- spec.labelFilter.field
Required value
Label name for filtering.
Must not be empty.
Pattern:
.+
- spec.labelFilter.operator
Required value
Operator for log field comparisons:
- In — finds a substring in a string.
- NotIn — the inverse of the In operator.
- Regex — tries to match a regexp against the field; only log events with matching fields will pass.
- NotRegex — the inverse of the Regex operator; log events without the field or with non-matching fields will pass.
- Exists — drops the log event if it contains certain fields.
- DoesNotExist — drops the log event if it does not contain certain fields.
Allowed values:
In, NotIn, Regex, NotRegex, Exists, DoesNotExist
- spec.labelFilter.values
Array of values or regexes for the corresponding operations. Does not apply to the Exists and DoesNotExist operations.
Fields with float or boolean values will be converted to strings during comparison.
- spec.labelSelector.matchExpressions
List of label expressions for Pods.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- spec.labelSelector.matchExpressions.key
A label name.
- spec.labelSelector.matchExpressions.operator
A comparison operator.
Allowed values:
In, NotIn, Exists, DoesNotExist
- spec.labelSelector.matchExpressions.values
A label value.
- Element of the array
Pattern:
[a-z0-9]([-a-z0-9]*[a-z0-9])?
Length:
1..63
- spec.labelSelector.matchLabels
List of labels which Pod should have.
Example:
matchLabels:
  foo: bar
  baz: who
- spec.logFilter
A list of filters for logs that are applied to messages in JSON format.
Only matching lines will be stored to the log destination.
Example:
logFilter:
- field: tier
  operator: Exists
- field: foo
  operator: NotIn
  values:
  - dev
  - 42
  - 'true'
  - '3.14'
- field: bar
  operator: Regex
  values:
  - "^abc"
  - "^\\d.+$"
- spec.logFilter.field
Required value
Field name for filtering. It should be empty for non-JSON messages.
- spec.logFilter.operator
Required value
Operator for log field comparisons:
- In — finds a substring in a string.
- NotIn — the inverse of the In operator.
- Regex — tries to match a regexp against the field; only log events with matching fields will pass.
- NotRegex — the inverse of the Regex operator; log events without the field or with non-matching fields will pass.
- Exists — drops the log event if it contains certain fields.
- DoesNotExist — drops the log event if it does not contain certain fields.
Allowed values:
In, NotIn, Regex, NotRegex, Exists, DoesNotExist
- spec.logFilter.values
Array of values or regexes for the corresponding operations. Does not apply to the Exists and DoesNotExist operations.
Fields with float or boolean values will be converted to strings during comparison.
- spec.multilineParser
Multiline parser for different patterns.
- spec.multilineParser.custom
Multiline parser custom regex rules.
- spec.multilineParser.custom.endsWhen
A condition to distinguish the last log line of a multiline log event.
- spec.multilineParser.custom.endsWhen.notRegex
A regex string; only lines that do NOT match the regex are treated as a match.
- spec.multilineParser.custom.endsWhen.regex
A regex string; only lines that match the regex are treated as a match.
- spec.multilineParser.custom.startsWhen
A condition to distinguish the first log line of a multiline log event.
- spec.multilineParser.custom.startsWhen.notRegex
A regex string; only lines that do NOT match the regex are treated as a match.
- spec.multilineParser.custom.startsWhen.regex
A regex string; only lines that match the regex are treated as a match.
- spec.multilineParser.type
Required value
Parser types:
- None — do not parse logs.
- General — tries to match generic multiline logs that use a space or tab on continuation lines.
- Backslash — tries to match bash-style logs with a backslash on every line except the last line of the event.
- LogWithTime — tries to detect events by timestamp.
- MultilineJSON — tries to match JSON logs, assuming the event starts with the { symbol.
- Custom — tries to match logs with the user-provided regex in the spec.multilineParser.custom field.
Default:
"None"
Allowed values:
None, General, Backslash, LogWithTime, MultilineJSON, Custom
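A complete PodLoggingConfig assembled from the fields above might look like this (a sketch: the apiVersion is assumed from the version noted above, and all names are hypothetical):

```yaml
apiVersion: deckhouse.io/v1alpha1   # assumed; check your installed CRD
kind: PodLoggingConfig
metadata:
  name: whispers-logs       # hypothetical name
  namespace: tests-whispers # logs are collected only from this namespace
spec:
  labelSelector:
    matchLabels:
      app: whispers
  clusterDestinationRefs:
  - loki-storage   # metadata.name of a ClusterLogDestination
```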