Logs Monitoring
OpenTelemetry
Glouton embeds a log processing mechanism based on the OpenTelemetry Collector. This allows defining log receivers based on files, GRPC, or HTTP listeners, processing them with operators and filters, and finally exporting them to the Bleemeo Cloud Platform, where they can be browsed.
Configuration
Enabling log processing can be done with the following setting:
log.opentelemetry.enable: true

You can now specify additional log files and processing behavior; see the configuration examples below.
Enabling automatic discovery of service and container logs can be done with:
log.opentelemetry.auto_discovery: true

Once both settings are enabled, logs from containers and discovered services will be gathered and sent to the Bleemeo Cloud Platform.
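Both settings can also be written together in a Glouton configuration file. The sketch below assumes the dotted keys used on this page map to nested YAML, which is how Glouton configuration files are structured:

# Equivalent nested form of the two settings above (assumed mapping of the dotted keys).
log:
  opentelemetry:
    enable: true
    auto_discovery: true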
Log files
To handle the logs from a file (or a file pattern),
you can define receivers in the log.opentelemetry.receivers section of the configuration.
For instance, to gather the logs of an application that writes them in the /var/log/my-app/ folder:
log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/*.log'

This simple configuration will make the application’s logs appear in the log section of the panel.
Operators
To enrich or transform the logs, we can add operators that will process each log line. Our operators are based on Stanza’s operators, so any operator defined in an OpenTelemetry Collector-Contrib configuration can be directly reused here:
log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/access.log'
    operators:
      - type: add
        field: resource['service.name']
        value: 'My app'

Now, all logs handled by this receiver will have the resource attribute service.name set to “My app”.
The list of available operator types can be found in the Stanza operators documentation.
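For instance, a regex_parser operator (one of the Stanza operator types) extracts attributes from plain-text lines using named capture groups. The log line layout and field names below are illustrative, not a format Glouton ships with:

log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/output.log'
    operators:
      # Parse lines such as "2024-01-02T15:04:05Z INFO something happened" (illustrative format).
      - type: regex_parser
        regex: '^(?P<time>\S+) (?P<level>\w+) (?P<message>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%SZ'
        severity:
          parse_from: attributes.level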
Additionally, we can use a known log format for the receiver:
log.opentelemetry.receivers:
  web-app:
    include:
      - '/var/log/nginx/access.log'
    log_format: nginx_access # default known log format
  my-app:
    include:
      - '/var/log/my-app/output.log'
    log_format: my_app_parser # custom known log format

We also allow including one or more known log formats within operators, like:
log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/access.log'
    operators:
      - type: add
        field: resource['service.name']
        value: 'My app'
      - include: my_app_parser # <- here
      - type: remove
        field: attributes['some-attr']

In any case, the format must be either one of the defaults we provide or defined like:
log.opentelemetry.known_log_formats:
  my_app_parser:
    - type: json_parser
      timestamp:
        parse_from: attributes.time
        layout: '%Y-%m-%dT%H:%M:%S.%L%z'
        layout_type: strptime
      severity:
        parse_from: attributes.level
        mapping:
          debug: 'DBG'
          info: 'INF'
          warn: 'WRN'
          error: 'ERR'
          fatal: 'FTL'
    - type: remove
      field: attributes['some-attr']

In the above example, the timestamp and severity sections are optional settings of the json_parser operator.
Similarly, inside the severity setting, the mapping field is optional.
Log filters
To control which log records should be exported and which ones shouldn’t, filters can be applied with:
log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/access.log'
    filters:
      log_record: # log records matching any item of this list will be dropped
        - 'IsMatch(body, ".*password.*")'

or
log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/access.log'
    filters:
      include:
        match_type: regexp
        resource_attributes:
          - key: 'some-attr'
            value: 'some value'
        #####
        record_attributes:                    #
          - key: 'http.response.status_code'  # This keeps logs that are either:
            value: '5..'                      #  * an HTTP 5xx error
          - key: 'http.request.method'        #  * OR an HTTP POST or PATCH
            value: 'POST|PATCH'               #  * OR a message with severity level equal to information
        severity_texts:                       #
          - 'info'                            #
        #####
        severity_number:
          min: 'INFO'
          match_undefined: true
        bodies:
          - 'log with status_code = 5\d\d' # When writing regexps, prefer using single quotes which save us from escaping backslashes
      exclude: # same structure as include

A few notes about filters:
- the rules declared in the log_record list must be written using the OpenTelemetry Transformation Language
- the value of the include.match_type and exclude.match_type fields can be either strict (exact equality of value) or regexp
- when defining multiple conditions using the include and exclude settings (e.g., both resource_attributes and bodies), a log record matching any of them will be considered as “included” or “excluded”
- the same behavior applies inside the resource_attributes, record_attributes, severity_texts and bodies sections: entries are considered as OR conditions
- if both include and exclude are specified, include filtering occurs first (see the sketch below)
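As a minimal sketch of that last point, the receiver below first includes only HTTP 5xx records, then excludes those coming from a health-check endpoint. The url.path attribute and the '/health.*' pattern are assumptions chosen for illustration:

log.opentelemetry.receivers:
  my-app:
    include:
      - '/var/log/my-app/access.log'
    filters:
      include: # applied first: keep only HTTP 5xx records
        match_type: regexp
        record_attributes:
          - key: 'http.response.status_code'
            value: '5..'
      exclude: # applied second: drop records from the health-check endpoint (attribute name assumed)
        match_type: regexp
        record_attributes:
          - key: 'url.path'
            value: '/health.*'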
Filters can also be applied globally with global filters.
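For instance, assuming global filters accept the same structure as receiver-level filters (as the examples on this page suggest), a global rule dropping every record that mentions a password could look like:

log.opentelemetry.global_filters:
  log_record: # records matching any of these OTTL conditions are dropped for all receivers
    - 'IsMatch(body, ".*password.*")'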
Containers
When log processing is enabled, Glouton automatically gathers logs from containers.
Log formats
Specifying a log format for a container can be done in two ways:
- By using container labels or pod annotations:

  docker run --label glouton.log_format=my_app_parser [...]

  See this section for more information about labels and annotations.
- If you can’t set container labels or pod annotations, you can use the container_format setting, with the container name:

  log.opentelemetry.container_format:
    my-app-ctr: my_app_parser
Log filters
Known log filters can be used the same way, with a label or annotation:
docker run --label glouton.log_filter=my_app_filter [...]

Similarly, a mapping between the container name and the known filter to apply can be defined
using the container_filter setting:
log.opentelemetry.container_filter:
  app-01: app-filter

Filters can also be applied globally with global filters.
For example, setting up a filter to only keep logs from a given container can be done with:
log.opentelemetry.global_filters:
  include:
    match_type: strict
    record_attributes:
      - key: 'container.name'
        value: 'app-01'

glouton.log_enable label
Disabling log processing for a given container, or enabling it even when automatic discovery isn’t active, can be done with the following label:
docker run --label glouton.log_enable=false [...]
docker run --label glouton.log_enable=true [...]

Services
Specifying formats and filters to use for a specific service type can be done by referencing a known log format and a known log filter, like:
service:
  - type: 'my-app'
    ...
    log_format: 'my_app_parser'
    log_filter: 'my_app_filter'

Handling specific log files can be done with:
- type: "my-app"
  ...
  log_files:
    - file_path: '/var/log/my-app/access.log'
      log_format: 'my_app_access_parser'
      log_filter: 'my_app_access_filter'
    - file_path: '/var/log/my-app/error.*.log'
      log_format: 'my_app_error_parser'
      log_filter: 'my_app_error_filter'

GRPC / HTTP
Glouton supports receiving logs over both GRPC and HTTP. See the GRPC and HTTP configuration sections for details.
Filtering can be done by using global filters.
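As an illustration only, enabling both listeners could look like the sketch below; the key names, addresses, and ports shown here are assumptions (4317 and 4318 are the standard OTLP ports), so refer to the GRPC and HTTP configuration sections for the actual settings:

# Key names below are assumed for illustration; see the GRPC and HTTP config sections.
log.opentelemetry.grpc.enable: true
log.opentelemetry.grpc.address: 127.0.0.1
log.opentelemetry.grpc.port: 4317 # standard OTLP/gRPC port
log.opentelemetry.http.enable: true
log.opentelemetry.http.address: 127.0.0.1
log.opentelemetry.http.port: 4318 # standard OTLP/HTTP port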
Fluentd
Glouton can read the logs from your applications and generate metrics from the number of lines matching a pattern.
Installation
Glouton uses Fluent Bit to collect the logs on your server.
You can install it as a Linux package, a Docker container, or a Helm chart on Kubernetes.
Linux Package
To support logs monitoring, you need to install the package
bleemeo-agent-logs. It will install Fluent Bit and configure it to talk with
your agent.
On Ubuntu/Debian:
sudo apt-get install bleemeo-agent-logs

On CentOS/Fedora:
sudo yum install bleemeo-agent-logs

Docker
First, you need to create the Fluent Bit configuration.
sudo mkdir -p /etc/glouton
cat << EOF | sudo tee /etc/glouton/fluent-bit.conf
[SERVICE]
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_PORT   2020

@INCLUDE /var/lib/glouton/fluent-bit/config/fluent-bit.conf
EOF

Then you can run the Fluent Bit container.
sudo mkdir -p /var/lib/glouton/fluent-bit/config
sudo touch /var/lib/glouton/fluent-bit/config/fluent-bit.conf
docker run -d --restart=unless-stopped --name bleemeo-agent-logs \
  -v "/etc/glouton/fluent-bit.conf:/fluent-bit/etc/fluent-bit.conf:ro" \
  -v "/:/hostroot:ro" -v "/var/lib/glouton:/var/lib/glouton" \
  -l "prometheus.io/scrape=true" -l "prometheus.io/port=2020" \
  -l "prometheus.io/path=/api/v1/metrics/prometheus" \
  kubesphere/fluent-bit:v2.0.6 --watch-path /var/lib/glouton/fluent-bit/config

Kubernetes
On Kubernetes, Fluent Bit can be installed with the official Helm chart.
Create the file values.yaml with the following content:
image:
  repository: kubesphere/fluent-bit
  tag: v2.0.6

podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "2020"
  prometheus.io/path: "/api/v1/metrics/prometheus"

config:
  service: |
    [SERVICE]
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_PORT   2020

    @INCLUDE /var/lib/glouton/fluent-bit/config/fluent-bit.conf
  inputs: ""
  filters: ""
  outputs: ""

args:
  - --watch-path
  - /var/lib/glouton/fluent-bit/config

daemonSetVolumes:
  - name: hostroot
    hostPath:
      path: /
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
  - name: lib
    hostPath:
      path: /var/lib/glouton/fluent-bit

daemonSetVolumeMounts:
  - name: hostroot
    mountPath: /hostroot
    readOnly: true
  - name: varlog
    mountPath: /var/log
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true
  - name: lib
    mountPath: /var/lib/glouton/fluent-bit

Then add the Helm repository and install the chart.
# Create the initial empty configuration.
sudo mkdir -p /var/lib/glouton/fluent-bit/config
sudo touch /var/lib/glouton/fluent-bit/config/fluent-bit.conf

# Install the Helm chart.
kubectl create namespace fluent-bit
helm repo add fluent https://fluent.github.io/helm-charts
helm upgrade --install -f values.yaml -n fluent-bit fluent-bit fluent/fluent-bit

Configuration
Glouton can create metrics by applying a regular expression to the log lines. Each metric corresponds to the number of lines per second that match your regular expression.
The logs can be given by path, by a container name, or by container labels or annotations.
log:
  inputs:
    # Select the logs by path, container name or labels.
    # path:
    # container_name:
    # container_selectors:

    # List of metrics to generate.
    filters:
      # Generated metric name.
      - metric: apache_errors_rate
        # Ruby regular expression to apply on each log line.
        # For testing purposes you can use the following web editor
        # to test your expressions: https://rubular.com/.
        regex: \[error\]

Select Logs by Path
If your application is running directly on the host, you can select the logs by path.
log:
  inputs:
    # Apache errors per second.
    - path: /var/log/apache/access.log
      filters:
        - metric: apache_errors_rate
          regex: \[error\]

Select Logs by Container Name
If your application is running in a container with a stable name, you can select its logs by the container name.
log:
  inputs:
    # Redis logs per second by container name.
    - container_name: redis
      filters:
        - metric: redis_logs_rate
          regex: .*

Select Logs by Container Labels
You can select the logs of a container by labels or annotations.
log:
  inputs:
    # UWSGI logs count by pod labels.
    - container_selectors:
        component: uwsgi
      filters:
        - metric: uwsgi_logs_rate
          regex: .*
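Multiple metrics can also be generated from a single input, since filters is a list of metrics to generate. The file path, metric names and patterns below are illustrative:

log:
  inputs:
    # Count errors and warnings separately from the same file.
    - path: /var/log/my-app/app.log
      filters:
        - metric: my_app_errors_rate
          regex: \[error\]
        - metric: my_app_warnings_rate
          regex: \[warn\]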