Troubleshoot
Agent Files
Configuration
The agent reads its configuration files from:
- /etc/glouton/glouton.conf
- /etc/glouton/conf.d/*.conf
- etc/glouton.conf
- etc/conf.d/*.conf
- C:\ProgramData\glouton\glouton.conf
- C:\ProgramData\glouton\conf.d
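On Linux, you can quickly check which of these files are present; a minimal sketch, assuming the default Linux paths listed above:

ls -l /etc/glouton/glouton.conf /etc/glouton/conf.d/*.conf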
The default installation creates the following files:
- /etc/glouton/glouton.conf: common default configuration and a description of some customizable options.
- /etc/glouton/conf.d/05-system.conf: default options for integration with the system; for example, it enables the syslog logger.
- /etc/glouton/conf.d/30-install.conf: credentials used to communicate with the Bleemeo Cloud platform.
For more details on configuration files, see Configuration.
Diagnostic page
The Bleemeo agent has a built-in web server which provides a diagnostic page, available by default at http://localhost:8015/diagnostic
This page may help you find the cause of your issue.
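If you cannot open a browser on the server, you can fetch the page from the host itself; a minimal sketch, assuming curl is installed and the agent listens on its default port:

curl http://localhost:8015/diagnostic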
Diagnostic Archive
A diagnostic archive, which contains more details, including recent debug log messages, is available. This archive is primarily targeted at Bleemeo support and Bleemeo agent developers.
To retrieve the diagnostic.zip archive locally, you can run the following command:
TARGET_HOST={user}@{ip-of-your-server}
ssh $TARGET_HOST sh -c "'curl http://localhost:8015/diagnostic.zip || wget -O- http://localhost:8015/diagnostic.zip'" > diagnostic.zip
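Once retrieved, you may want to check that the archive is valid and see what it contains before sending it to support; a minimal sketch using the standard unzip tool:

unzip -l diagnostic.zip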
Logs
In case of trouble, the most valuable source of information is the log file.
Log messages may be in various locations, depending on how you run the agent:
- On Linux, when the agent is installed with a package or the standard method, logs go to syslog (usually /var/log/syslog or /var/log/messages). You can also use journalctl -u glouton -f to follow the latest logs (see the example after this list).
- On Windows, logs are usually in C:\ProgramData\glouton\logs.
- For Docker images, use docker logs.
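To follow the logs in real time, the commands below can be used; a minimal sketch, assuming a systemd-based installation or a container named glouton (the name used elsewhere on this page):

journalctl -u glouton -f
docker logs -f glouton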
The log destination is set in the configuration files. For example, in the case of a system installation, logging is set up in /etc/glouton/conf.d/05-system.conf:
logging:
  output: syslog
For more details, add the following to your configuration (/etc/glouton/conf.d/90-custom.conf) to increase the log level to DEBUG:
logging:
  level: DEBUG
After the configuration change, the agent will reload automatically. You can force it to restart with systemctl restart glouton or docker restart glouton.
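Putting it together, the following sketch enables DEBUG logging and follows the resulting logs; it assumes a system installation managed by systemd and the 90-custom.conf path mentioned above:

# write the custom configuration (DEBUG log level)
sudo tee /etc/glouton/conf.d/90-custom.conf <<'EOF'
logging:
  level: DEBUG
EOF
# restart the agent and follow its logs
sudo systemctl restart glouton
journalctl -u glouton -f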
Duplicated agent
On each server, the agent should use its own credentials. If an agent detects that another agent is already connected to the Bleemeo Cloud platform using the same credentials, it will stop sending metrics to avoid overwriting data. It will also log an error message and send an email to the account managers to notify them of the problem.
To detect that another agent is using the same credentials, the agent checks whether some server properties (like the FQDN, the MAC address or even the Bleemeo agent PID) have been modified on the Bleemeo API by something other than the agent itself. The agent logs which properties changed.
There are three main cases where this can happen:
- Two agents are running on the same host; in this case, you should stop one of them.
- You are migrating a server to new hardware; you should follow our migration guide.
- You copied the state to another server (possibly because you cloned a server through an AMI or a server image). On the server where the state was copied, you should stop the agent, remove both state files, and restart the agent (as shown below).
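A minimal sketch of that last case, assuming a systemd installation and the default state file locations described in the State section below:

# stop the agent, remove both state files, then start it again
sudo systemctl stop glouton
sudo rm /var/lib/glouton/state.json /var/lib/glouton/state.cache.json
sudo systemctl start glouton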
If you want to create a cloud image, please follow the installation guide for cloud image creation.
State
The agent is stateful: it keeps in its state some information specific to the server that runs it, for example its registration ID, the metrics seen, and the metrics registered with the Bleemeo Cloud platform.
There are two state files:
- A static state file that stores static information on the agent (like its credentials for the Bleemeo Cloud platform). It is usually stored in /var/lib/glouton/state.json.
- A cache state file that stores the cache of detected services and metrics. It is usually stored in /var/lib/glouton/state.cache.json.
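To confirm that both files are where you expect them (for example before removing them in the duplicated-agent case above), a quick listing works; this assumes the default Linux locations:

ls -l /var/lib/glouton/state.json /var/lib/glouton/state.cache.json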
Memory locking
Glouton needs credentials to gather metrics from some software (e.g., PostgreSQL or vSphere), and to keep them safe it stores them in locked memory. As a consequence, the lockable memory limit set for the Glouton process must be large enough for this. A limit of 8 MB should be enough for most use cases.
If the current limit isn't enough, a warning message like this one will be logged:
The amount of lockable memory (64 kB) may be insufficient, and should be at least 192 kB.
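You can check the limit that currently applies to the running agent; a minimal sketch for Linux, assuming a single process whose binary is named glouton:

# shows the soft and hard limits, in bytes, for the running process
grep "Max locked memory" /proc/$(pidof glouton)/limits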
On Unix systems, the ulimit can be defined with:
- systemd: add LimitMEMLOCK=8M:8M to the [Service] section of the unit file (a drop-in example is shown after this list).
- Docker: add the option --ulimit memlock=8388608:8388608 to the docker run command.
- Docker Compose: add the following section to the Glouton service:
ulimits:
  memlock:
    soft: 8388608
    hard: 8388608
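For the systemd case, a common way to apply the limit without editing the packaged unit file is a drop-in override; a minimal sketch, assuming the service is named glouton (the drop-in file name memlock.conf is arbitrary):

# /etc/systemd/system/glouton.service.d/memlock.conf
[Service]
LimitMEMLOCK=8M:8M

Then reload systemd and restart the agent:

sudo systemctl daemon-reload
sudo systemctl restart glouton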