Configuring Centralized Logging with Graylog
Note: Graylog requires 4 CPU cores and 8 GB of RAM
NOTE: I use Ubuntu Server 22.04 LTS, which MongoDB does not currently support because libssl1.1 has been removed from 22.04.
There are two ways around this:
1) Install libssl1.1 from the 20.04 LTS repo (see the sketch below for reference)
2) Run MongoDB in a Docker container
Option 2 is the recommended choice until support for 22.04 LTS is officially added, and that is what we will do here.
When official support is added, these docs will be updated.
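For reference, option 1 usually amounts to pulling libssl1.1 from the focal (20.04) security repo. This is a widely used workaround rather than an officially supported setup, so treat it as a sketch only:
# Add the 20.04 (focal) security repo so apt can see libssl1.1
echo "deb http://security.ubuntu.com/ubuntu focal-security main" | sudo tee /etc/apt/sources.list.d/focal-security.list
sudo apt update
sudo apt install libssl1.1 -y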
To make configuring all the required programs easier, we will set them all up in Docker.
Step 1 - Install dependencies
sudo apt update
sudo apt upgrade -y
sudo apt install docker.io docker-compose pwgen -y
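Optionally, confirm the tools installed correctly and that the Docker daemon is enabled and running:
# Enable and start the Docker service (usually done automatically by the package)
sudo systemctl enable --now docker
docker --version
docker-compose --version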
Step 2 - docker-compose.yml
Create docker-compose.yml and add the following to it.
version: '3'
services:
  mongo:
    container_name: mongo
    image: mongo:4.2
    restart: unless-stopped
    environment:
      - TZ=Europe/Dublin
    volumes:
      - ./mongo_data:/data/db
    networks:
      - graylog
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    restart: unless-stopped
    environment:
      - TZ=Europe/Dublin
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Dlog4j2.formatMsgNoLookups=true -Xms512m -Xmx512m"
    volumes:
      - ./es_data:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  graylog:
    container_name: graylog
    image: graylog/graylog:4.3.2
    restart: unless-stopped
    environment:
      - TZ=Europe/Dublin
      - GRAYLOG_PASSWORD_SECRET=${GRAYLOG_PASSWORD_SECRET}
      - GRAYLOG_ROOT_PASSWORD_SHA2=${GRAYLOG_ROOT_PASSWORD_SHA2}
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    volumes:
      - ./graylog_data:/usr/share/graylog/data
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    depends_on:
      - mongo
      - elasticsearch
    networks:
      - graylog
    ports:
      - 9000:9000 # Graylog web interface and REST API
      - 1514:1514 # Syslog TCP
      - 1514:1514/udp # Syslog UDP
      - 12201:12201 # GELF TCP
      - 12201:12201/udp # GELF UDP

volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_data:
    driver: local

networks:
  graylog:
    driver: bridge
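Before starting anything, you can optionally have docker-compose parse the file; it prints the resolved configuration, or an error if the YAML indentation is off:
sudo docker-compose config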
Step 3 - Generate Graylog password secrets
Run the following command to generate one (-N 1) secure, fully random (-s) password that is 96 characters long:
pwgen -N 1 -s 96
Then run the command below to generate a SHA-256 hash of the password that will be used as the admin password for Graylog:
echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
NOTE: This is not the password I am using in production; it is just an example.
Create a new file named .env in the same folder as docker-compose.yml and add the password secret and password hash to it (these fill in the ${...} references in the compose file):
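It should look something like this, with the placeholders replaced by the pwgen output and the sha256sum output from above:
GRAYLOG_PASSWORD_SECRET=<96 character secret from pwgen>
GRAYLOG_ROOT_PASSWORD_SHA2=<SHA-256 hash of your chosen admin password>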
Step 4 - Spin up containers
For some reason, Graylog does not fetch its default config on first run. Run the following to download a default config file:
wget -P ./graylog_data/config/ https://raw.githubusercontent.com/Graylog2/graylog-docker/4.3/config/graylog.conf
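If the containers later fail to start with permission errors on the bind-mounted directories, it is usually because the users inside the containers cannot write to them. The Graylog image runs as UID 1100 and the Elasticsearch image as UID 1000 (adjust if your image versions differ), so ownership can be fixed with something like:
# Pre-create the bind-mount directories and hand them to the container users
mkdir -p graylog_data es_data mongo_data
sudo chown -R 1100:1100 graylog_data
sudo chown -R 1000:1000 es_data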
Then start the containers
sudo docker-compose up -d
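The first start can take a minute or two while Graylog waits for Elasticsearch and connects to MongoDB; you can watch progress with:
sudo docker-compose ps
sudo docker-compose logs -f graylog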
At this point, if the log server is running on an internal Proxmox network behind pfSense, it will be necessary to open/forward port 9000 on pfSense to reach the Graylog web interface (and port 1514 if clients outside that network need to pass logs to the log server).
Graylog should now be accessible at http://<logs-server VM IP or pfSense IP>:9000
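To confirm the web interface is reachable, a quick check from another machine (replace the placeholder with the actual address) should return HTTP response headers:
curl -sI http://<logs-server VM IP or pfSense IP>:9000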
Step 5 - Configure Graylog
From the login screen, log in as admin with the password you hashed earlier.
Now go to System -> Inputs
Select Syslog UDP and click 'Launch New Input'
Click Global, add a title, and set the port to 1514 (matching the docker-compose port mapping and the rsyslog line in Step 6); the rest of the options can be left at their defaults.
Now click 'Show received messages', then click 'Not updating' and change it to '1 second'.
Step 6 - Configure clients
SSH to a server that you want to send logs from and create a new rsyslog config file:
sudo nano /etc/rsyslog.d/100-log-server.conf
Then add the following line
*.* @log-server:1514;RSYSLOG_SyslogProtocol23Format
Be sure to change log-server to the log server's IP address if hostname resolution is not configured.
Then restart rsyslog
sudo systemctl restart rsyslog
You should now see logs coming through on Graylog
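If nothing shows up, the client side can be checked quickly: rsyslogd -N1 validates the rsyslog configuration, and logger sends a test message through rsyslog that should appear in the input's message stream.
# Check the rsyslog config for syntax errors
sudo rsyslogd -N1
# Send a tagged test message through rsyslog
logger -t graylog-test "Hello from $(hostname)"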