Logging from multiprocess docker containers
Well, it was mentioned in the comments, but for reference - I find the best solution to docker logging is generally to rely on the 'standard' multi-system logging mechanisms - specifically syslog - as much as possible.
This is because you can either use the inbuilt syslogd on your host, or use logstash as a syslogd. Logstash has an inbuilt syslog input, but that tends to suffer a bit from not being flexible enough, so instead I use plain TCP/UDP listeners and parse the logs explicitly - as outlined in "When logstash and syslog go wrong":
input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}
And then filter the log:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    }
    syslog_pri { }
  }
}
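To make the grok pattern concrete: a raw syslog line such as the (made-up) one below gets split into separate fields, and the syslog_pri filter then decodes the priority number into facility/severity (13 = user.notice):
<13>Oct 11 22:14:15 webhost nginx[1234]: GET /index.html 200
which yields roughly syslog_pri=13, syslog_timestamp="Oct 11 22:14:15", syslog_hostname=webhost, syslog_program=nginx, syslog_pid=1234 and syslog_message="GET /index.html 200".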
You can then feed this from logstash into elasticsearch - either on a remote host, in a local container, or (what I'm doing now) on a docker network with a multi-node elasticsearch instance. (I've rolled my own using a download and a dockerfile, but I'm pretty sure a standalone container exists too.)
output {
  elasticsearch {
    hosts => [ "es-tgt" ]
  }
}
The advantage here is that docker lets you use either --link or --net to specify a name for your elasticsearch container, so you can just alias the logstash config to point to the right location (e.g. docker run -d --link my_es_container_name:es-tgt -p 514:514 -p 514:514/udp mylogstash, or just docker run --net es_net ....).
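Spelled out, the --net route looks something like this (a sketch - es_net, es-tgt and mylogstash are just the names used above, and I'm assuming a stock elasticsearch image from the hub; on a user-defined network containers resolve each other by name, so naming the elasticsearch container es-tgt lines up with the output config):
docker network create es_net
docker run -d --net es_net --name es-tgt elasticsearch
docker run -d --net es_net -p 514:514 -p 514:514/udp mylogstash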
The docker network setup is slightly more convoluted, in that you need to set up a key-value store (I used etcd but other options are available). Or you can do something like Kubernetes.
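For the multi-host variant, the rough shape is to point each docker daemon at the key-value store and then create an overlay network on top of it (a sketch - the etcd address and interface are placeholders for your own setup):
docker daemon --cluster-store=etcd://etcd-host:2379 --cluster-advertise=eth0:2376
docker network create -d overlay es_net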
And then use kibana to visualise, again exposing the kibana port, but forwarding onto the elasticsearch network to talk to the cluster.
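Running kibana as a container on the same network looks something like this (a sketch, assuming the stock kibana image, which picks up the elasticsearch address from ELASTICSEARCH_URL):
docker run -d --net es_net -p 5601:5601 -e ELASTICSEARCH_URL=http://es-tgt:9200 kibana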
But once this is set up, you can configure nginx to log to syslog, along with anything else you want to routinely capture logs from. The real advantage IMO is that you're using a single service for logging, one which can be scaled (thanks to the networking/containerisation) according to your need.
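For example, nginx can ship its logs straight to that listener using its built-in syslog support (logstash-host below is a placeholder for wherever the logstash container's port 514 is reachable):
error_log  syslog:server=logstash-host:514;
access_log syslog:server=logstash-host:514 combined;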
End result is that /dev/stdout for the cron job was pointed at the wrong device: /proc/self/fd/1, when it should have been /proc/1/fd/1, because docker only expects one process to be running and that process's stdout is the only one it monitors.
So once I had modified the symlinks to point at /proc/1/fd/1 it should have worked; however, apparmor (on the host) was actually denying the requests (I was getting permission errors when echoing to /proc/1/fd/1) because of the default docker profile (which is automatically generated, but can be modified with --security-opt).
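In other words, the idea was to have the cron job write straight to PID 1's stdout, i.e. a crontab line along these lines (a sketch of the variant apparmor was blocking):
* * * * * root date > /proc/1/fd/1 2> /proc/1/fd/2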
Once over the apparmor hurdle it all works! That said, after looking at what would need to be modified in apparmor to allow the required request, I decided to use the mkfifo method as shown below.
Dockerfile
FROM ubuntu:latest
ENV v="RAND-4123"
# Run the wrapper script (to keep the container alive)
ADD daemon.sh /usr/bin/daemon.sh
RUN chmod +x /usr/bin/daemon.sh
# Create the pseudo log file to point to stdout
RUN mkfifo /var/log/stdout
RUN mkfifo /var/log/stderr
# Create a cronjob to echo into the logfile just created
RUN echo '* * * * * root date 2>/var/log/stderr 1>/var/log/stdout' > /etc/crontab
CMD "/usr/bin/daemon.sh"
daemon.sh
#!/bin/bash
# Start cron in the background so it can fire the scheduled jobs
cron
# Tail the two fifos forever - this keeps the container alive, and because tail
# writes to the container's stdout, `docker logs` sees everything the jobs write
tail -qf --follow=name --retry /var/log/stdout /var/log/stderr
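With those two files in place, the build/run/check cycle is the usual one (cron-fifo is just a made-up image/container name):
docker build -t cron-fifo .
docker run -d --name cron-fifo cron-fifo
docker logs -f cron-fifo
and you should see the date output from the cron job appear once a minute.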