Every piece of software is different, yet all software produces, or at least should be able to produce, some form of diagnostic logs. Since service management software such as init (e.g. systemd) and Docker cannot easily offer the best interface for every sort of software, things often get dumbed down to the least common denominator: every program starts with stdin/stdout/stderr handles assigned to indexes 0/1/2. On most systems following that convention, one can tell a program that expects to open a file path, one way or another, to use /dev/stdout (or, more verbosely, /proc/self/fd/1), and it will get back what it already has at index 1 - the stdout handle it was equipped with when started.
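To make that concrete, here is a minimal sketch (assuming Linux and Python, run from a terminal): both writes end up on the same stdout the process was started with, and the /proc symlink shows what that stdout actually is.

    import os

    # Writing to the path /dev/stdout lands in the same place as the
    # stdout handle (fd 1) this process was started with.
    with open("/dev/stdout", "w") as f:
        f.write("hello via the path\n")

    os.write(1, b"hello via fd 1 directly\n")

    # /proc/self/fd/1 is the more verbose spelling of the same thing;
    # the symlink reveals what fd 1 points to (a tty, a pipe, a file, ...).
    print(os.readlink("/proc/self/fd/1"))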
And that seems to be what is happening here. Docker creates some interface where it wants to accept or store your logs. In the simplest case, it would open a file - and instead of placing that file inside the container, it would just pass the already open handle along when deciding what stdout to start the container with. Now Nginx has that handle in its stdout slot; all that is missing is to tell it to write there. Since Nginx expects a file path, not a handle index, this is where the slightly special path comes in: when Nginx works with that path as it would with any other regular file, it gets back what it was started with. This way Docker does not need to know about Nginx internals, nor does Nginx need to know about Docker internals. They can just do one of the most basic Unix things there is: write lines of text to files.
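As a rough model of that handoff - not Docker itself, just the same fd plumbing, with made-up names for illustration - here is a small Python sketch: the parent decides where stdout goes, while the child only ever deals with a file path.

    import os
    import subprocess
    import sys
    import tempfile

    # Where the "Docker" side wants the logs to end up (a stand-in for
    # whatever interface it actually uses).
    log_path = os.path.join(tempfile.mkdtemp(), "access.log")

    # The "Nginx" side: a program configured to log to a file, which was
    # simply handed the path /dev/stdout.
    child_code = 'open("/dev/stdout", "w").write("GET / 200\\n")'

    with open(log_path, "w") as log_file:
        # The parent wires its chosen destination into the child's fd 1.
        subprocess.run([sys.executable, "-c", child_code],
                       stdout=log_file, check=True)

    # The log line landed where the parent wanted it, and neither side had
    # to know anything about the other's internals.
    print(open(log_path).read())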
Now this is not ideal, given that Nginx does speak a more expressive language than just files, so its logs could be partitioned into facilities and severities right away. I guess lines of text were good enough - or more reliable - for the purpose of that container.