Score:0

Aggregate multiple log files in a directory


I have a single-node k3s cluster running on a machine. I do not have any logging infrastructure set up yet, and I'd like to leave that as a future learning experience for now.

On that k3s cluster I run some cron jobs which write the logs of each job run into a separate file. I can observe them in /var/log/containers/cron-job-* on the host machine. These logs disappear after a certain amount of time (successfulJobsHistoryLimit: 3), and new job instances create new log files.

I'm unable to find a simple tool that could watch that log directory, preferably with a file name pattern, and stream/join those small job logs into a single log file, including new files as they are created. I don't mind if the file name is lost; I just want the log lines to end up in one file serving as an archive of all the job runs.

What have I considered?

I could just add a script that cats those files and appends them to a target file at an interval, but I'd have to keep track of which files have already been appended in case the jobs get out of sync or the cron interval changes. I might also like to extend this functionality to long-running pods, in which case I'd have to start tracking updated lines in the logs.
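A minimal sketch of that interval script, just for illustration (the paths and the state-file idea are made up, not something I'm actually running):

#!/bin/sh
# Append every not-yet-seen cron job log to the archive and
# remember processed file names in a state file.
STATE=/var/log/cron-archive.state
ARCHIVE=/var/log/cron-archive.log
touch "$STATE"
for f in /var/log/containers/cron-job-*; do
  [ -f "$f" ] || continue
  if ! grep -qxF "$f" "$STATE"; then
    cat "$f" >> "$ARCHIVE"
    echo "$f" >> "$STATE"
  fi
done

This only handles files that appear once and never grow, which is exactly why it breaks down for long-running pods.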

All examples I have found deal with real-time tailing on screen, which is not what I need. I kind of need multi-tailing into a target log file.
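For illustration, the closest I can get with standard tools is something like the line below, but the shell expands the glob only once at startup, so log files created after that are never picked up:

tail -q -n +1 -F /var/log/containers/cron-job-* >> /var/log/cron-archive.log

(-q suppresses the ==> file <== headers, -n +1 reads each file from the beginning, -F keeps following across rotation.)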

Any ideas? (I'd also accept some kind of simple Kubernetes logging hook example)

Rob
Running a syslog server is extremely trivial. Normally cron already logs to syslog, and that syslog can easily be reconfigured to redirect/copy those messages to a central syslog server. That completely avoids merging log files.
Klamber
@Rob Would you happen to know how to wire up sending k3s cron job logs to syslog?
Score:1

I offer one solution, which I have now chosen for myself. It is not the answer I was looking for, but one that seems to go with the flow. I'm still curious whether this could be handled by some common Unix command.

Anyway, here's what I did:

The common way seems to be a tool called Fluentd, which allows you to collect logs from various sources and transport them anywhere you see fit; a kind of ETL for logs.

I chose to send the logs to a syslog server since I already had one running, but you could choose any of the output plugins from here: Output plugins. There is also a large set of additional plugins: All plugins.

Step 1

Get a Fluentd setup that has the remote_syslog plugin installed. It does not come with the official Docker image, but you can set it up yourself.

FROM fluent/fluentd:v1.14.0-1.0
USER root

# https://github.com/fluent-plugins-nursery/fluent-plugin-remote_syslog
RUN fluent-gem install fluent-plugin-remote_syslog

USER fluent

Build the image and push it to your registry.
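Assuming the image is called fluentd-remote-syslog and your registry is at registry.example.com (both placeholders), that is roughly:

docker build -t registry.example.com/fluentd-remote-syslog:v1.14.0 .
docker push registry.example.com/fluentd-remote-syslog:v1.14.0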

Step 2

Next, set up a Fluentd deployment manifest with read-only volume claims to access the pod logs. The actual files reside in /var/log/pods/*; /var/log/containers actually contains only symlinks, and we need the real files. Those logs are owned by root on the host machine, and the plain fluent user won't have access to read them, so we need to set some security contexts. For the sake of getting things working I have used the root group for fsGroup. Feel free to dig deeper and find/comment the most optimal solution security-wise.

apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
    spec:
      securityContext:
        fsGroup: 0
      volumes:
        - name: varlogpods-pv
          persistentVolumeClaim:
            claimName: pvc-var-log-pods
            ...
      containers:
        - name: fluentd
          image: your/fluentd-image

See my full manifest in this gist: fluentd-deployment
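For reference, the backing volume can be a plain hostPath PersistentVolume. A rough sketch of the pair (the claim name matches the manifest above; everything else is illustrative, see the gist for what I actually use):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-var-log-pods
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: /var/log/pods
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-var-log-pods
spec:
  storageClassName: ""
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi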

Step 3

Before you deploy, you also need to set up fluent.conf and describe some rules in it.

Mine is set to match a log line like this:

2022-04-26T20:05:00.847016854+03:00 stderr F time="2022-04-26 17:05:00" level=info msg="processing 3 records ..."

For more, look into the tail plugin: tail

    <source>
      @type tail
      @id in_tail_container_logs
      path "/var/log/pods/default_cron-*/*/*.log"
      pos_file "/tmp/cron_.log.pos"
      read_from_head true
      tag cron
      <parse>
        @type regexp
        expression /^(?<logtime>[^ ]*) .* level=(?<level>[^ ]*) msg="(?<message>[^"]*)"$/
        time_key logtime
        time_format %FT%T.%N%:z
      </parse>
    </source>

    <match cron>
      @type remote_syslog
      host 172.16.3.10
      port 514
      protocol udp
      severity info
      program "fluentd"
      hostname "k3sserver"

      <buffer>
      </buffer>

      <format>
        @type single_value
        message_key message
      </format>
    </match>

One important config attribute here is read_from_head true, which reads log files from the top. It is necessary for this scenario: since the pod logs rotate, we want Fluentd to read the full pod log, not just the few updated lines at the end. For a short cron job, the log file just appears and, without read_from_head, tailing won't report any of the initial lines in it.
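One assumption on the receiving end: the syslog server must actually accept UDP on port 514. With rsyslog that is typically these two lines (a sketch; my server already had this enabled):

module(load="imudp")
input(type="imudp" port="514")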

Step 4

Fiddle with the config and try, try again. Don't forget to restart your deployment after you have updated the config in the ConfigMap.
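Assuming the ConfigMap is called fluentd-config and the deployment fluentd (names are illustrative), that cycle looks like:

kubectl apply -f fluentd-config.yaml
kubectl rollout restart deployment/fluentd
kubectl logs -f deployment/fluentd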

A few bits from my search trail:

mangohost
