Score:0

How to deal with outdated Docker images?

cn flag
mnj

Docker images are built at some point in time and later fetched by users, who create containers based on them. Seeing how often updates show up on Linux distros (like Ubuntu), doesn't that mean the images are outdated pretty much the day after they get published to an image repository?

Let's say someone creates an image for their app:

FROM ubuntu
RUN apt update -y # or whatever the command to update on Ubuntu is

WORKDIR /myapp
COPY ./* ./
# and some other stuff

The app gets built one day (using the ubuntu:latest from that day), all the latest patches are applied thanks to the RUN apt update -y, and the image gets pushed to a Docker repository. Now, what if the next day a critical update is published to Ubuntu's apt repo (say, an openssl patch)? What about my app? Is it unsafe until I manually decide to rebuild the image and push it again? Or should we perhaps not care about the image/container being outdated at all, and only worry about keeping the host that runs the containers up to date? If so, why?

in flag
The short answer is "yes", but a more accurate way of looking at it would be: "Docker containers are expected to be ephemeral. If you always need the latest patches and plan on updating containers as though they were virtual machines or dedicated hardware, then Docker is not the droid you're looking for." One of the nice features of Docker containers is that they can be left frozen in time, which allows older bugs/exploits to be investigated. Stale containers can also save people who have *very* old databases on a failed system and no backups.
pzkpfw avatar
us flag
If your containers survive for more than 24h without being rebuilt and you are worried about them missing security updates, why not just run `apt-get update && apt-get upgrade -y` from within the container? You could do this via cron, or through some external mechanism. Also, `apt update -y` does **not** upgrade your packages; it simply refreshes the list of available packages.
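A minimal sketch of that external-mechanism variant: a cron entry on the Docker host that patches a running container in place (the schedule and the container name `myapp` are illustrative, and the changes only live as long as the container does):

# /etc/cron.d/patch-myapp on the Docker host
0 4 * * * root docker exec myapp sh -c 'apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y'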
mnj avatar
cn flag
mnj
Right, I don't use Ubuntu daily, so I forgot that it's `upgrade` :)
Score:1
us flag

Instead of posting comments, I'll summarize my thoughts in an answer:

> [are] the images (...) outdated pretty much the day after they get published to image repository?

Yes, but only to the extent that this is also true for any server on any operating system and platform. Arguably the window of vulnerability is, on average, shorter for containers, since they tend to have their base OS layer upgraded more often than a "normal" server would. But that does not mean outdated images can be ignored, and it is not always true either -- it's easy to imagine VMs out there that are patched more often than some containers are re-deployed.

If you install a regular Ubuntu VM and leave it for 30 days, you will most likely have unapplied security fixes. The same goes for a container that stays deployed for 30 days. In both cases you should have patching procedures in place if you do not otherwise ensure the OS level stays up to date (for example, by re-deploying the entire container/VM from a new OS image).
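A minimal sketch of the re-deploy approach (the image and container names are hypothetical): `--pull` refetches the latest base image, and `--no-cache` forces the `RUN apt-get ...` layer to execute again instead of being reused from the build cache.

docker build --pull --no-cache -t myapp:latest .
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:latest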

> What about my app? Is it unsafe now until I manually decide to rebuild the image and push it again?

That depends, of course, on what your application's dependencies are. The degree to which your application is "unsafe" relates to how critical the hypothetical security-fixed package is to your application.

If you are running a Java application that uses log4j and a critical vulnerability in it is fixed in the Ubuntu repos, you are equally vulnerable whether you run in a container or in a VM until you update that package -- whether that happens via apt-get or by re-deploying the entire VM/container from a new OS image that includes the fixed package is not the important part.
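Both routes amount to the same thing. As an illustration, here is the in-place variant for the openssl example from the question (`myapp` is a hypothetical container name, and the fix is lost when the container is re-created):

docker exec myapp apt-get update
docker exec myapp apt-get install --only-upgrade -y openssl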

The fundamental question -- "is my application safe, or is it affected by vulnerabilities in outdated packages?" -- is not really Docker-specific or even Ubuntu-specific; the issue is the same everywhere. The one case that clearly does differ between VMs and containers is the kernel: all containers share a kernel with their host, so a kernel vulnerability cannot be patched from within a container and depends entirely on the kernel version running on the host where the container is deployed.
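This is easy to verify (the container name is hypothetical): the two commands below print the same kernel version, because the container has no kernel of its own.

uname -r                      # kernel version on the host
docker exec myapp uname -r    # identical output from inside the container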

Score:0
vn flag

Main points:

  1. Most container images are rebuilt regularly to include the latest updates, including security updates.

  2. Containers aren't as vulnerable as the base system, because:

    • They run in an isolated environment
    • They use only a subset of packages
    • The kernel still belongs to the base system
  3. If you're actually worried about a particular image, rebuild it yourself; the Dockerfile can usually be found on GitHub or similar (see the sketch below).
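A minimal sketch of that rebuild (the repository URL and tag are hypothetical):

git clone https://github.com/example/someimage.git
cd someimage
docker build --pull -t someimage:rebuilt .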

In general, I wouldn't worry too much about this. On the other hand, if you're relying on a third party to build and provide your images, then, as always, only trust reputable sources.

For production use, it's generally recommended always to build your own images, for the exact same reasons.

pzkpfw avatar
us flag
Completely incorrect. A vulnerable library inside a container still means your application is vulnerable; the fact that the vulnerability is "contained" is irrelevant. Your point only applies to **kernel** vulnerabilities, since all containers share a kernel by definition. Containers should still be rebuilt or updated just like any other system to get OS-level security fixes for everything that is part of the container itself -- as libraries are, and bundling them is the reason containers exist in the first place.
Hi-Angel avatar
es flag
@pzkpfw I don't see what you're disagreeing with. Artur didn't claim a vulnerability in a lib is irrelevant in a containerized environment. Instead, Artur literally said, and I'm quoting, "Containers aren't as vulnerable". That doesn't imply you don't have to pay attention to security bugs in the libs you're using, only that the harm they may cause might be *(or might not, it depends)* less serious.
pzkpfw avatar
us flag
The question is explicitly about Ubuntu updates, which means *any* updates, not only kernel updates. So answering "is not running Ubuntu updates on a container bad?" with "well, containers aren't as vulnerable" makes no sense; it implies that skipping updates is less bad than in a non-containerized environment, which is not true. You also imply that most containers host applications that are somehow isolated from the rest of the environment to a larger extent than "normal" servers, which is not the case by definition -- it can be true or false.