Score:12

Why is a binary built on 21.10 not compatible with a 21.04 install?

cn flag

I don't understand why a binary built on 21.10 is not compatible with a 21.04 system.

The binary is linked against libc.so.6 which is available on the 21.04 OS version as well.

Same binary, on the 21.10 system:

$ ldd turboledzd
    linux-vdso.so.1 (0x00007ffdc2595000)
    libhidapi-hidraw.so.0 => /lib/x86_64-linux-gnu/libhidapi-hidraw.so.0 (0x00007fdd64057000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fdd63e2f000)
    libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007fdd63e06000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fdd64085000)

And on the 21.04 system:

$ ldd turboledzd 
./turboledzd: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./turboledzd)
    linux-vdso.so.1 (0x00007fff9c570000)
    libhidapi-hidraw.so.0 => /lib/x86_64-linux-gnu/libhidapi-hidraw.so.0 (0x00007f37ec402000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f37ec216000)
    libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007f37ec1ed000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f37ec423000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f37ec1cb000)

My question:

If libc.so.6 from 21.04 is not compatible with libc.so.6 from 21.10, then why isn't the libc on 21.10 called libc.so.7 instead?

Or, better, why is it not linked against something called libglibc.so.2.34 - if that is a dependency?

N0rbert avatar
zw flag
Use 20.04 LTS if you want to get hassle-free long life for your application.
marcelm avatar
cn flag
_"Or, better, why is it not linked against something called libglibc.so.2.34"_ - How do you think that would help? In both cases the program won't run until you install an updated glibc.
cn flag
tl;dr: Yes it's a stupid mess. If it wasn't then it wouldn't be Linux. For comparison, imagine how dumb it would be if a program you compiled on Windows 10 20H2 didn't run on Windows 10 20H1.
cn flag
@user541686: It's not specific to Linux; it's a result of the shared AT&T heritage of Unix and Linux. Specifically, `libc` is treated as part of the OS. On Windows, `MSVCRT*.DLL` is part of Visual Studio, not Windows, and multiple versions can co-exist. Other compilers for Windows can similarly ship their own libraries. The Windows API itself is strongly versioned in its SDK, and has been since at least Windows 95.
Score:25
us flag

If libc.so.6 from 21.04 is not compatible with libc.so.6 from 21.10, then why isn't the libc on 21.10 called libc.so.7 instead?

libc.so is as core a library as they come; nearly everything depends on it. One of glibc's goals is to provide backwards compatibility: a program that runs with an older libc.so.6 should (usually) also work fine with a newer release. However, if you bumped the soname to libc.so.7 just because you added some new function, then every previously-built program would need a rebuild for no good reason. There hasn't yet been a break in glibc's ABI major enough to warrant that.
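
To see what building on a newer release actually changes, you can list the symbol versions a binary requires (a quick check assuming binutils is installed; `/bin/ls` here stands in for any dynamically linked binary such as the `turboledzd` in the question):

```shell
# List the glibc symbol versions a dynamically linked binary requires.
# The highest version listed must exist in the target system's libc.so.6.
objdump -T /bin/ls | grep -o 'GLIBC_[0-9.]*' | sort -u
```

If the list tops out at `GLIBC_2.34`, the binary will not load against 21.04's glibc 2.33, which is exactly the `version 'GLIBC_2.34' not found` error in the question.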

I don't understand why a binary built on 21.10 is not compatible with an 21.04 system.

Nobody guarantees forwards compatibility (which is what you are expecting when you want 21.04 to run something built on 21.10) - why would you expect it if you take no precautions to ensure it?

st flag
Glibc has been at library version 6 for as long as I have been doing Linux, if I remember correctly.
muru avatar
us flag
Since [1997, when the Linux libc/glibc split ended](https://man7.org/linux/man-pages/man7/glibc.7.html), it seems.
Bram avatar
cn flag
So, never build anything on a new system, then, if you want to distribute a binary? How can I use a new OS to build binaries that run on older OSes too?
Incomputable avatar
cn flag
@Bram you can stick with a feature set that the oldest library in your distribution OSes provides.
Bram avatar
cn flag
@Incomputable I don't use any new features. I just use a new OS to compile on.
Guntram Blohm avatar
cn flag
This is why I keep a 10+ years old Fedora 12 virtual machine around (could as well be Ubuntu 10.04, but I only switched to Ubuntu with 14.04); whatever small tool I create for my own usage gets compiled there, and works on every machine I copy it to, no matter how old or new it is.
Ruslan avatar
bv flag
@Bram yes, just don't compile it on a newer system than your oldest target, and also don't use GCC newer than that target supports, at least for C++ code, because `libstdc++.so` will have exactly the same problem.
Ruslan avatar
bv flag
@Incomputable it's not easy with glibc/libstdc++, because when linking, you automatically use the latest ABI these libraries support, like `GLIBC_2.34` in the OP, and this is required by the final binary to be present in the library it loads at startup. Maybe there's some linker magic to do otherwise, but I'm not aware of it.
cn flag
@Bram "How can I use a new OS to build binaries" - docker helps a lot in situations like this. You can use `ubuntu:20.04` image for example, map your source directory inside it and build the binaries there. Saves you installing a separate glibc / crosscompiling.
cn flag
@viraptor It'd be helpful to explain what one might do without Docker. This problem is old, yet Docker only came around in 2013.
cn flag
@user541686 without it the answer is painful and too long for the comments. "Google to learn about installing multiple toolchains and crosscompiling" is the shortest pointer.
capr avatar
cn flag
Linux folks just can't comprehend why anyone would want to build a binary that should work on an older system. And they had _decades_ to understand simple things like that and they still don't get it.
muru avatar
us flag
@capr and what's stopping you from doing so?
capr avatar
cn flag
@muru glibc's versioned symbols "feature" and gcc's lack of a "min-version" compatibility option (like OSX has).
muru avatar
us flag
@capr ah, but building for older versions is a solved problem - even 7-8 years ago when I dabbled in packaging, pbuilder and the like was already a mature workflow. These days with Docker it's even easier still. I'm not a macOS guy, so I don't know if macOS has equivalents for *those*.
capr avatar
cn flag
@muru the fact that you're recommending that I build with Docker, thus still on an older Linux just that now it's in a VM shows precisely the lack of understanding of the problem that I was talking about. On an older Linux you have an old gcc! That's not what you want at all. That's just a hack. It's the opposite of engineering.
muru avatar
us flag
@capr eh, with Docker, you can also easily make an image with your old glibc and new GCC. All I see here is a lack of interest in using well known, well understood solutions and instead simply whining when things aren't like $MY_FAVOURITE_OS
capr avatar
cn flag
@muru so to get back to your original question, what's "stopping" me from doing so is not wanting to put together a VM and building the entire compiler toolchain, and short of doing that, I'm "whining", understood.
Score:13
us flag

According to packages.ubuntu.com, 21.04 uses glibc 2.33, whereas 21.10 uses glibc 2.34, which are not completely compatible.
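
A quick way to check which glibc a given system ships (on glibc-based distros, `ldd` reports the library's own version):

```shell
# Print the installed glibc version; per packages.ubuntu.com this is
# 2.33 on 21.04 and 2.34 on 21.10.
ldd --version | head -n1
```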

However, you should be able to build binaries for Ubuntu 21.04 from the source code.

Unless the source is interpreted, you usually need to build binary packages separately for different versions of Ubuntu. Launchpad can automate that for you.

why isn't the libc on 21.10 called libc.so.7 instead?

That is a decision only the developers of glibc can make.

cn flag
It is a decision that a distributor could make, I guess - but it would likely cause much more confusion and incompatibility than it would solve.... And if you truly want a system with multiple glibc versions, you probably need to have versioned ld-linux.so... which would need to be reflected in the binaries for such a system, making them incompatible with most anything else...
Score:2
ph flag

The term to google for is "glibc symbol versioning".

As this introduction explains, glibc contains multiple versions of each symbol that has changed over time and so libc.so.6 contains all glibc versions from 2.0 through to whatever version it says.
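
You can see that multiplicity directly (assuming binutils and an x86_64 glibc system; the library path matches the `ldd` output above but may differ on other architectures):

```shell
# A single libc.so.6 exports several versions of the same symbol;
# the version column shows which glibc release each variant belongs to.
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy$'
```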

When you link a new library or binary against it, you're using the .h files and exported symbols for the newest versions of the symbols.

As for accessing the older symbols, there's a question over on Stack Overflow named How can I link to a specific glibc version?, but since all your other dependencies are likely linking against the newest symbols too, it's much easier to use Docker or a chroot to target older system versions; otherwise you'll probably end up building one of those from scratch anyway.

The Python devs actually maintain Docker containers named manylinux... specifically for establishing a reliable baseline for building wheels (redistributable binary packages) for Python packages with compiled components.

I believe the Windows approach is closer to bundling multiple clearly defined profiles and urging all authors of precompiled libraries to offer builds targeting older profiles. (With the caveat that you have to assume that stuff must be freed by the same compilation unit that malloc'd it because PE doesn't have global symbols and different libraries may depend on different versions of the allocator with their own static variables and semantic differences.)

cn flag
**Windows** indeed has multiple clearly defined profiles, and still documents them back to NT 4.0: https://docs.microsoft.com/en-us/cpp/porting/modifying-winver-and-win32-winnt . But it doesn't assume anything about `malloc`/`free`; those aren't Windows functions. Windows uses `HeapAlloc`/`HeapFree`, and it only requires that you use the same heap for both.
ph flag
@MSalters Regardless of whether you're using the Win32 API or the POSIX API, my point was that the Microsoft documentation I read took a very "If it breaks, you get to keep the pieces" attitude toward allocating in one build artifact and freeing in another... just don't ask me to find it again. It was WinXP-era and Microsoft has reshuffled their URLs and re-themed MSDN since then.
cn flag
Microsoft's documentation on MSVC had those warnings - the compiler does implement `malloc`, of course.