The point is that RedHat went on a tirade for years telling everyone: "Docker bad, root! Podman good, no root! Docker bad, daemon! Podman good, no daemon!".
And then here comes Quadlets and the systemd requirements. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.
For everyone else the OSS ecosystem that is Docker actually has less licensing overhead and restrictions, in the long run, than dealing with IBM/RedHat. IMO that is.
But...you don't need systemd or Quadlets to run Podman, it's just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).
But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).
Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.
SystemD runs as root. It's just ironic given all the hand-waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point.
I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from RedHat/IBM is, again, ironic.
> And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point
I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
There are no lockfiles to pin and commit dependency versions.
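(The closest workaround I know of is pinning the base image by digest, but nothing generates or updates those pins for you. A rough sketch, with a made-up digest:

FROM ubuntu@sha256:abc123...

And that only covers the base image, not anything a RUN step pulls off the network.)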
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build result in literally more than a year out-of-date dependencies because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
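(The only blunt fix I know of is to bypass the cache entirely, which throws the baby out with the bathwater; something like

docker build --pull --no-cache -t myimage .

where the tag is just a placeholder.)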
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
I think you're conflating software builds with environment builds - they are not the same and have different use cases people are after.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers, since copy-on-write (COW) is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system, that is trivial.
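For example (the image name is a placeholder):

docker run --read-only --tmpfs /tmp myimage

gives you a read-only root filesystem with a writable scratch dir.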
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
>> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
> I think you're conflating software builds with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.
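(To be fair, you can opt in to some of this; if I remember the flag right, docker build --network=none . cuts network access off from RUN steps. But it's opt-in, not the default, which is the whole point.)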
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
$ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.
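Concretely, something like this (container name is arbitrary) succeeds without complaint, and the write quietly dies with the container:

docker run --name demo alpine sh -c 'echo foo > /the_wrong_path'
docker rm demo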
> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
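(The closest thing today is a compose file, which is exactly the orchestrator-specific layer I'm complaining about. A minimal sketch, with made-up names:

services:
  app:
    image: example/app
    depends_on: [db]
    environment:
      DB_HOST: db
  db:
    image: mysql:8
    volumes:
      - dbdata:/var/lib/mysql
volumes:
  dbdata:

None of that wiring is discoverable from the images themselves; you read the app's docs and hand-write it again for each orchestrator.)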
> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
> They're not so different. An environment is just big software.
Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.
> Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path
Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't footgun yourself? I mean c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.
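To be concrete about "plenty of ways around it": a named volume (names here are placeholders) survives the container just fine:

docker run -e MYSQL_ROOT_PASSWORD=secret -v mydata:/var/lib/mysql mysql:8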
> If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.
> Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!
> > They're not so different. An environment is just big software.
> Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code.
You seem to be arguing about something entirely unrelated. GNU make, Portage, Nix, and rpmbuild also don’t provide tools to write, compile, or debug code.
> Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong.
This is the argument by which every instance of undefined behavior in C or C++ is entirely the fault of the developer doing it wrong, and there is no need for better languages.
And yes, I understand Docker networking. I also understand TCP and UDP just fine, and I’ve worked on low level networking tools and even been paid to manage large networks. And I’ve contributed to, and helped review, Linux kernel namespace code. I know quite well what’s going on under the hood, and I know why a Docker container has, internally, a port number associated with the port it exposes.
What I do not get is why that port number is part of the way you instantiate that container. The tooling should let me wire up a container’s “http” export to some consumer or to my local port 8000. The internal number should be an implementation detail.
It’s like how a program exposes a function “foo” and not a numerical entry in a symbol table. Users calling the function type “foo” and not “17”, even though the actual low-level effect is to call a number. (In a lot of widely used systems, including every native code object file format I’m aware of, the compiler literally emits a call to a numerical address along with instructions so the loader can fix up that address at load time. This is such a solved problem that most programmers, even assembly programmers, can completely ignore the fact that function calls actually go to more or less arbitrary numerical targets. But not Docker users: if you want to stick mysql in a container, you need to type in the port number used internally in that particular container.)
There are exceptions. BIOS calls were always by number, as are syscalls. These are because BIOS was constrained to be tiny, and syscalls need to work when literally nothing in the calling process is initialized. Docker has none of these excuses. It’s just a handy technology with quite poorly designed tooling, with nifty stuff built on top despite the poor tooling.
> Why is the port number part of the way you instantiate the container?
Because that’s how networking works in literally every system ever. Containers don’t magically "export" services to the world. They have to bind to a port. That’s how TCP/IP, networking stacks, and every server-client model ever designed functions. Docker is no exception. It has an internal port (inside the container) and an external port (on the host), again, when we're dealing with the default bridge networking. Mapping these is a fundamental requirement for exposing services. Complaining about this is like whining that you have to plug in a power cable to use a computer. Clearly your "expertise" in networking is... Well. Another misunderstanding.
> The tooling should let me wire up a container’s 'http' export to some consumer or to my local port 8000.
Ummmm... It does. It's called: Docker Compose, --network, or service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup.
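For example (names are arbitrary):

docker network create mynet
docker run -d --network mynet --name web nginx
docker run --rm --network mynet alpine wget -qO- http://web/

The second container reaches the first by name, with no published ports involved.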
But you still need to map ports when exposing to the host because… Guess what? Your host machine isn't psychic. It doesn’t magically figure out that some random container process running an HTTP server needs to be accessible on a specific port. That’s why port mapping exists. But you already know this because "you understand TCP and UDP just fine".
> The internal number should be an implementation detail.
This is hands-down the dumbest part of the argument. Ports are not just "implementation details." They're literally how services communicate. Inside the container, your app binds to a port (usually one) that it was explicitly configured to use.
If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000). Otherwise, how the hell is Docker (or anyone) supposed to know what port to use? According to you - the software should magically resolve this. And guess what? You don’t have to expose ports if you don’t need to. Just connect containers via a shared network which happens automagically via container name resolution within Docker networking.
Saying ports should be an "implementation detail" is like saying street addresses should be an implementation detail when mailing a letter. You need an address so people know where to send things. I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O
I feel like we're talking right past each other or something.
Of course every TCP [0] and UDP networking system ever has port numbers. And basically every CPU calls functions with numeric addresses. And you plug in power cables to use a computer. Of course Docker containers internally use ports -- if I have a Docker image plus its associated configuration, and I instantiate it as a container, and it uses its internal port 8080 to expose HTTP, then it uses a port number.
But this whole conversation is about Docker's tooling, not about the underlying concept of containers.
And almost every system out there that has decent tooling has abstraction layers to make this nicer. In AT&T assembly language, I can type:
1:
... code goes here
and that code is called "1" in that file and is inaccessible from outside. If I want to call it from outside, I type something more like:
name_of_function:
... code goes here
with maybe a .globl to go along with it. And I call it by typing a name. And that call still calls the numeric address of that function.
If I plug in a power cable to use a computer, I do not plug it into port 3 on the back of the computer, such that accidentally plugging it into port 2 will blow a fuse. I plug it into a port that has a specific shape and possibly a label.
So, yes, I know that "If an app inside a container is listening on port 5000, but you want to access it on port 8000, you must declare that mapping (-p 8000:5000)", but that's not a good thing. Of course, if it's listening on port 5000, I need to map 8000 to 5000. But the fact that I had to type -p 8000:5000 is what's broken. The abstraction layer is missing. That should have been -p 8000:http or something similar.
And the really weird thing is that the team that designed Dockerfile seemed to have an actual inkling that something was needed here, which is why we have:
EXPOSE 8080
VOLUME ["/mnt/my_data"]
but they completely missed the variant that would have been good:
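EXPOSE 8080 AS http    (hypothetical syntax, not real Dockerfile)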
or whatever other spelling of the same concept would have passed muster.
And yes, Docker Compose helps, but that's at the wrong layer. Docker Compose is a consumer of a container image. The mapping from logical exposed service to internal port should have been handled at an abstraction layer below Docker Compose, and Compose and Quadlet and Kubernetes and the command line could all share that abstraction layer.
> ... service discovery. You can use docker run -p 8000:80 or define a Docker network where containers resolve each other by name. You already don’t have to care about internal ports inside a proper Docker setup
Can you point me at some relevant reference? Because, both in my experience and from (re-)reading the basic docs, all of the above is about finding an IP address by which to communicate with a relevant service, not about port numbers, let alone internal port numbers (which are entirely useless to discover from inside another container, because you can't use them there anyway). Even Docker Swarm does things like:
$ docker service create ... --publish published=8080,target=80
and that's another site, external to the container image in question, where one must type in the correct internal port number.
> I'm sure you get all sorts of riled up when you need to put an address on a blank envelope because the mail should just know... Right? o_O
I will take this the most charitable way I can. Sure, it's mildly annoying that you have to use someone's numerical phone number to call them, and we all have contact lists to work around this, but that's still missing the target. I'm not complaining about how you address a docker container, and it makes quite a bit of sense that you need someone's phone number to call them. But if you had to also know that the particular phone you were calling had its microphone on port 83, and you had to tell your phone that their microphone was port 83 if you wanted to hear them, and you had to change your contact list if they changed phone models, then I think everyone would be rightly annoyed.
So I stand by my assertion: Docker's tooling is not very good.
[0] But not every networking protocol ever. Even in the space of non-obsolete protocols, IP itself has no port numbers. And the use of a tuple (name or IP, port) is actually a perennial source of annoyance, and people try to improve it periodically, for example with RFC 2782 SRV records and, much more recently, RFC 9460 SVCB and HTTPS records. This is mostly off-topic, as these are about externally visible ports, and I’m talking about internal port numbers.
I don't see your point. This is exactly how Docker works. Containers that are running when instantiated from the Docker daemon don't need to be run as root. But you can... Just like your containers started from SystemD (quadlet).
I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?
Quadlets is systemd. Red Hat declared it to be the recommended/blessed way of running containers. podman compose is treated like the bastard stepchild (presumably because it doesn't have systemd as a dependency).
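For reference, a minimal quadlet is just a systemd unit file, roughly like this (rootless: drop it in ~/.config/containers/systemd/web.container; the name and image are examples):

[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

Then systemctl --user daemon-reload and systemctl --user start web.service, and podman runs it under your existing systemd with no extra daemon.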
Please try to understand the podman ecosystem before lashing out.
If that's what you're hoping for, Ukraine is very grateful for volunteers. There's a bunch of Swedes who did go there to fight. There's a bunch who are there right now.
Or is it "somebody else" who has to show a backbone and take action?
You are right that there are more ways to support. But demanding or asking that somebody else does something is not doing something. A majority of the population consider themselves to be great heroes for making symbolic gestures or for telling their friends over coffee or drinks that "somebody really should do something".
But reality is reality and they have done nothing. They will never do anything voluntarily either – for any cause.
I understand. But I don't know if there's been any lacking in condemnation on his part. As far as I remember, most leaders of nations have condemned the war completely since day one. What do you wish for him to achieve by condemning it harder?
> Compiling shaders directly from a high level representation to the GPU ISA only really happens on consoles.
No, that's not correct. In fact, it's mostly the other way around. Consoles have known hardware and thus games can ship with precompiled shaders. I know this has been done since at least the PS2 era since I enjoy taking apart game assets.
While on PC, you can't know what GPU is in the consumer device.
For example, Steam has this whole concept of precompiled shader downloads in order to mitigate the effect for the end user.
> Consoles have known hardware and thus games can ship with precompiled shaders. I know this has been done since at least the PS2 era since I enjoy taking apart game assets.
That's what I said. Consoles ship GPU machine code, PCs ship textual shaders (in the case of OpenGL) or some intermediate representation (DXIL, DXBC, SPIRV, ...)
> Engineering isn't about working on the most interesting problems. It's about getting stuff done and management happy
The truth is harsh; however, this seems to be 100% accurate for nearly all cases of employment. Rarely do you get to focus on simply interesting problems and good engineering as a primary concern.
Boredom is in the mind, not the task. Things aren't boring, people are. An important type of intelligence is the capacity to find what's interesting about a task that others lack the imagination to see. One needs to be able to create their own interesting solutions rather than expecting them to be handed down on a plate.
> find what's interesting about a task that others lack the imagination to see.
word of warning from an old guy, don't create problems for yourself when you don't have to. Turning something boring into something interesting can have painful consequences down the road.
Boredom is lacking stimulation. Even the most cutting-edge task can grow boring if you have to plumb through it dozens of times over a year. Just because you can find what's unique doesn't mean it stimulates you. That's the exact issue with why neurodivergent individuals are demonized: they don't take as much interest in people as "normal people" would.
>One needs to be able to create their own interesting solutions rather than expecting them to be handed down on a plate.
> You should tell that to every old manager I had.
They sound like boreful people. If you find you have become boreful, there's a good chance you may be experiencing burnout or depression, which are nasty diseases, but still ones that afflict the subject, not the object. Nothing is boring but for the person who perceives it that way.
They have a job and family at the end of the day too. Not everyone has the power or the passion to try and move a billion dollar corporation to care for people over profit.
>If you find you have become boreful, there's a good chance you may be experiencing burnout or depression,
I haven't had a full-time job in over 18 months, so I hope I'm not burnt out.
>Nothing is boring but for the person who perceives it that way.
That's like saying nothing is ugly except for what people think is ugly. Boredom is personal and shaped by each person's experience, so I don't think framing it as the person not trying hard enough is going to get very far in practice (nor is it productive to judge people based on how they prioritize their lives). Of course some things will be boring to one person and a life passion for another. We don't have enough time nor energy to try and find appreciation for every object and concept on Earth.
You gotta go into R&D if you want to focus on the fun stuff without the annoying plumbing. But such positions require an entirely different pipeline from getting a SWE position out of college.
A good way to secure your position is to be the go-to expert for a product with many years of life ahead of it.
Fixing stuff on a legacy product may make management happy but if that product is discontinued next year then you haven't accrued technical expertise valuable to the company (but you may have built a reputation as a fixer and quick learner).
So, as usual, it is a balancing act.
Edit: this is my perspective from the embedded world. It probably applies generally, though.
When times get tight the new projects get shitcanned and the 10 year-old cash cow design gets the promised new features.
One crusty project I worked on was a legacy control board for a piece of restaurant equipment. The customer, the company that built the actual machine, had been building this product for 40 years. It had been through two PCB redesigns and two different microcontrollers, but the logic was tried and true and had to survive. A port of the project from 6800 assembly to C had completely gone off the rails and the contractor was dumped. All it took was a 20-opcode fix to a routine that the contractor just couldn't grok.
I wouldn't say that's the conclusion. If there's one true thing about work, it's this: management doesn't care about you. They can fire you for any reason, and thinking that working on stuff nobody else wants to work on makes you "safe" is an illusion.
If anything, the conclusion is: work on what you want, life is short.
Engineers have a special place in society like doctors and lawyers. Working with management is part of the job, but engineers have a professional ethical obligation to say no if they are asked to do something against the public good.
The split there isn’t in favor of doing stuff that’s fun and novel though; actually, the engineer should usually pick a boring proven solution if the public has a high stake in the outcome.
> Engineering isn't about working on the most interesting problems. It's about getting stuff done and management happy.
That's a perfectly reasonable thing to want out of engineering for yourself. I wouldn't state it as an absolute truth for all people though.
Personally, I'd like to be working on something that extends the state-of-the-art a little, even if only by a tiny fraction. It can be one for the other disciplines involved - it doesn't have to be the software I'm writing that is responsible for that (and it usually isn't), but that's what I derive satisfaction from.
Do you think that software becomes poorly understood and maintained because the company treats it as a prestigious job and rewards people for working on it?
This is how I got laid off. Working on legacy software, sole person on the team, eventually management decided that it could be replaced by AI or some such pixie dust.
Legacy software with a single dev can be on the fast track to getting shut down. If it was still a business priority, they'd be throwing more resources at it.
That's a very narrow definition of engineering. And while it's not wrong, it's absolutely more of a "management" POV. Like sure, for management, engineering is mostly about what you said, but that's it.
systemd is at its core an app for running services, such as containers.
You should read up on podman and systemd before making up more arguments.