Well, I mean, this is pretty similar to musl's approach of randomizing memory offsets when the binary starts up, except that there's a lot of obfuscation added on top.
Another mechanism could be to use goroutines to modify memory and pointers/references to a slice held by the main process, since that would add another layer of noise. You could also make it so that only a magic value in that memory leads to successful API calls, while any other value just throws errors further down the line that get ignored/stubbed. That would make debugging it effectively impossible.
And, of course, use garble to obfuscate symbols. Maybe the goal of a project like this could be an "unexploitable" binary?
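A minimal Go sketch of that goroutine/magic-value idea, purely illustrative (the slot count, the constant, and the function names are all made up): a goroutine keeps rewriting a slice the main code holds, one magic slot gates the "real" API call, and every other value just produces an error the caller swallows.

    // Hypothetical sketch only: a goroutine churns a slice the main code holds,
    // and a single magic slot decides whether secretCall does real work.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "sync"
        "time"
    )

    const (
        slots      = 64
        magicIndex = 17               // which slot holds the real gate value
        magicValue = 0xDEADBEEFCAFE42 // anything else makes calls fail quietly
    )

    var (
        mu    sync.Mutex
        noise = make([]uint64, slots)
    )

    // churn rewrites the slice with garbage forever, restoring the magic slot,
    // so anything watching the memory sees constant, mostly meaningless churn.
    func churn() {
        for {
            mu.Lock()
            for i := range noise {
                noise[i] = rand.Uint64()
            }
            noise[magicIndex] = magicValue
            mu.Unlock()
            time.Sleep(time.Millisecond)
        }
    }

    // secretCall only does real work if the magic slot is intact; otherwise it
    // returns an error the caller is expected to swallow.
    func secretCall() (string, error) {
        mu.Lock()
        ok := noise[magicIndex] == magicValue
        mu.Unlock()
        if !ok {
            return "", errors.New("nothing to see here")
        }
        return "real API result", nil
    }

    func main() {
        go churn()
        time.Sleep(10 * time.Millisecond) // let the churner initialize the slot
        if out, err := secretCall(); err == nil {
            fmt.Println(out)
        }
    }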
Has anyone compiled and used Go (cmd/compile) with musl?
If IBM weren't hostile to the Hercules project and allowed local licensing to run z/OS, CICS, IMS and DB2 on it, perhaps more hobbyists would want to build a career path onto the s390 architecture.
I do love the s390 arch and the massive I/O hardware over there, but IBM has paywalled entry so hard that there is no audience.
They even went to the trouble of making Go binaries transportable for direct execution under z/OS. But if you want new people to write code on the platform you need to make access to the platform a thing.
Former P-series/AIX SME here. I agree 100%. IBM's training programs are a joke.
P-series and mainframe machines have a lot of cool tech, and they're very resilient. They can even lose a CPU or some RAM and keep running. x86 systems would more likely freeze/crash immediately.
But the only reason I know P-series/AIX at all is because one small branch of IBM hired me for my Linux skills back in 2011, and I learned on the job. I quit after 5 years because the pay wasn't sustainable. The machines are too expensive to play around with otherwise. If you learn by doing (which seems vital to being a good sysadmin or programmer), even a license to use AIX is out of the hobbyist's price range. Training courses are limited lab environments; you won't get nearly as much out of them as you would from a 12-month AWS subscription, a $5/month VPS, an x86 virtual machine, a Raspberry Pi, and so on.
And IBM ended their developer machine licensing. So now employers can't even afford to maintain extra P-series machines for devs/sysadmins to play around with and learn.
But don't worry, IBM will keep shooting their feet off until they no longer exist. There will likely be a panic, similar to Y2K, where everyone's feverishly re-writing and porting and emulating and migrating things off of IBM iron and onto x86 machines.
That is a very good point. Why buy big iron "pets" when you can buy x86 "cattle"? It works for a lot of stateless apps and services. A node dies, and k8s quickly moves the workloads to new nodes.
It does take some work to accomplish this on more complex apps, though. Things like SQL databases and rabbitmq are very often single points of failure in practice. At smaller companies, it's often easier to stick them on more resilient hardware than to architect an active/active or active/failover system. I agree that this isn't the best way to do it, and IMHO any important service should have HA of some sort.
That said, I use x86 based machines for all my personal projects, and I wouldn't buy IBM systems if I owned a tech startup just because they're like 10x the price of x86.
IBM: Footgunning before the XT and before the Republican conformity dress code when it was cool to enable genocidal regimes for money. IBM is the quintessential big, dumb company that innovates in spurts and then is a perpetual loser because of all of its corporate bs.
And now that they bought RedHat, they're eagerly dropping support for all linux distros that are not RedHat, and they're attempting to lock out anyone who wants a personally affordable RedHat-like system (CentOS...). If they succeed, it may even be hard to find decent RedHat sysadmins in the future.
It's hard to keep up a trained workforce when they lock the doors so tight.
Yep. In the greedy drive to monetize the Metas and Motorolas running CentOS by forcing them onto RHEL, they're pulling a grenade pin and daring people to stay in the boat. Instead, they're alienating most of their future business of potential users and decision makers by trying to be a worse Oracle while disrespecting their users.
The value in something like CentOS/RHEL is LTS stability. Really, the only valuable part of CentOS/RHEL is the stable kernel, because the userland rapidly becomes old and useless. Something like Ubuntu with a RHEL-quality kernel, a 10-year support lifecycle, and no separate FOSS/commercial split would be superior to either RHEL or Ubuntu. Canonical went full shark-jumping Mickey Mouse with their "Pro" subscription and their model of withholding patches unless you pay.
The mainframe is not a special thing anymore, hasn't been since the late 90s. It's just a server box.
I work at a shop with a z14. I would love it if we finished the last COBOL retirements and could go back to the mainframe, but this time to run container farms and fresh Go code, and to use the power to run far deeper matrices of tests that take days to run locally and that we can't afford to run on AWS.
IBM could sell the future of on-premises z/xxx boxes as a "datacenter in one rack".
Running x86 in z/VM has been a discussion for 25 years. Just fucking do it. Let people run whatever they want.
Just as people are excited about ARM for low-watt computing, make s390x just as exciting for people who want insane vertical resources but using the same dev tools that are used now for easy x86/ARM crossover.
But IBM culture has always been about overcharging a small and rich audience, and now they are sitting around hawking their services co-hosted through AWS, while everyone who still has COBOL and 360 ASM running is doing retirements with no plans to use the boxes after it's all unloaded.
> Just as people are excited about ARM for low-watt computing, make s390x just as exciting for people who want insane vertical resources but using the same dev tools that are used now for easy x86/ARM crossover.
The funny thing is that AIUI the technology is already basically there. They actually did throw the resources into getting a lot of Open Source software to be compatible with s390x, they've got Linux LPARs and LinuxONE. My understanding is that they just... don't make any effort to sell, outside a tiny fraction of Enterprise™.
Performance per dollar and per watt of IBM machines is wholly out of step with the market. As such, there's no motivation to re-use those machines after migrating off z/OS or other IBM solutions anyway...
even easier is to STOP HOSTING SSHD ON IPV4 ON CLEARNET
at minimum, go IPv6-only if you absolutely must do it (that alone cuts the scans way down)
better is to only host it on vpn
even better is to only activate it with a portknocker, over vpn
even better-better is to set up a private IPv6 peer-to-peer overlay and socat/relay to that private IPv6 network (yggdrasil comes to mind, but there are other darknet options)
the sshd you need for server maintenance/scp/git/rsync should never be hosted on IPv4 clearnet, where a Chinese bot will find it 3 seconds after the route is established at boot.
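If you go the port-knocker route, the mechanism is small enough to sketch. A toy Go version under made-up assumptions (three hypothetical knock ports; it only logs when a source IP completes the sequence, where a real knocker would open a firewall hole or start sshd):

    // Toy port-knock listener: a client that connects to the knock ports in the
    // right order within the window is considered "knocked in". Ports and the
    // action taken are placeholders.
    package main

    import (
        "fmt"
        "log"
        "net"
        "sync"
        "time"
    )

    var knockPorts = []string{"7001", "7002", "7003"} // hypothetical sequence

    const window = 10 * time.Second

    type progress struct {
        next int       // index of the next expected port
        seen time.Time // time of the last correct knock
    }

    var (
        mu    sync.Mutex
        state = map[string]*progress{}
    )

    func handleKnock(portIndex int, remote net.Addr) {
        host, _, err := net.SplitHostPort(remote.String())
        if err != nil {
            return
        }
        mu.Lock()
        defer mu.Unlock()
        p := state[host]
        if p == nil || time.Since(p.seen) > window || portIndex != p.next {
            // Wrong port, wrong order, or expired window: start over.
            p = &progress{}
            state[host] = p
            if portIndex != 0 {
                return // only the first port in the sequence starts a new attempt
            }
        }
        p.next = portIndex + 1
        p.seen = time.Now()
        if p.next == len(knockPorts) {
            log.Printf("knock sequence completed by %s: open firewall/start sshd here", host)
            delete(state, host)
        }
    }

    func listenOn(portIndex int, port string) {
        l, err := net.Listen("tcp", ":"+port)
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := l.Accept()
            if err != nil {
                continue
            }
            handleKnock(portIndex, conn.RemoteAddr())
            conn.Close()
        }
    }

    func main() {
        for i, port := range knockPorts {
            go listenOn(i, port)
        }
        fmt.Println("knock daemon listening on", knockPorts)
        select {} // run forever
    }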
How about making ssh as secure as (or more secure than) the VPN you'd put it behind? Considering the amount of vulnerabilities in corporate VPNs, I'd even put my money on OpenSSH today.
It's not like this is SSH's fault anyway, a supply chain attack could just as well backdoor some Fortinet appliance.
Defence in depth. Which of your layers is "more secure" isn't important if none are "perfectly secure", so having an extra (independent) layer such as a VPN is a very good idea.
You have to decide when to stop stacking, otherwise you'd end up gating access behind multiple VPNs (and actually increasing your susceptibility to hypothetical supply-chain attacks that directly include a RAT).
I'd stop at SSH, since I don't see a conceptual difference to how a VPN handles security (unless you also need to internally expose other ports).
OpenSSH has a much smaller attack surface, is thoroughly vetted by the best brains on the planet, and is privilege separated and sandboxed. What VPN software comes even close to that?
The only software remotely in the same league is a stripped down Wireguard. There is a reason the attacker decided to attack liblzma instead of OpenSSH.
I imagine it stops some non-targeted attempts that simply probe the entire v4 range, which is not feasible with v6. But yeah, not really buying you much, especially if there is any publicly listed service on that IP.
If you have password authentication disabled then it shouldn't matter how many thousands of times a day people are scanning and probing sshd. Port knockers, fail2ban, and things of that nature are just security by obscurity that don't materially increase your security posture. If sshd is written correctly and securely it doesn't matter if people are trying to probe your system, if it's not written correctly and securely you're SOL no matter what.
This does not matter either. The attack came in via libsystemd pulling liblzma into the sshd process. It puts a hook in place, then sits around waiting for sshd's symbols to be loaded so it can learn them and swap in its jumps.
sshd is a sitting duck. Bifurcating sshd into a multi-module scheme won't work, because some part of it still has to link against libsystemd.
This is a web-of-trust issue. In the .NET world, where reflection attacks happen to commercial software that dynamically loads assemblies, the only solution they could come up with is to sign all the things, then box up anything that doesn't have a signing mechanism and sign that too, even plain old zip files.
Some day we will all have to have keys. To keep the anonymous people from leaving, they can get an anon key, but anons with keys will never get onto the chain where the big distros would trust their commits, not until someone who forked over their passport and photos for a trustable key signs off on those commits, so that the distro builders can then greenlight pulling them in.
Then, I guess to keep the anons hopeful that they are still in the SDLC somewhere, their commits can go into the completely untrusted-unstable-crazytown release that no institution in its right mind would ever lay down in production.
Any one of us, if we sat on the OpenSSH team, would flip the middle finger. What code is the project supposed to write when nothing in mainline dynamically loaded liblzma? It was brought in by a patch they have no realistic control over.
This is a Linux problem, and the problem is systemd, which is what brought the lib into memory and init'd it.
I think the criticisms of systemd are valid but also tangential. I think Poettering himself is on one of the HN threads saying they didn't need to link to his library to accomplish what they sought to do. Lzma is also linked into a bunch of other critical stuff, including but not limited to distro package managers and the kernel itself, so if they didn't have sshd to compromise, they could have chosen another target.
So no, as Poettering claimed, sshd would not have been hit by this bug except for this systemd integration.
I really don't care about "Oh, someone could have written another compromise!". What allowed this compromise was a direct inability of systemd to reliably do its job as an init system, necessitating a patch.
And Redhat, Fedora, Debian, Ubuntu, and endless other distros took this route because something was required, and here we are. Something that would not be required if systemd could actually perform its job as an init system without endless workarounds.
Also see my other reply in this thread, re Redhat's patch.
I just went and read https://bugzilla.redhat.com/show_bug.cgi?id=1381997 and actually seems to me that sshd behavior is wrong, here. I agree with the S6 school of thought, i.e. that PID files are an abomination and that there should always be a chain of supervision. systemd is capable of doing that just fine. The described sshd behavior (re-execing in the existing daemon and then forking) can only work on a dumb init system that doesn't track child processes. PID files are always a race condition and should never be part of any service detection.
That said, there are dozens of ways to fix this and it really seems like RedHat chose the worst one. They could have patched sshd in the other various ways listed in that ticket, or even just patch it to exit on SIGHUP and let systemd re-launch it.
I'm not the type to go out of my way to defend systemd and their design choices. I'm just saying the severity of this scenario of a tainted library transcends some of the legit design criticisms. If you can trojan liblzma you can probably do some serious damage without systemd or sshd.
Of course you can trojan things in other ways, but in this thread that point is only ever raised in defense of systemd.
After all, what you're saying is and has always been the case! It's like saying "Well, Ford had a design flaw in this Pinto, and sure 20 people died, but... like, cars have design flaws from time to time, so an accident like this would've happened eventually anyhow! Oh well!"
It doesn't jibe in this context.
Directly speaking to this point, the patched sshd was chosen for a reason: it was the lowest-hanging fruit with the greatest reward. Your speculation about other targets isn't unwarranted, but at the same time it's entirely unvalidated.
Why avoid this? Well, it adds more systemd-specific bits and a new build dependency to something that has worked well under other init systems without any problems for years.
They chose the worst solution to a problem that had multiple better solutions, because a pre-existing patch was the easiest path forward. That's exactly what I'm talking about.
Tell us all, please, how the initial vector of this attack would affect a statically compiled dropbear binary, even with systemd's libsystemd pwnage? I am very curious about your reasoning.
The fact that the whole reason this library is even being pulled into the sshd daemon process is some stupid stuff like readiness notification, which is itself utterly broken on systemd by design (and thus forever unfixable), makes this even more tragic.
Don't bury your head in the sand just because of the controversial nature of the topic. Systemd was VERY accommodating in this whole fiasco.
The saddest part of all this is that we know how to do better, at least since Bernstein, OpenBSD, and the supervision community (runit/s6) solved it. Yet somehow we see the same mistakes repeated again and again.
I.e., you fork and run a little helper to write, or directly write a single byte(!), to a supervisor-provided fd to notify the supervisor. That even lets you privilege-separate your notifier or do all the cute SELinux magic you need.
But that would be too simple, I guess, so instead we link something like 10 completely unrelated libraries, liblzma being one of them, into sshd, one of the most crucial processes on the machine, just to notify the supervisor that it's ready. Sounds about right, Linux distros (and very specific ones at that).
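For comparison, the supervisor-fd approach really is that small. A rough Go sketch of s6-style readiness notification; the fd number 3 is a placeholder for whatever the service's notification-fd file declares:

    // Sketch of s6-style readiness notification: the daemon signals readiness by
    // writing a newline to a supervisor-provided fd. The fd number used here is
    // just a placeholder for whatever the service is configured with.
    package main

    import (
        "log"
        "os"
    )

    func notifyReady(fd uintptr) {
        f := os.NewFile(fd, "readiness-fd")
        if f == nil {
            return // no such fd: not running under a supervisor that provides one
        }
        defer f.Close()
        if _, err := f.Write([]byte("\n")); err != nil {
            log.Printf("readiness notification failed: %v", err)
        }
    }

    func main() {
        // ... bind sockets, load keys, etc. ...
        notifyReady(3) // placeholder fd number
        // ... serve ...
    }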
sshd should be sacred: it should need nothing more than libc and some base crypto libraries (I don't even remember whether it still needs any libssl).
Another great spot to break sshd is PAM, which has no business being there either. Unfortunately it's a hard dependency on most Linux distros.
Maybe sshd should adopt the kernel's taint approach: as soon as any weird libraries (i.e., anything that isn't libc or the crypto libs) are detected in the sshd process, it should consider itself tainted. Maybe even seppuku itself.
The exploit could probably have been pulled off somehow without systemd, but it would have been much, much harder.
Don't try to obfuscate that very fact from the discussion.
The sd-notify protocol is literally "Read socket address from environment variable, write a value to that socket". There's no need to link in libsystemd to achieve this. It's unreasonable to blame systemd for projects that choose to do so. And, in fact, upstream systemd has already changed the behaviour of libsystemd so it only dlopen()s dependencies if the consumer actually calls the relevant entry points - which would render this attack irrelevant.
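To put that in concrete terms, here is roughly what the readiness half of the protocol amounts to in Go, with no libsystemd involved; a minimal sketch that skips the rest of sd_notify (status strings, FDSTORE, and so on):

    // Minimal sd-notify client: read the socket address from $NOTIFY_SOCKET and
    // write "READY=1" to it as a single datagram. NOTIFY_SOCKET may be an
    // abstract socket (starting with "@"), which Go's net package handles.
    package main

    import (
        "net"
        "os"
    )

    // notifyReady reports whether the notification was sent; it returns false if
    // we're not running under systemd (Type=notify) or the send failed.
    func notifyReady() bool {
        addr := os.Getenv("NOTIFY_SOCKET")
        if addr == "" {
            return false
        }
        conn, err := net.Dial("unixgram", addr)
        if err != nil {
            return false
        }
        defer conn.Close()
        _, err = conn.Write([]byte("READY=1"))
        return err == nil
    }

    func main() {
        // ... finish initialization, bind listeners, etc. ...
        notifyReady()
        // ... main loop ...
    }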
> Another great spot to break sshd is PAM, which has no business being there either. Unfortunately it's a hard dependency on most Linux distros.
There are many things to hate about PAM (it should clearly be a system daemon with all of the modules running out of process), but there's literally no universe where you get to claim that sshd should have nothing to do with PAM - unless you want to plug every single possible authentication mechanism into sshd upstream you're going to end up with something functionally identical.
Yeah, goroutines are great. Then add something like WebRTC, which realistically tops out at around 10,000 listeners, to your project, and people wonder why Twitter Spaces is so buggy...
// The "-2" is included because the for-loop will
// always increment by 2. In this case, we want to
// skip an extra 2 bytes since we used 4 bytes
// of input.
i += 4 - 2;
From what I read on Mastodon, the original maintainer had a personal-life breakdown, etc. Their interest in staying on as primary maintainer is gone.
This is a very strong argument for FOSS to pick up the good habit of ditching/un-mainlining projects that are just sitting around waiting for state actors to volunteer and inject commits, and of stripping active projects of dependencies on this cruft.
Who wants to keep maintaining a shitty compression format? Someone who is hunting for a dependency to subvert, it turns out.
Okay, so your pirate-torrent person needs liblzma.so. Offer it in the scary/oldware section of the package library that you need to hunt down instructions to turn on. Let the users see that it's marked as obsolete; enterprises will see that it should go on the banlist.
Collin worked on XZ and its predecessor for ~15 years. It seems that he did it for free, at least in recent times. Anyone would lose motivation working for free over that period of time.
At the same time, XZ became a cornerstone of major Linux distributions, being a systemd dependency and loaded, in particular, as part of sshd. What could go wrong?
In hindsight, the commercial idea of Red Hat, utilizing the free work of thousands of developers working "just for fun", turned out to be not so brilliant.
On the contrary, this is a good example for why 'vulnerable' OSS projects that have become critical components, for which the original developer has abandoned or lost interest, should be turned over to an entity like RedHat who can assign a paid developer. It's important to do this before some cloak and dagger rando steps out of the shadows to offer friendly help, who oh by the way happens to be a cryptography and compression expert.
A lot of comments in this thread seem to be missing the forest for the trees: this was a multiyear long operation that targeted a vulnerable developer of a heavily-used project.
This was not the work of some lone wolf. The amount of expertise, research, and coordination needed to execute this required hundreds of man-hours. The culprits likely had a project manager...
Someone had to stake out OSS developers to find out who was vulnerable (the xz maintainer had publicly disclosed burnout/mental health issues); then the elaborate trap was set.
The few usernames visible on GitHub are like pulling a stubborn weed that pops up in the yard... until you start pulling on it you don't realize the extensive reality lying beneath the surface.
The implied goal here was to add a backdoor into production Debian and Red Hat EL. Something that would take years to execute. This was NOT the work of one person.
Um, what? This incident is turning into such a big deal because xz is deeply ingrained as a core dependency in the software ecosystem. It's not an obscure tool for "pirates."