But would that have solved anything here? The main maintainer was overwhelmed. The back door was obfuscated inside a binary blob that was ostensibly there for testing purposes. I doubt anyone was reviewing the binary blobs or the autoconf code used to load it, and for that matter it's not clear anything was getting reviewed at all. Fetching and building straight from GitHub doesn't solve that if the malicious actor simply puts the binary blob into the repo.
The odds may not be high, depending on the project in question, but someone randomly clicking through commits is still far more likely to find a backdoor committed to a git repo than one buried in autogenerated text in a tarball. I click around random commits of random projects I'm interested in every now and then, at least. At the very least it changes the attack from "yeah, no one's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
A binary blob by itself is harmless; you need something to copy it over from the build environment into the final binary. So it's "safe" to ignore binary blobs that you're sure the build system (which should be entirely human-written and human-readable, and a small portion of the total code in sane projects) never touches.
That said, of course, there are still many ways this can go wrong: some projects commit autogenerated code; bootstrapping can bring in a much larger surface area of things that might copy the binary blob; and more.
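The "build system never touches the blob" check above can even be mechanized crudely: list the binary fixtures, then grep the build-system files for any mention of them. A minimal sketch, assuming the build system lives in `m4/` and `configure`; every path and file name here is made up for illustration, not taken from any real project:

```shell
#!/bin/sh
# Hypothetical audit sketch: flag any binary test fixture that a
# build-system file refers to by name. The repo layout below is fabricated.
set -eu

repo=$(mktemp -d)
mkdir -p "$repo/tests/files" "$repo/m4"
printf '\177ELF' > "$repo/tests/files/good-1.bin"  # inert fixture, never referenced
printf '\177ELF' > "$repo/tests/files/bad-3.bin"   # fixture the build script reads
echo '# ordinary configure script' > "$repo/configure"
cat > "$repo/m4/build-helper.m4" <<'EOF'
# suspicious: a build-time script reaching into the test data
blob=$srcdir/tests/files/bad-3.bin
EOF

# For each blob, report it if any build-system file mentions its name.
suspects=""
for blob in "$repo"/tests/files/*; do
    name=$(basename "$blob")
    if grep -rq -- "$name" "$repo/m4" "$repo/configure"; then
        suspects="$suspects $name"
        echo "SUSPECT: build system references $name"
    fi
done
```

A name-based grep like this is obviously easy to evade (the loader can construct the path at build time), so it's a tripwire for the lazy case, not a defense; its real value is the same as the commit-browsing argument above, raising the odds that someone stumbles over the link between blob and build script.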
> At the very least it changes the attack from "yeah, no one's discovering this code change" to "let's hope no random weirdo happens to click into this commit".
There's also value in leaving a trail that makes auditing easier in the event that an attack is noticed, or even if there is merely suspicion that something might be wrong. More visibility into internal processes, and a UX that makes it easier to sort through the details, can easily make the difference between discovering an exploit and overlooking it.