I never questioned or thought twice about F-Droid's trustworthiness until I read that. It makes it sound like a very amateurish operation.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
A "single server" covers a pretty large range of scale, its more about how F-droid is used and perceived. Package repos are infrastructure, and reliability is important. A server behind someone's TV is much more susceptible to power outages, network issues, accidents, and tampering. Again, I don't know that's the case since they didn't really say anything specific.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either; I assumed it meant someone's home, AKA residential power and internet.
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about the backup setup they have.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
Even if it's just the build server, it's really hard to defend just having 1 physical server for a project that aspires to be a core part of the software distribution infrastructure for thousands of users.
The build server going down means that no one's app can be updated, even for critical security updates.
For something that important, they should aspire to 99.999% ("five nines") reliability, which works out to only about five minutes of downtime per year (quick arithmetic below). With a single physical server, achieving five nines over a long period of time usually means that you were both lucky (no hardware failures other than redundant storage) and probably irresponsible (applied kernel updates infrequently, even if only at the hypervisor level).
Now... 2 servers in 2 different basements? That could achieve five nines ;)
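For scale, here's the back-of-the-envelope in Python. This is just arithmetic on availability targets, nothing specific to F-Droid's actual setup:

```python
# Downtime budget per year for common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    allowed_downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.5f}): {allowed_downtime:.1f} min/year")

# five nines comes out to roughly 5.3 minutes per year --
# less than one reboot-for-kernel-update on a single physical box.
```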
> It makes it sound like a very amateurish operation.
Wait until you find out how every major Linux distribution, and most of the software that powers the internet, is maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
Yet most distros have maintainers build and sign their own package recipes and/or artifacts on their own random home workstations, infected with who knows what, so the trust is distributed (but not decentralized), which is the worst of all worlds. And that is for the ones that bother with maintainer signing at all; distros like nix and alpine skip even the bare minimum of supply chain security.
Some distros do build on a centralized machine, but almost always one that many maintainers have access to from their workstations, so once again any single compromised home computer backdoors everything.
The trust model of the Linux distros that power most servers on the internet is totally yolo, without the funding to even approach doing build and release right, let alone code review. One compromised maintainer workstation burns it all to the ground.
Sorry if this ruins anyone's rosy worldview. The internet is fragile as hell, and one bored teen away from another Slammer-worm-style meltdown.
Relevant context: I founded stagex exactly because no previous Linux distribution has a decentralized trust story appropriate for production use hosting public internet services.
Once you decentralize supply chain trust, the question of "which place and people do we trust for the one holy server" totally goes away.
Once supply chain attacks enter your threat model, you suddenly realize that the entire internet breaks if any one of a few hundred volunteer-owned home computers is compromised.
Fixing this requires universal reproducible builds, redundantly built and signed on independently controlled hardware. Once you have that, you no longer have single points of failure, so the cost of a centralized high-security colo becomes a moot issue. A rough sketch of the idea is below.
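To make that concrete, here's a minimal Python sketch of the quorum idea, with hypothetical builder names and placeholder digests; it is not any distro's actual tooling. The point is that a client only trusts an artifact when several independently controlled builders reproduce the same digest, so no single compromised machine can push a backdoor on its own:

```python
# Quorum check over reproducible-build attestations (hypothetical data model):
# each independent builder publishes the sha256 of the artifact it produced,
# and we only accept a digest that at least `threshold` builders agree on.
from collections import Counter

def quorum_digest(attestations: dict[str, str], threshold: int = 2) -> str | None:
    """Return the artifact digest agreed on by >= threshold independent
    builders, or None if no digest reaches the quorum."""
    if not attestations:
        return None
    counts = Counter(attestations.values())
    digest, votes = counts.most_common(1)[0]
    return digest if votes >= threshold else None

# Example: three hypothetical builders, one of them compromised.
attestations = {
    "builder-a": "3e1f...aa",    # placeholder digests
    "builder-b": "3e1f...aa",
    "builder-c": "deadbeef...",  # disagrees -> outvoted, not trusted alone
}
print(quorum_digest(attestations, threshold=2))  # -> "3e1f...aa"
```

In practice the digests would presumably come from signed attestations rather than a plain dict, but the quorum check is the core of why independently controlled builders remove the single point of failure.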