Carefully but Purposefully Oxidising Ubuntu (jnsgr.uk)
35 points by simskij 10 months ago | 49 comments


I don't use but do like Rust. It's a step in the right direction. And yet, I'll be happy to stay on Debian with C coreutils...

> These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.

What safety issues were found in coreutils in recent memory? I'd be willing to bet that this transition will not solve more problems than it creates.

That said, I'd love for `fd` and `rg` and similar to be included in the standard list-of-things-available-anywhere. It's cool they're now in Debian-stable and don't have to be installed through whatever hacks!


I don't buy that uutils will be a meaningful improvement for security. Meanwhile, replacing the GPL-licensed coreutils with an MIT-licensed alternative is not a good look for Ubuntu's support of free software.


> I don't buy that uutils will be a meaningful improvement for security

Why? Because it hasn't been a consistent major vector for attack in the last couple of decades?


It could be argued that it's not a good look for Ubuntu's support of copyleft, but MIT license _is_ free software, as per the FSF definition of free software.


FSF defined “free software” licenses other than GPL are “free for me, fee for thee”.


If the goal is safety, rewriting ancient battle-tested software is not the way to do it.

The Lindy Effect for software: the longer software appears to be bug-free, the more likely it is bug-free.


With how often severe CVEs pop up years or even decades later, that is an unwise line of thinking.

In 2020, iOS suffered a zero-day exploit based on font library code from the '90s - a bug that sat dormant for 30 years.

As far as rewriting / rethinking core utilities, I'm glad that for example `doas` and `sudo-rs` exist to beef up root authentication.

https://googleprojectzero.github.io/0days-in-the-wild/0day-R...


True, but there is no reason that those battle-tested cases cannot be migrated into other projects by a mindful maintainer / author (not to say that language-specific issues cannot arise, yet rust vs C seems like a pond vs ocean scenario).


"with an increase in security comes an increase in overall resilience of the system"

This isn't within my realm of expertise.. but I find it a bit hard to believe that GNU Coreutils is a source of a lot of security and resilience issues with Ubuntu.. Is this true?

Does anyone in the know, know if Ubuntu has any answer to the Nix explosion?

I feel my Ubuntu problems are always weird package issues. The system doesn't make it very transparent to the user how to make your own packages, or how to edit and rebuild an existing one.

By contrast I've never had `ls`, `cp`, or `mv` freak out on me..

(I don't personally use Nix b/c I love the stability of Ubuntu LTS. Everyone and their mom makes sure their software can run on the latest LTS)


Their answer to Nix is snaps, which will never be the right answer to anything. They're stuck in an old mode of thinking and missed the boat.

NixOS or something very much like it is the future. I personally won't go back to the snarled mess of state that is traditional distros like Ubuntu.


NixOS builds on efforts of things like https://cloud-init.io/ no? Or was one before the other?

It's not a snarled mess if you understand what you are doing, imo. Though, when I first started using operating systems other than Windows (~1998), I was very confused and made many of the same mistakes new Linux users make. Actually, way worse, as there was no resolution to my mistakes (using Linux was a big one back in the day if you were on newer hardware).

I understand where the sentiment comes from. I just don't appreciate the conclusion or the leech-like entitlement of the community. We used to fix and push.


Nix has been around since 2003, NixOS just takes that to its logical conclusion. cloud-init may have been inspired by it (I don't know), but certainly not the other way around.

By snarled mess of state, I don't just mean the way it works on first install, I mean the big ball of mud that imperatively managing a system inevitably turns into, with bits of this and that config left behind.

Try playing around with a few different window managers to feel the pain. NixOS makes it easy to try out new config and revert to the old config without muss or fuss.


Debian definitely has packaging docs https://www.debian.org/doc/manuals/debmake-doc/index.en.html and it's not that hard to customize an existing package (at least I found it easier than using rpmbuild...).


> Nix explosion

I am reasonably sure that you can, not that you need to, use nix with Ubuntu?

I use Ubuntu (and other distros) because I respect the effort behind it.

Why does one need to stop for the other to exist?


Nix on Ubuntu would be using a parallel set of libraries/dependencies. They won't be linked against the stable versions of packages that were selected for the LTS.

I actually don't really know how that interplays with the rest of the system. If I build say... Konsole in Nix.. It seems like there would be no way for it to cleanly integrate with the rest of my KDE Desktop

The normal Nix version numbers aren't stable. They have a stable branch, but I don't think it lasts too long. And the "pinned" versions aren't as widely supported as Ubuntu LTS versions.


yeah, https://nixos.org/guides/how-nix-works/

> parallel set of libraries/dependencies

snaps do the same thing? kind of the whole idea?

> If I build say... Konsole in Nix.. It seems like there would be no way for it to cleanly integrate with the rest of my KDE Desktop

See previous link


Maybe I missed it... but that doesn't explain how it integrates with the host system (other than it doesn't?)

It also doesn't explain how your PATH is maintained. When you type 'pwd' into the terminal, which version from the nix store is used? There must be a master copy or master symlink. You're not typing hashes all the time

Snaps have their issues, but if you run 'firefox' there is only one snap. The versions are not guaranteed to play nice with the distro app versions though. That's kind of why they can't snap everything

PS: There are actually nuances that it seems Nix doesn't handle clearly. If libA requires libB and libC, and libB and libC both require libD at different version numbers, then you can't safely link two libD versions (short of name mangling). You need to build libB and libC against some version of libD that works for both. Maybe you specify a set of workable dependency versions and let the system resolve them? Seems very messy. Ubuntu just gives you the version numbers and you need to patch the code to work with them.
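For what it's worth, the `pwd` question has a mundane answer: your PATH contains one stable entry (the profile's bin directory), and each name in it is a symlink into a hashed store path, so you never type hashes. A rough sketch of that mechanism using plain symlinks - the store path below is invented for illustration, real ones live under /nix/store/&lt;hash&gt;-&lt;name&gt;:

```shell
set -e
root=$(mktemp -d)
mkdir -p "$root/store/abc123-coreutils-9.4/bin" "$root/profile/bin"

# A stand-in for /nix/store/<hash>-coreutils/bin/pwd
printf '#!/bin/sh\necho fake-pwd\n' > "$root/store/abc123-coreutils-9.4/bin/pwd"
chmod +x "$root/store/abc123-coreutils-9.4/bin/pwd"

# The profile is the "master symlink" layer: one plain name, no hash.
ln -s "$root/store/abc123-coreutils-9.4/bin/pwd" "$root/profile/bin/pwd"

# Only the profile dir goes on PATH; an upgrade just repoints the
# symlink at a new store path, so switching/rollback is atomic.
"$root/profile/bin/pwd"          # runs the hashed store copy
readlink "$root/profile/bin/pwd" # reveals the hashed store path
```

The diamond-dependency case is also why the store is hashed: two libD versions get two different store paths, and libB and libC can each link against their own.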


Ubuntu (via Debian) already had a system for this in `update-alternatives`, which is almost certainly already installed on this person's system.

Would it be possible to toggle coreutils/uutils via that instead of reinventing it? They're both effectively symlinks-managers.
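A rough sketch of what that toggle could look like with update-alternatives. This uses scratch directories (`--altdir`/`--admindir`/`--log`) so it runs without root; the `myls` group name and both stand-in binaries are invented for illustration, not Ubuntu's actual packaging:

```shell
set -e
scratch=$(mktemp -d)
mkdir -p "$scratch/bin" "$scratch/alt" "$scratch/admin"

# Two stand-ins for GNU ls and uutils ls.
printf '#!/bin/sh\necho gnu-ls\n'    > "$scratch/gnu-ls"
printf '#!/bin/sh\necho uutils-ls\n' > "$scratch/uu-ls"
chmod +x "$scratch/gnu-ls" "$scratch/uu-ls"

ua="update-alternatives --altdir $scratch/alt --admindir $scratch/admin --log $scratch/log"

# Register both providers; in auto mode the higher priority wins.
$ua --install "$scratch/bin/myls" myls "$scratch/gnu-ls" 50
$ua --install "$scratch/bin/myls" myls "$scratch/uu-ls" 40

"$scratch/bin/myls"                 # auto mode: priority 50, gnu-ls

# Switch manually to the other implementation.
$ua --set myls "$scratch/uu-ls"
"$scratch/bin/myls"                 # now uutils-ls
```

The catch, noted downthread, is that both packages would have to register their binaries in the same alternatives group for this to work.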


They said elsewhere that they're not doing that yet because it requires coordination with the other package, and they don't want to bother until they're more confident that this doesn't utterly break things.


> `update-alternatives`

Also has at least one rewrite in rust:

https://github.com/Gregory-Meyer/update-alternatives


The alternatives system requires both packages to participate. In this case, a better alternative would be diversions.


Unfortunately,

https://www.debian.org/doc/debian-policy/ap-pkg-diversions.h...

> Do not attempt to divert a file which is vitally important for the system’s operation - when using dpkg-divert there is a time, after it has been diverted but before dpkg has installed the new version, when the file does not exist.

I doubt that diversions are safe for coreutils. Now personally I'd like to know if that could be fixed instead of making a new tool, but /shrug


As long as the tools work the same way as GNU coreutils, I mostly don’t care which language they’re written in.

One reason I never adopted tools like exa or bat is that the first deviates from ls flags, and the second changes cat’s buffering behavior. The yield from changing the behavior of these fundamental tools isn’t worth the cost.

Also, if memory safety and security—not just performance—are the reasons to rewrite them in Rust, we could just as well write them in Go and get better community engagement. These are mostly CLI tools, so Go’s performance would be more than sufficient. I don’t really buy the Rust evangelism here.


Go's insistence on not using libc unless you have cgo enabled seems somewhat problematic from a distro maintainer perspective.


I think more people might adopt this language if there weren't such in-group terminology like "oxidizing", "Rusteceans", pictures of rusty equipment etc.

I still wish for a serious book in the style of K&R or Stroustrup with no pictures, no mentioning of the package manager at all (sometimes it is mentioned in the first chapter!) and interesting code examples.


>I think more people might adopt this language if there weren't such in-group terminology like "oxidizing", "Rusteceans", pictures of rusty equipment etc.

I write Rust and I agree. Hype is the downside of attracting trendiness.


The package manager is an integral part of how the language is intended to be used in everyday practice, so it makes sense that it would be mentioned in the book, though.


So if I am on a plane at 36,000 feet, I cannot start a new hobby project without paying for WiFi and downloading shit from people I don’t know or trust? That is just unserious, when your competition is the much more mature gcc+make+vim combo, which is 100% offline and self-contained.

And yes, I’ve done this (start a new project in a plane with no internet). Many times. eg: the world’s only PalmOS 5 device emulator (uARM-palm) was started while I was in a plane.


You can start a project just fine? Cargo does not require a connection to function - you only need one when it’s time to pull in external dependencies.

And if you really dislike Cargo, nothing is stopping you from using rustc + make directly.


If your hobby project consumes libraries which are not already locally available on your machine, this same restriction would also apply when using gcc/make/vim. (e.g. say you want to use zlib, you'll need the zlib-devel package)

If the packages are already available locally, cargo works offline.


Pretty absurd conflation you've got there.

The package manager just offers a common interface for interop, you can still build without dependencies.


Haha, nah. I think hackers have been naming things like this since the beginning of time and it's absolutely part of the culture.

I don't think everything needs to be corporate Memphis in text form. Let's have some squirrelfish extreme in this.

Corporate mode comes eventually with quality. And Rust has passed the initial hurdle of adoption. It's pretty much what Python was in the year 2000: the exciting new tech that has adoption.


We tend to forget, being so used to them, but both C and C++ have playful names (C is a successor to B, and C++ is an incremented version of C)

UNIX also has a playful name (a pun on Multics)


What is “corporate Memphis”?


It's an art style that you'd recognize from thousands of slapped together 2000s web sites and PowerPoint slides. It's flat and colorful and a bit blobby. It has become a watchword for bland, boring, and approved by upper management.


A mysterious company in Illinois. If you sail your boardroom past Cairo there it is.


> no pictures

Damn, never heard of someone passing on a language because of pictures. First time for everything I suppose.


The reason I gravitated to Rust is that it wasn't C++.

C++ has too many footguns and too much crap that makes it terrible... I'm looking squarely at template compiler output, and also CMake.

So, get on board or don't, but don't try to make it C++ again.


Have you seen AAA gaming industry?


Why?


There are advantages to Rust over C, but I'm not convinced they are enough to outweigh ~35 years of development, testing, and production use of some of the GNU coreutils. I also do not think the GNU Coreutils have any problems whose only possible solution is to rewrite them from scratch, throwing away all of that hard work and the lessons learned.


If you're gonna maintain it for the next 30 years, then sure.


The name of this piece of software makes me unreasonably happy.


It's kind of odd that this same article gets two separate front page threads in 4 days. It's a repost. https://news.ycombinator.com/item?id=43353240


It's because it has two URLs, so HN doesn't realize they're the same content.


When is the Rust rewrite of Juju coming?


It's always surprising to hear that people use this tool


Nah nobody outside Canonical uses it. Just sarcastically wondering why they don’t just abandon the thing.


When I read Juju I thought you were talking about JuJu Densetsu xd.



