
Look into the cheap Sceptre displays. They make non-smart panels with decent resolution and performance, so long as you have external speakers and use the optical audio output instead of the output from their garbage DAC.


Time will tell, but in theory the Volts should last a really long time also. The generator only directly powers the drivetrain at highway speeds, and in the gen-1 they were really conservative with allowed pure electric range on the battery.

I'm planning to drive mine until it dies, and suspect that salty winter Midwestern roads will render it unsafe/broken before anything else does.


"Error: cannot start (no connectivity). Please call a licensed repair technician to service your vehicle."


In case you're not aware, this is a thing that's happened (well, without the error message, I assume):

https://www.theatlantic.com/ideas/archive/2019/09/zipcar-int...


Too helpful. More like "Failed with error 0xC0000005".

That's a real error Windows 11 gave me this week. I know, never install a Windows version less than two years old. It's been so long I forgot.


As if any Windows product, or Microsoft product in general, gives useful error messages.

Hex error codes, or generic "Something went wrong" errors are the norm at Microsoft.


Not quite as bad, but when I tried out a car in 2008 with Ford/Microsoft's SYNC system (voice interface), there were cases where pressing an unconfigured button would disable the sound system until you restarted the engine. See number 8:

http://blog.tyrannyofthemouse.com/2008/07/setting-sync-strai...

Also, whenever the power goes out, I always express thanks that the toilets still work, and they didn't introduce some npm-style pointless dependency.


More than possible, it's common as an attack: cryptojacking.


I found the video quality and responsiveness to be far worse, and I have to teach and work over Zoom. I tried running the desktop client virtualized as well, with similar results.

So… sketchy it is. I find myself wondering what it’s doing with 15% CPU when not in a meeting. Feelsbad.


I ran it under strace and after that it was never allowed to run again. I don't know exactly what it's doing, but it's nothing I want it to be doing.
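
(If anyone wants to repeat the experiment, the gist of it was something like the sketch below; adjust the syscall filter and output path to taste.)

  # log file- and network-related syscalls from zoom and all its child processes
  strace -f -e trace=%file,%network -o zoom-trace.log zoom
  # then skim what it touched and where it connected
  grep -E 'openat|connect' zoom-trace.log | less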

If you get an AMD GPU, the web experience is a bit more performant; the Intel iGPUs are not quite potent enough.


Mind elaborating on that? I just ran Zoom with strace but see nothing out of the ordinary.


Discrete GPUs support encoding/decoding a lot more streams than the integrated stuff. Depending on the generation of Quick Sync you have, the codec used by Zoom might not be hardware accelerated at all, though I think Zoom uses H.264, so you'd have to be on some pretty old hardware.


I don't know about zoom specifically, but most video conferencing programs counterintuitively don't use hardware video encoders/decoders.

They use the GPU for visual effects (background blur etc), but then do regular CPU video encoding. That's why they gobble so much CPU!

I think that helps with system compatibility. Hardware video encoders are full of bugs and corner cases, and it's very easy for someone to end up seeing a garbled image when the bitstream was truncated or bad in some way.


Is there something specific about the AMD drivers? I have a decent Nvidia card (I know, I know…)


Have you tried running the desktop client in a container? Lots of options for this on Linux.
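
Roughly what that can look like with Docker, as one example; the image name here is a placeholder for whatever unofficial Zoom image or homemade Dockerfile you use, and you may need different device/socket mounts for your particular audio and webcam setup:

  docker run --rm -it \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device /dev/video0 \
    --device /dev/snd \
    zoom-image-placeholder   # placeholder image name, not a real published image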

There shouldn't be any significant virtualization penalty. My guess is that it uses hardware video codec acceleration and that wasn't accessible in the VM you had set up? If the guest was Windows, it looks like NVIDIA has supported this since host driver version 465.

https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/g...

HTH!


The systemd slice approach is the same mechanism as containers.

The security problem is being able to talk to the same X server as trusted applications. X clients can do pretty much all the things you don't want Zoom to do: look at your screen, observe your keystrokes, etc. (Sadly, many of Zoom's features, like screen sharing, are also great things for spyware to do in the background. Not saying Zoom does this, but if you don't trust them, this level of access is the part that worries people, not consuming too much CPU.)
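
(For reference, the slice approach boils down to a transient scope with resource limits, roughly the sketch below; exact behavior depends on your cgroup setup, and it caps CPU/memory while doing nothing about the shared X server.)

  # run zoom in its own transient scope with cgroup resource limits
  systemd-run --user --scope -p CPUQuota=50% -p MemoryMax=2G zoom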


> The security problem is being able to talk to the same X server as trusted applications

Use Ctrl+Alt+F<number> to switch into another VT and run a different X server. Run Zoom in a container there.

I found this a lot more convenient than messing with nested X servers and other types of X11 client isolation. Each time you leave an X server and switch to another VT, the clients just perceive it as the monitor being turned off and on.
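
A bare-bones version of the idea, assuming your system permits starting extra X servers from a console login; the display and VT numbers are just examples:

  # switch to a free VT (e.g. Ctrl+Alt+F3), log in, then start a second
  # X server on display :1 running only zoom
  xinit /usr/bin/zoom -- :1 vt3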


Thank you for this, an easy solution that didn't cross my mind. I wanted to restrict Zoom from reading files (solved by a sandbox) while still sharing my screen from my normal environment (so a VM is out of the picture), while also preventing it from looking at the X clipboard and all that stuff.


I did, but had trouble getting it to work. It was a while ago though, and the image I was working from was old/unmaintained even at the time. Linux guest. Do you have any examples to hand?


Why run it when not in a meeting?


I do kill it, but I’m in and out of meetings all day, so it’s easy to forget.


It does keep running and using CPU after exiting though, so I always have to ‘pkill -f zoom’ it.


There's a desktop Zoom client. Also, the web version of Office 365 Word works… OK-ish.


I've been playing with GPGPU in Julia lately as well, and it really seems like things have come a long way in the last few years. Check out the JuliaCon 2021 talk on GPU compute if you're interested.



Probably more this talk about CUDA 3.0 https://live.juliacon.org/talk/UGX8YR or the workshop https://www.youtube.com/watch?v=Hz9IMJuW5hU


Yeah, I was thinking of the workshop.


Are AMD and NVIDIA GPUs on par nowadays? Or is it still an NVIDIA-first world when it comes to compute support?


There was actually a question about this earlier today: https://discourse.julialang.org/t/amdgpu-jl-status/71191

TLDR: The entire GPUArrays.jl test suite now passes with AMDGPU.jl. There are still some missing features and it is not as mature as the NVIDIA version, but this space is progressing rapidly and has benefited from the generic GPU compilation pipeline that was initially built for CUDA.jl.


Keep in mind that AMDGPU.jl requires ROCm, which is basically dead (no recent GPUs support it and none of those that do are consumer-grade).

The problem with AMD GPGPU is not software, it is that AMD literally does not care.


Definitely not dead; Vega is well supported, and with some tweaks, Polaris probably works too (although it definitely was broken in HIP around ROCm 4.0.0 or so).

I think AMD has some work to do on non-C++/Python ecosystem engagement for sure, but they've built a foundation that's quite easy to build upon and get excellent performance and functionality; AMDGPU.jl is a testament to that.


The gfx10 line (6800 XT et al.) probably works out of the box on a recent release. I think some are even officially supported. I test on a 5700 XT, which I don't think is officially supported. The change to 32-wide wavefronts took a while to resolve.

ROCm gets releases every few months or so. The LLVM project part is mirrored to GitHub in real time.


Can an Arch person explain to me why their approach is worth it over something with a more comprehensive package manager like apt or dnf? I don’t mind compiling programs myself when needed, but for most things I’m happy to not have to hand-hold my OS when it comes to updates.

From the wiki:

> Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list

Like… why?


I think you're confusing Arch with Gentoo or something - the Arch package manager is not from-source, it ships binaries just like apt. Perhaps you're thinking of the AUR, which does usually just host the PKGBUILD which you run makepkg on directly to compile, but that's analogous to something like an Ubuntu PPA, not the core package manager.

The main thing that people like about it is the rolling release model; new packages for virtually everything are updated within hours or days of an upstream release, with incredible practical stability.

> > Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list

> Like... why?

That's very much a "cover-your-ass" type disclaimer, like a ToS that says you have no right to expect anything to work. In practice, 99.99% of upgrades work completely unattended, and in the .01%, you see a failure, you go to the News site and it says "sorry, we made a backwards-incompatible push, please delete this path before upgrading" or something like that, you do it, and then everything is fine again for another 18 months.

Arch still has the vestiges of this reputation as a wild-west distribution for reckless code cowboys, but in practice it is the de-facto "set it and forget it" distro. I spend literally 10x less time worrying about my distribution and package manager when I'm on Arch than on any other computing system I've ever encountered.


I have used Arch Linux for the past 8 years. I've had three installations across four different laptops (I migrated one installation to a second laptop).

Your comment would be a really great description of my experience.


Same here. I really don't understand why it has a reputation for instability. I use it on my home server.


With Arch, there have been several times, on separate machines, when after updating I had to mess around with recovery stuff: manually booting and reinstalling GRUB, or whatever.

I really don't need cutting edge packages, so I don't use it any more, but I understand why people would want a lean system by default.


I've used Arch for the better part of the last decade and agree with this assessment as well.


> The main thing that people like about it is the rolling release model; new packages for virtually everything are updated within hours or days of an upstream release, with incredible practical stability.

Fedora Rawhide and openSUSE Tumbleweed are both nearly as up-to-date[1] as the Arch repos, but they have package managers with correct dependency solvers, and their repos are produced by continuous integration pipelines with tests. NixOS Unstable is more up-to-date than Arch Linux[1], and its package manager never breaks your system on upgrades and features automatic rollbacks no matter what filesystem you use.

‘I want a rolling release’ doesn't really explain the choice to use Arch in particular, imo, and it's weird that this extremely common answer to ‘why Arch’ talks about a feature that isn't really specific to Arch.

1: https://repology.org/repositories/statistics/pnewest


I had really bad experience with Fedora and Arch Linux just didn't give me any problems. Maybe it's better now, I don't know.


I don't recommend using Rawhide, but standard Fedora is pretty up to date anyway, so it's not necessary.


It is probably a reference to the AUR, but its use is not as common as some people seem to think and is somewhat discouraged (since, like PPAs, the packagers are not necessarily trusted). I would also have a hard time claiming that programs from the AUR are compiled by yourself. Yes, the software is usually compiled on your own hardware. On the other hand, the compilation process is handled by makepkg or an AUR helper. With an AUR helper, the process is remarkably like installing a program with pacman, since it will handle dependencies.


> It is probably a reference to the AUR, but its use is not as common as some people seem to think and is somewhat discouraged

Arch proper has like 60% the package count of openSUSE, fewer than 1/2 as many packages as Fedora, fewer than 1/3 as many packages as Debian, and fewer than 1/6 as many packages as NixOS.[1]

Maybe some of this is Arch having larger packages (splitting fewer of them out), but whatever fudge factor you wanna add in, the Arch repos are extraordinarily small. You have to get into really niche shit like Solus or Exherbo to find a distro with a smaller software selection than the Arch repositories.

The idea that Arch is as usable as most Linux distros without leveraging the AUR is ridiculous.

1: https://repology.org/repositories/statistics/total


What matters is the relevance of the packages in the main repositories, not the quantity. While the quantity will affect some people, it will primarily affect those who use obscure packages.

As for the fudge factor, it would be difficult to even agree upon criteria. For example: should python or rust libraries be included, given they have their own package managers?


> What matters is the relevance of the packages in the main repositories, not the quantity.

This is a good point. It would be awesome if we had the metrics to look at this. I would not be surprised if Arch had a good focus on popular packages.

And yeah, what's relevant will vary between users.

> As for the fudge factor, it would be difficult to even agree upon criteria. For example: should python or rust libraries be included, given they have their own package managers?

I don't think this particular case would be too tricky. We can probably exclude them, or just count them separately. Libraries packaged in the distro package manager are useful, but they're mostly useful for simplifying the process of creating new packages for the distro.


> The idea that Arch is as usable as most Linux distros without leveraging the AUR is ridiculous.

Not really. I don't have a single AUR package installed. The paperkey software used to be the only AUR package I had installed. It eventually became part of the official repositories.


>That's very much a "cover-your-ass" type disclaimer,

This is not true for all hardware configurations, or for all package combinations (including weird AUR ones) out there. If you Google whether this really happens in the real world, you will see that updates do indeed break things.

Also, keeping up with upstream does not mean you only get the new features; you also get the new bugs. For example, if you were using GNOME 3 a few years back, at each new GNOME release the forums and Reddit were filled with new memory leak issues, new plugin/extension breakage issues, and even GNOME not starting up.


Usually when GNOME doesn't start up on Arch it is due to extensions that are not supported by either GNOME or Arch. But you can usually find updated ones in the AUR, which fixes your issues quite quickly. I haven't had any issues with GNOME 3 on Arch since they moved to it, apart from extensions and a couple of things not well integrated in Wayland+GNOME. That said, it has been much more of a nightmare for me to install packages in Docker images of Ubuntu.


My point is that on Arch you can't just start your work day by updating your system; you might have to fix shit instead of working.

With an LTS distro, I know that when the notification for updates appears it's a security thing and it is safe to update.

>That said, it has been much more of a nightmare for me to install packages in Docker images of Ubuntu.

I am assuming you are trying to install something outside the official repos, like the latest Node/Python or some other recent stuff from a PPA. Those PPAs might not be of great quality, so you could get issues like conflicts. I am not a sysadmin or DevOps guy, so I can't tell you the correct way to install newer versions of stuff.


My point is that you will lose more time installing unsupported packages than you lose when Arch breaks, because it rarely breaks (less than once a year), and fixes usually take 5 minutes.


I think it depends on the user and hardware. Many years ago I had a laptop with an AMD GPU and CPU; it was about a year old when AMD dropped support for the driver. If I wanted decent compositing on Linux I had to stay with an older kernel and Xorg version, so at that time I used old Ubuntu LTSes and Debian. I decided never to use AMD again and got an Intel+NVIDIA PC, but now it seems NVIDIA is the one with shit drivers, and since I don't change my hardware often I will keep using my GTX 970 for as many years as it holds up.


Many years ago is not the same as Linux today, in general. Try it now. I haven't had those issues in more than a decade; everything has worked out of the box.


> I think you're confusing Arch with Gentoo or something - the Arch package manager is not from-source, it ships binaries just like apt. Perhaps you're thinking of the AUR

Sorry, what I meant was: when I need to manage the version of something carefully, I just compile it from source and that's OK with me. My understanding is that people use the AUR for this on Arch, and the pains don't seem worth it.

> The main thing that people like about it is the rolling release model

Fair enough, though I've been pretty happy with the pace of updates from, for example, Fedora.

> That's very much a "cover-your-ass" type disclaimer, like a ToS that says you have no right to expect anything to work.

Fair enough


> Sorry, what I meant was: when I need to manage the version of something carefully, I just compile it from source and that's OK with me. My understanding is that people use the AUR for this on Arch, and the pains don't seem worth it.

Nobody's making you use the AUR! If you want to 'make && sudo make install' you can do that all day long.

The AUR value add is that other people have already figured out recipes for how to take the equivalent of 'make && sudo make install' and generate a package you can manage with the package manager.

There exist plenty of tools to automate all AUR interactions, but none of these will ever be included in Arch's main repos, since they are not a core part of Arch itself. This is to maintain a sharp delineation between properly supported Arch packages and the more wild west AUR recipes. That said, once you download a PKGBUILD from the AUR, you can use the same official tools to build and install the package that are used for the distro proper.

When I want to build from source, and something isn't in the AUR, I just spend the 5 minutes to make a proper PKGBUILD for myself. It is very easy and it simplifies management of things.
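
For the curious, a stripped-down PKGBUILD looks something like this; the name, URL, and build steps below are made-up placeholders for a generic make-based project, not a real package:

  pkgname=hello-foo          # hypothetical package name
  pkgver=1.0.0
  pkgrel=1
  pkgdesc="Example of wrapping a plain 'make; make install' project"
  arch=('x86_64')
  url="https://example.com/hello-foo"
  license=('MIT')
  source=("hello-foo-$pkgver.tar.gz::$url/archive/v$pkgver.tar.gz")
  sha256sums=('SKIP')        # fill in a real checksum, e.g. with updpkgsums

  build() {
    cd "$srcdir/hello-foo-$pkgver"
    make
  }

  package() {
    cd "$srcdir/hello-foo-$pkgver"
    make DESTDIR="$pkgdir" PREFIX=/usr install
  }

Then `makepkg -si` builds the package and installs it through pacman, so it shows up in `pacman -Q` and can be removed cleanly later.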


Arch is the first system I have been able to support, fully. As in, 100% of the issues I run across with my distro, I can resolve. I used to run Ubuntu as my gnome desktop distribution, and when it worked (99% of the time), it was a superior experience to Arch. However when running Ubuntu I would inevitably run across some issue that seemed to require a level of sysadmin chops that I never have possessed. For the past year I've been running an Arch desktop, I have resolved every issue by using the Arch wiki and Google/ stack overflow. I suspect that partly, the Arch approach is appealing to those of us who prefer a simpler system, because those are easier to grapple with in a support context.


This has been exactly my experience as well. Ubuntu would have fewer issues initially, and almost no setup, but after setup it would break more often and always find ways to break in new and interesting ways that were very difficult to resolve, and I never could understand what was wrong.

With Arch, I was able to fix every issue that came up, full stop. But it required much more setup. It also breaks way less often. Prior to Arch, I never really felt that "fully-empowered Linux user" feeling. It was always voodoo. Now I DO get that feeling, and I really feel in charge and in control of my system. Interestingly, I still run Ubuntu Server for a couple of servers (I generally prefer Debian for servers, but that's a separate discussion), and I still find the occasional issues that come up to be difficult-to-resolve voodoo, despite having a much greater level of understanding of how Linux works and does things.


Would you recommend Arch to someone without a lot of Linux experience? Ubuntu has me thinking of switching to a different OS.


If you're interested, I'd recommend checking out the Arch wiki - imo it's one of the most comprehensive repositories of Linux info out there and pretty easy to follow. Even other distros use and link to it since it's very general and has a huge scope. Great reference for power users and starting point for beginners.


I'd recommend going for it, and as others have said, be prepared to read the Arch Wiki, a lot. I think what's most important is simply having the guts and the inspiration to keep going, even when you think you've lost all hope. Personally, I started out my Linux journey with Ubuntu, then distro hopped and tried PopOS, an Ubuntu-based distro with extra things here and there. Then I took a Linux course online (for free) that gave me the general fundamentals; it advertises itself as "The Start from scratch Linux course". After that, and after spending tons of time on Reddit seeing post after post and the memes about 'I use arch btw', I decided to try it out. It was definitely fun and a tad time-consuming at first, but since then I've learned a ton more about Linux and how things work. I've only had a broken system a couple of times. Again, the ArchWiki is your friend.


I had a similar experience when I was a kid, with Gentoo rather than Arch.

It doesn't stay hard for very long. And when it gets easier, it stays easier basically forever, no matter what distro you use.

Manually configuring everything with Arch is a pretty good way to learn a lot about what goes into a working GNU/Linux system, and not as painful as some people make it out to be.


Unpopular opinion: the only people I'd recommend Arch to are people without a lot of Linux experience (who are interested in learning).

Once you learn the basics of what goes into a distro and you know how to set things up and troubleshoot, there's no reason to use a distro with a package management story as backwards as Arch's.

After you're done with Arch, learn to write packages for a couple distros (practice building them on something like OBS[1], which lets you build and distribute packages for almost any distro). Then choose your distro based on the quality of the tooling it is built on and package whatever you need that isn't already in it.

1: https://build.opensuse.org/


What's so backwards about Arch's package management? I've written PKGBUILDs for my own software and used that to make packages I could install on my system. Works pretty well in my experience.


Lots of cosmetic warts, like using nothing but combinations of CLI flags instead of a subcommand interface, or needing subshells and pipelines to achieve functionality that's native and obvious in other package managers. Some of my favorites kinda suck that way too, though.

Poor support for managing multiple repositories: no facilities for it built into pacman, no notion of vendor (which is useful for managing packages that may be duplicated across repositories, but with different versions or build options), the main repos are small so a huge number of packages you might want to use have an unofficial status that is much more markedly second-class than on other distros (must be compiled from source/no binary caching, installation process is either very manual or requires unpackaged tools).

No support for treatment of past transactions in the CLI for ‘undo’-like behavior or rollbacks.

No tools for managing the behavior of the dependency resolver, like to make upgrades less destructive or to automatically retry solving with more aggressive solutions that involve more downgrades and removals.

No plugin architecture, so additional functionality like integration with CoW filesystems for snapshotting requires wrappers, which is clunky and may not be composable.

No support for declaring a version for pinned packages (just the stateful IgnorePkgs, which says ‘keep whatever I have’) or restricting upgrades based on constraints or classes (e.g., in Gentoo Portage).

And it doesn't really support any of the more interesting recent innovations, like installation/upgrade atomicity, installing multiple versions of things side-by-side, installing packages on a per-user basis, running multiple package management operations at the same time.

But the single most backwards thing is the whole situation with the AUR being in eternal limbo but also a de facto standard due to the small size of the official repositories.

Pacman does have some outstanding strengths relative to most package managers: speed (by a wide margin vs. most distros) and ease of writing packages. Another thing is that if you're unbothered by the awkward status of the AUR (and having to build its packages from source), Arch users don't typically do much repo management.


My personal experience with Linux has been Ubuntu ~1 week -> Debian 2 days -> Arch 11 years now.

It will require some time learning and reading through the wiki. I would definitely recommend trying it in a vm first.


I recommend checking out EndeavourOS. It's an Arch based OS that sets you up with a friendly installer and a desktop environment out of the box, then gets out of your way. You don't get the fun experience of installing arch from scratch but it's a gentler introduction to the ecosystem.

I switched from ubuntu to Endeavour as my first dive into Arch recently and have been happy with it.


Arch recently introduced a general prompt-style installer script that should be able to help you set up and install a working Arch on any system.

https://python-archinstall.readthedocs.io/en/latest/installi...


You can give it a try, but be prepared to spend a lot of time reading the wiki.


This is the best reason I've heard stated for preferring Arch. Thanks for sharing!


> a more comprehensive package manager like apt or dnf

I don't see how apt or dnf are any more comprehensive than pacman. What do you mean by that?

Before Arch, I used Fedora. It used yum as its package manager. That thing managed to corrupt its own databases at least twice during normal usage. Distribution major version upgrades always caused problems.

I never had problems like these after switching to Arch.

> I don’t mind compiling programs myself when needed

You only need to compile user packages. Official Arch Linux repositories host binary packages. You can download the PKGBUILD if you want.

> for most things I’m happy to not have to hand-hold my OS when it comes to updates.

99% of the time, updates just work for me. Sometimes they introduce a few .pacnew files; I diff and merge them with my local files and that's it.
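
(For what it's worth, pacdiff from pacman-contrib automates finding those files; a typical run looks something like this:)

  # list outstanding .pacnew/.pacsave files without touching them
  pacdiff -o
  # or walk through them interactively with your $DIFFPROG (vim -d by default)
  sudo pacdiff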

> Like… why?

Sometimes manual intervention is necessary. Usually it's not a big deal. The news tells you what to do and, most importantly, why you must do it.

The most complicated maintenance I ever experienced with Arch was when it switched /bin to /usr/bin.


Not the parent poster, but to me it means much less manual intervention, more hooks, etc.

For instance, for debian I can just turn on automatic updates and basically never need manual intervention.

For arch I am not supposed to use automatic updates and have to (!) read the news.

Why? Why does arch need more manual intervention? Sure, I can do that but it just seems like a pointless waste of time.


> For instance, for debian I can just turn on automatic updates and basically never need manual intervention.

I question what sort of updates you're actually getting. Debian is known for being extremely outdated. This is a major reason for its stability.

Sometimes things change way too much. Sometimes they change in incompatible ways. Sometimes changes come from upstream and there's nothing the distribution can do about it. In these cases, our attention is required. Things break and we need to fix them. We need to adapt.

In order to avoid this, Debian must be outdated. It must avoid updates that break things and this necessarily means you end up using software that's years old. That's fine, it's a perfectly valid trade-off. I'm sure there are a lot of users out there whose wants and needs are perfectly filled by Debian.

If someone's interested in Arch, it's likely because of its huge repository of up-to-date unpatched software. The Arch user must be able to deal with change. Sometimes it's unavoidable and Arch culture makes it clear that users are expected to put such effort into their systems.


> years old.

where years <= 2. Not a big deal, but yes, it can be annoying. That said, upgrades between major versions are also usually automated and well tested (since they have lots of time to prepare and test them).

> users are expected...

The difference is what is considered "unavoidable". In particular, on other distros packagers are supposed to ... and only if that is not possible users are supposed to ...


I don't think Debian's automatic updates do major release upgrades automatically, do they? Those IIRC do require manual intervention - if nothing else you need to run the installer & possibly respond to prompts, but possibly more depending on your system.


> I don't think Debian's automatic updates do major release upgrades automatically, do they? Those IIRC do require manual intervention

For major release upgrades, the official upgrade procedure is to follow instructions like these: https://www.debian.org/releases/stable/amd64/release-notes/c...

So yeah, you have a somewhat manual upgrade process once every two years, if you're not on one of the rolling releases (‘testing’ or ‘unstable’).

On the other hand, you do get to choose when you make those updates. You don't get caught by surprise with them because you forgot to read the news.

Debian's documentation on Testing and Unstable[1] contains some snippets that may feel familiar to Arch users, including this very relevant bit:

> Consider (especially when using unstable) if you need to disable or remove unattended-upgrades in order to control when package updates take place.

https://wiki.debian.org/DebianUnstable#What_are_some_best_pr...


> I don't see how apt or dnf are any more comprehensive than pacman. What do you mean by that?

In terms of the core functionality of package managers, they both have more robust dependency resolvers (and dnf's is actually complete[1]).

In the case of dnf, it's also more ‘comprehensive’ in the sense that the singular CLI tool handles more package management functionality (e.g., it includes repo management), and in the sense that it supports plugins.

They're also both more comprehensive in the sense that you don't need to resort to one of a dozen third-party ‘wrappers’ in order to use the bulk of packages available in those distros' ecosystems.

1: See the discussion of completeness here: https://arxiv.org/pdf/2011.07851.pdf


> they both have more robust dependency resolvers (and dnf's is actually complete[1])

> 1: See the discussion of completeness here: https://arxiv.org/pdf/2011.07851.pdf

That's interesting. In what ways are these resolvers superior to pacman? I never had dependency resolution issues. Can you help me understand with concrete examples? Pacman is not cited anywhere in that paper.

> you don't need to resort to one of a dozen third-party ‘wrappers’ in order to use the bulk of packages available in those distros' ecosystems

Are you referring to the AUR? I believe that's more of a manpower issue. Arch is a smaller project compared to the other major distributions. There aren't enough maintainers for all packages.


> That's interesting. In what ways are these resolvers superior to pacman?

One good example is that even though PKGBUILDs can contain version constraints (see an example here[1]), that metadata is not always present and so it is underutilized. Pacman doesn't support ‘partial upgrades’[2] (once you refresh your package lists, installing anything is ‘unsupported’ until you upgrade everything), and this is why.

(I also think that paper's notion of ‘completeness’ could probably be enriched somehow, because I've seen situations where `apt-get` will crap out but `aptitude` will offer a ‘compromise’ solution which involves downgrading some packages or removing some, and generally package managers based on libsolv do even better IME. Here Arch likely falls flatter.)

Another depsolver related issue in Pacman (related to the lack of partial upgrades) is the lack of distinction between upgrades and dist-upgrades. In apt and dnf, upgrades are non-destructive by default, meaning that they don't offer solutions that involve removing or downgrading user-selected packages. Pacman has no such distinction.
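
Concretely, the distinction I have in mind is the one apt exposes like this (just a quick illustration of the two modes, nothing pacman offers):

  # never removes installed packages or pulls in new ones; holds back upgrades that would
  sudo apt-get upgrade
  # allowed to install/remove packages in order to complete the upgrade
  sudo apt-get dist-upgrade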

> I never had dependency resolution issues. Can you help me understand with concrete examples?

One fairly common case is that Arch just ignores the dependencies of AUR-installed packages at install time, freely upgrading packages without respect to reverse-dependencies that aren't declared in a repo.[4] Hence, ‘if packages in the official repositories are updated, you will need to rebuild any AUR packages that depend on those libraries’... every single time you upgrade, if you've installed anything from the AUR, it can leave your system with broken packages. Apt and dnf, in contrast, treat every package you install the same way. Additionally, Arch packages don't always declare version constraints for their library dependencies, and there's no CI that tests for ABI changes (there is some in Debian, although such tools can't work perfectly). So you have to use another tool (apparently one popular choice is some script from the Arch forums in 2005, lol[5]) to scan for such breakages, or else just discover them when packages don't work.

On the other hand, when Arch does consider the version constraints of installed packages, the lack of partial upgrades can be problematic for downstream distros. Any version constraints placed by downstream repos on dependencies shared with upstream can just leave you totally unable to upgrade anything at all for a while.[6]

1: https://github.com/archlinux/svntogit-packages/blob/master/d...

2: https://wiki.archlinux.org/title/System_maintenance#Partial_...

3: https://wiki.archlinux.org/title/Pacman/Rosetta#Basic_operat...

4: https://wiki.archlinux.org/title/Arch_User_Repository#Instal...

5: https://bbs.archlinux.org/viewtopic.php?id=13882

6: https://superuser.com/questions/1497098/pacman-unable-to-upd...


> Pacman doesn't support ‘partial upgrades’[2] (once you refresh your package lists, installing anything is ‘unsupported’ until you upgrade everything), and this is why.

> Another depsolver related issue in Pacman (related to the lack of partial upgrades) is the lack of distinction between upgrades and dist-upgrades.

Yes. Personally, I believe that these are features rather than issues. I don't ever want my system to be in a partially upgraded state. I treat inability to fully upgrade as a maintenance problem that I have to solve.

I'm sure there's a lot of people out there who get a lot of use out of these partial upgrades. I'm not one of them. Stuff like apt updates vs upgrades only confused me when I used those systems. I suspect other Arch users have similar opinions.

> every single time you upgrade, if you've installed anything from the AUR, it can leave your system with broken packages

> there's no CI that tests for ABI changes

Yes, those are fair points. I suppose I don't feel this pain because I don't actually use the AUR very often. When ABIs are broken, Arch maintainers will recompile and update all affected packages. Naturally, AUR packages will not be included...


> Yes. Personally, I believe that these are features rather than issues. I don't ever want my system to be in a partially upgraded state. I treat inability to fully upgrade as a maintenance problem that I have to solve.

You don't ever have to install without upgrading on any other package manager or distro, either, though. And the way Pacman refuses to run `pacman -Syu` if some packages can't be upgraded doesn't really save you from partial upgrades, because nothing actually stops you from running `pacman -Sy <package name>`, and that is a thing people do.

> Yes, those are fair points. I suppose I don't feel this pain because I don't actually use the AUR very often. When ABIs are broken, Arch maintainers will recompile and update all affected packages. Naturally, AUR packages will not be included...

For some years (longer than I ever continuously ran Arch) I used to run Sabayon Linux. It had its own package manager, Entropy, which was hugely impressive to me at the time. It supported all of Portage's masking facilities for managing and constraining versions, but it was centered on binary packages, and it was really, really fast.

At the same time, it was sort of compatible with Portage, so you could install software with `emerge` and then reconcile the Entropy package database with the newly-installed outside packages, I think with `equo spmsync`, or something like that. Of course, working this way was totally unsupported, but it was also perfectly reliable, if you knew what you were doing. Just make sure to run `revdep-rebuild && equo spmsync` after every `equo upgrade`, or whatever.

In a way, it was very similar to Arch, except instead of the AUR, you had all of Gentoo, and, if you wanted, the overlay system (its third-party repos). The integration was a little tighter, and Portage was/is a full-fledged package manager that sees use as a core tool for other distros, not one of a dozen competing wrappers around an unofficial source control repo and Entropy, so that side of things was much more powerful as well.

It was pretty cool. But the whole bifurcation between the worlds of binary packages and the source-based package management system was a persistent annoyance. There was always some hope and desire that in the future, they could be better integrated.

Arch seems content to have this kind of eternal twilight, with a package manager that's sort of source-based and sort of binary, and to get a whole package manager out of the source-based side you need some third-party wrapper tools. Then the AUR is this de facto source-based community repo with extraordinarily low packaging standards, and it never gets binary caching. It just feels half finished, and the roadmap for Arch seems to be to leave it that way forever. (I'm sure many packages graduate into the community repos all the time, which is great.)

But there are full-fledged source-based package managers now (Nix, Guix, Homebrew) where binary caching is totally transparent. There's no two kinds of repos, one source-based and one binary, and if you modify a package that's part of the main repos, the package manager just chugs along and builds it from source like nothing happened. And when it's done, it's a first-class citizen of your system no matter where it came from.

You can basically learn not to use things from the AUR because they're second-class, especially if you maintain your own local repository or you contribute to the Arch repos. It seems lots of people do. But the way many, many people use the distro is still fundamentally split between two worlds, just like the way I used Sabayon more than 10 years ago.


As another plus, the Arch wiki itself is absolutely fantastic. People will point to the Arch wiki even when running other distributions. For example, it is the place to go when doing something like GPU passthrough to another OS running on qemu/KVM.


Which, honestly, is grating.

It's great that the Arch wiki is as good as the Gentoo wiki was in 2002, but it would be even better if the Arch wiki actually acknowledged the people doing the work. For GPU passthrough, for example, the initial author/current maintainer of VFIO published a development blog six years ago with a multi-part series explaining VFIO and passthrough from the bottom up: http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part...

This is not referenced anywhere in the Arch wiki, despite the fact that it's by the literal author, most of the steps in their wiki haven't changed in the intervening years, and whatever place the authors of that wiki page eventually cribbed it from almost certainly traces back to the original blog.

The Arch wiki contributors, in this sense, aren't great netizens. Worse, the Arch wiki (and various subreddits) are almost as bad as the Arch/Ubuntu forums were in 2005. They often lead to a bunch of "shotgun debugging" where users are copy and pasting things they don't understand at all in the hopes that it will fix whatever problem they're encountering for reasons they won't understand.

Arch is fine, and it has its place. There are some brilliant people using Arch. The community in general is full of people who intentionally shoot themselves in the foot and are then proud that they find superglue for the wound on the Arch wiki instead of using a distro with better engineering practices where they never would have had these problems at all. The mistaken belief that doing any of this somehow "teaches" you meaningful things about Linux as opposed to solving real problems (since 99% of the "problems" Arch users encountered will never be seen on other distros, due to the fact that the maintainers carefully ensure there are limited footguns out of the) is terrible.


> Worse, the Arch wiki (and various subreddits) are almost as bad as the Arch/Ubuntu forums were in 2005. They often lead to a bunch of "shotgun debugging" where users are copy and pasting things they don't understand at all in the hopes that it will fix whatever problem they're encountering for reasons they won't understand.

This drives me absolutely fucking nuts.

> The community in general is full of people who intentionally shoot themselves in the foot and are then proud that they find superglue for the wound on the Arch wiki instead of using a distro with better engineering practices where they never would have had these problems at all.

This. A thousand times, this.

> The mistaken belief that doing any of this somehow "teaches" you meaningful things about Linux as opposed to solving real problems (since 99% of the "problems" Arch users encountered will never be seen on other distros, due to the fact that the maintainers carefully ensure there are limited footguns out of the[m]) is terrible.

Idk. There are definitely some Arch-specific footguns (like the lack of distinction between upgrade and dist-upgrade, so that ordinary pacman updates can do things like uninstall literally all of your kernels (lmfao)). But I don't think the basic approach is necessarily fatally flawed. When I installed Gentoo for the first time as a kid, getting everything working taught me:

  - how to identify hardware using common utilities (like `lsusb`, `lspci`, and `lshw`)
  - how to set up a chroot environment, how to use a chroot to manage or repair another system
  - how to install and configure a bootloader, what configuration a bootloader needed
  - how to use basic CLI networking tools to get online
  - how to manage kernel modules (blacklisting them or adding them to initrd), although admittedly a good distro will *usually* be able to anticipate those needs for you
  - how to think about package version constraints and manage packages from different sources
  - fundamentals of building and managing software (i.e., what compile-time options are, how to think about dependencies and reverse dependencies)
Probably the first two are the most valuable, and I guess nowadays the Arch installation tools basically hide what is going on in the chroot environment from you (and actually make it tricky to customize, like if you want to add extra mountpoints to it). But I don't think the whole ‘set everything up yourself once’ approach is worthless.


That's sort of my point. I'm also an old fogey in Linux hipster-land, who started off with RH5, then moved to Mandrake, then Gentoo somewhere around the kernel 2.2->2.4 transition.

It was really important then to know how to identify hardware so you could actually have it supported in your kernel (I don't remember if `genkernel` didn't exist yet or whether I was just trying to squeeze out as much performance as I could -- probably the latter). But it was also the era of winmodems, winprinters, risk of actual damage to your monitor if you screwed up the modes in X11R5/6.conf, we had to use `lilo` and remember to update it every time, etc, etc.

A lot of the people I talk to know who end up in the same positions as me still use those skills -- but we use them at distro vendors to make sure that 'normal' users never need to worry about it. Honestly, with the way Linux has been adopted, my expectation would be that by the time I exit the industry, people with the skillsets you and I have will be rare, and mostly unnecessary. Linux "just works" on the vast majority of hardware these days, and we old fogeys put a lot of blood, sweat, and tears into making that so.

It's not that I think that it's useless, it's that it's not _required_ knowledge anymore, and anyone who is convincing themselves that it's giving them deeper knowledge considering the vast increase in complexity is kidding themselves. In a pre-EFI world where all you needed was a binary (any binary) located at `/init` which "knew" how to handle everything else, it was great.

At this point, if I were starting from scratch, I'd tell people try to really understand how EFI works (https://www.happyassassin.net/posts/2014/01/25/uefi-boot-how...), get a handle on IOMMU groups+SRIOV/nvme namespacing/whatever, and learn as much as possible about network namespacing and how SDN/CNI work, so "how does a packet get from the outside all the way to a pod || EC2/openstack instance || whatever" is reasonable, and that's not even touching "how does `dracut`/`mkinitcpio` come up and hand off to systemd+cgroups", because those are the areas where things are likely to blow up, rather than "whoops, you forgot to build the driver for your HBA into your kernel and now you can't boot", or "X11R6 completely shit the bed after a driver update broke your Xinerama config".

Different years, different problems, different things are important. What was crucial for us to learn in 2000 hardly matters in 2022 when an Arch live USB will more or less boot on any system anywhere and get you a working framebuffer, with a couple of commands to bring up your system.


> Can an Arch person explain to me why their approach is worth it over something with a more comprehensive package manager like apt or dnf?

Can you explain to me how dnf or apt is more comprehensive than pacman? I use all three: arch on my laptop, fedora on my desktop, ubuntu on my work laptop. I do not see the difference in comprehensiveness.

There are some housecleaning tasks pacman won't automatically do for you because doing so could break things you rely on. The same is true on Fedora. It'll leave configs untouched, unless you run rpmconf, which might then just break your stuff:

> If you use rpmconf to upgrade the system configuration files supplied with the upgraded packages then some configuration files may change. After the upgrade you should verify /etc/ssh/sshd_config, /etc/nsswitch.conf, /etc/ntp.conf and others are expected. For example, if OpenSSH is upgraded then sshd_config reverts to the default package configuration. The default package configuration does not enable public key authentication, and allows password authentication.

(From https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-u...)

The problem is ultimately one of churn, and how the system deals with it. Anecdotally, Ubuntu tries hardest to deal with it, and my experience is that Ubuntu breaks (or suddenly stops behaving the way you had it configured) the most during updates. The others break less but require some attention from you.

Some of the churn is caused by distros, some of it is caused by the upstream projects. Churn is big in the Linux world.


> Can an Arch person explain to me why their approach is worth it over something with a more comprehensive package manager like apt or dnf? I don’t mind compiling programs myself when needed, but for most things I’m happy to not have to hand-hold my OS when it comes to updates.

It sounds like you may be confusing Arch with some other distro. You rarely if ever need to compile anything yourself. Pacman works just like apt or dnf, i.e. resolves dependencies, downloads and installs packages for you, unless you have something specific in mind.


In the last two years I’ve been on the arch-announce mailing list I think I have only needed to respond to breaking updates twice.

I chose Arch for three reasons. 1. The official repos and the AUR have nearly every package I have ever needed, and packages are usually updated soon after a release. 2. Being rolling release, I never need to reinstall Arch, just run updates periodically. 3. I love learning, and I have learned more about Linux and system maintenance from Arch than from anything else. While there might be a slightly larger cost in time spent setting up (and maintaining when I break something) Arch, I have decided that the tradeoffs are worth it to me.


I'm honestly not sure what you mean by apt or dnf being more comprehensive. The feature sets of all Linux package managers are pretty similar. The major difference with Arch is you're heavily recommended not to do partial upgrades, but pacman will do it if you really want to. That's a difference in update philosophy between batched releases and rolling releases, not a difference in the package managers.

If you mean comprehensive in terms of available software, corporate and commercial software seems to often offer debs and rpms but not tarballs installable by pacman. On the other hand, for anything open source, the Arch official repository plus AUR has way more packages available than the Debian/Ubuntu and Redhat official repos, and having everything in one AUR for third-party packages is much more convenient than the apt/dnf way of adding a repo per vendor.

As for checking the home page every time you upgrade, you really don't need to. I think that's to stave off complaints if something breaks, because it might, since you have full freedom to set things up however you want and Arch can't guarantee the standard packages with standard settings are going to work for the combinatorial explosion of possible individual setups everyone might have. But in five years of daily Arch use (I have it as the OS on 8 devices in my house right now), I've auto-upgraded daily and experienced one breakage I can think of, two days ago when certain graphical apps stopped showing a visible window. It was annoying and I still don't know why it happened (guessing something about the Wayland/NVIDIA combo is still creating issues), but it fixed itself on the next upgrade 7 hours or so later.


> package managers are pretty similar. The major difference with Arch is you're heavily recommended not to do partial upgrades, but pacman will do it if you really want to. That's a difference in update philosophy between batched releases and rolling releases, not a difference in the package managers.

No, it's a difference in the package managers. Pacman doesn't take library versions into account when resolving dependencies. That's why partial upgrades aren't supported: the only way to ensure every package you have installed is linked against the versions of its dependencies you have installed is to have every package on your system come from a single snapshot in time of the whole repo package tree.

Better package managers don't have this problem and understand how not to break your system with partial upgrades. This matters as soon as a new version of a package has a bug and you want to downgrade it, or when you build and install a package from the AUR which, when you later update your system, could need rebuilding to continue working, yet pacman has no way to tell you when this is the case.


> > package managers are pretty similar. The major difference with Arch is you're heavily recommended not to do partial upgrades, but pacman will do it if you really want to. That's a difference in update philosophy between batched releases and rolling releases, not a difference in the package managers.

> No, it's a difference in the package managers. Pacman doesn't take library versions into account when resolving dependencies. That's why partial upgrades aren't supported: the only way to ensure every package you have installed is linked against the versions of its dependencies you have installed is to have every package on your system come from a single snapshot in time of the whole repo package tree.

Ding, ding, ding! This is the same dumb behavior that Homebrew has for the same dumb reason that the lead maintainer discussed here on HN just a few days ago.[1]

Pacman is extraordinarily naive as a package manager. And that's just talking about the absolute bare minimum, main job of a package manager, never mind the more peripheral features (like repo management) that are commonly incorporated into modern package managers like dnf and zypper nowadays, the lack of useful abstractions and metadata (like the representation of vendor and vendor change), or the comparatively obtuse CLI vs. modern subcommand interfaces.

If Arch Linux is for users who want to understand their systems, both because having them set it up themselves is supposed to ensure they understand it better and because its tooling is supposed to be kept simple so as to make it easier to understand, one would think these differences would be more transparent to Arch users. But perhaps in many cases it's been a while since they used other tools, and they never dug that deep into them.

1: https://news.ycombinator.com/item?id=29081756


Sometimes when visiting Arch forums the undertone is a little gatekeep-y, and people asking for more beginner-friendly ways to install software, like GUIs or AUR helpers, get answers like 'You don't. You compile it yourself from the command line'.


For a more beginner-friendly approach to Arch, try Manjaro. The user experience is much better: you can choose one of several desktop environments and get sane defaults, it has its own system that can easily swap between different drivers and kernels, and it's generally very robust overall. Also, the forums are friendlier towards beginners, so I view it as Arch without the elitism. The package updates are usually several weeks behind Arch (since it uses a curated snapshot of Arch), but I view this as a plus (in reality you don't need updates that bleeding-edge).


I say this as a Windows user at my workplace, but that's not gatekeeping; it's upholding the ethos of the distribution. I've used Arch quite a bit as a hobby Linux, and the reality is that I've learned more about Linux via the Arch documentation and by being curious about how to resolve things instead of demanding an easy path. The knowledge gained produces the easy path.


The ethos of the distribution is gatekeeping :)


> Can an Arch person explain to me why their approach is worth it over something with a more comprehensive package manager like apt or dnf? I don’t mind compiling programs myself when needed, but for most things I’m happy to not have to hand-hold my OS when it comes to updates.

People who like Arch because they think the AUR is actually good hate doing repo management. What they like about the AUR is that it's One Big Repo, and it (unlike the barren Arch repos themselves) is pretty comprehensive.

> > Before upgrading, users are expected to visit the Arch Linux home page to check the latest news, or alternatively subscribe to the RSS feed or the arch-announce mailing list

> Like… why?

Because Arch's interpretation of ‘keep it simple, stupid’ means they are allergic to engineering in their distro tools. As a result, their package manager has deficient dependency resolution behavior. This is exacerbated by the fact that the devs make relatively little use of things like transitional packages, for some reason. But Pacman is fast, because by choosing not to have a complete dependency solver, it avoids tackling a problem with high computational complexity. For some people, that part of the user experience is good enough that it allows them to forgive Pacman for doing insane things like pointlessly breaking installed software every now and again.


You don't have to run Arch if you are happy with your Ubuntu or whatever other distro. I run Arch because I like trying out new software when it's released, not when the maintainers of Ubuntu decide to include it in the next release cycle. You are pretty much always on the latest kernel, for good or bad. The AUR is also a gem compared to apt when it comes to modifying in-tree packages and maintaining them with the system package manager.

But well, if you are happy with your distro you don't have to use anything else.


Sometimes there are manual interventions that you may have to perform.


Yeah, exactly. Why?


You can see the announcements at https://archlinux.org/. The most recent is from June:

> Starting with libxcrypt 4.4.21, weak password hashes (such as MD5 and SHA1) are no longer accepted for new passwords. Users that still have their passwords stored with a weak hash will be asked to update their password on their next login. If the login just fails (for example from display manager) switch to a virtual terminal (Ctrl-Alt-F2) and log in there once.

I wasn't affected. The next one before that was February, and also didn't affect me.
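
For anyone wondering whether they'd be hit by that particular change, the scheme is visible in the hash prefix in /etc/shadow. A rough sketch (run as root; the prefix list is abbreviated from crypt(5), so treat it as an assumption rather than exhaustive):

    # Sketch: list accounts whose stored password hash uses a legacy scheme.
    # Modern prefixes (abbreviated): $6$ SHA-512, $5$ SHA-256, $y$ yescrypt,
    # $2b$ bcrypt. "$1$" is md5crypt; a bare 13-character field is old DES.
    MODERN = ("$6$", "$5$", "$y$", "$2b$")

    with open("/etc/shadow") as shadow:
        for line in shadow:
            user, field = line.split(":")[:2]
            if not field or field.startswith(("!", "*")):
                continue                  # locked account or no password set
            if not field.startswith(MODERN):
                print(f"{user}: legacy hash, expect a reset prompt at next login")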

I think I could count on the fingers of one hand the number of such planned manual interventions that have affected me in the 6 years I've been running Arch on my laptop. It's approximately the number of times I would have had to reinstall my OS from scratch in that time on most other distros, based on extensive prior experience of whole-distro version upgrades messing things up in mysterious ways. I put this down to the rolling release, and to the Arch devs not being lulled into assuming everyone's running a fresh, pristine installation.

I have a 6-year-old, heavily used (including for work), heavily customised development laptop on which I have installed the OS exactly once, and I have absolutely no reason to contemplate starting again from scratch. It's bang up to date and rock solid. You'd have to pry Arch from my cold, dead fingers.


Looking through the latest advisories of upgrades requiring manual intervention, most of them seem to involve files that would otherwise be mishandled. I guess they want to avoid "being smart" and trying to second-guess the system setup.

Other distributions attempt to migrate configs / tools, which mostly works, except when it doesn't. Earlier today I upgraded an Ubuntu 21.04 machine to 21.10. The computer is a glorified Spotify Connect player, so I don't configure anything on it. But for some reason, after the reboot, there's some issue with gvfsd-something-or-other. I never configured anything related to that. Is this normal / expected? No idea. A quick search of the release notes [0] yields nothing.

So I guess there are always tradeoffs. Arch seems to adopt more of a hands-off approach, where you only get a basic system and then you build your own environment, so there are many possible variations. In contrast, on Ubuntu / Fedora / etc., the devs can reasonably expect that a system is in a roughly known state.

[0] https://discourse.ubuntu.com/t/impish-indri-release-notes/21...


Well, manual interventions are rare [0], and almost all of them nowadays are due to the odd package restructuring. Usually the package manager will notify you about a conflict between two packages and won't proceed (so nothing will break). At that point you can check the website to see whether you need to force-install a package or two.

Although Arch is definitely more technical than most distributions, its perceived difficulty is mostly a meme at this point. The last large, possibly system-breaking change was almost 10 years ago [1], and even then the solution was quite trivial. If you force conflicting updates without reading the news then you're in for a bad time, but that's true of any distribution. In general pacman is very conservative and won't leave your system partially updated. There is still a chance that upstream updates break things, but that's the nature of the rolling-release model.

Manual compilation is not necessary if you stick to the official repositories. If you need a package from the AUR then a ports-like build step is required (roughly what the sketch below automates). I have packaged stuff for both RPM- and DEB-based distributions, and nothing really beats the simplicity and flexibility of the Arch Linux packaging tools.

[0]: https://archlinux.org/news/

[1]: https://archlinux.org/news/the-lib-directory-becomes-a-symli...
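
For anyone who hasn't seen the "ports-like" workflow: it boils down to cloning the package's AUR repo, reading the PKGBUILD, and running makepkg. A minimal sketch of the loop that helpers automate (the package name is a placeholder, and this assumes git and the base-devel group are installed):

    # Sketch of the manual AUR workflow; helpers just automate these steps.
    import os
    import subprocess
    import tempfile

    def build_from_aur(pkgname: str) -> None:
        with tempfile.TemporaryDirectory() as workdir:
            subprocess.run(
                ["git", "clone", f"https://aur.archlinux.org/{pkgname}.git"],
                cwd=workdir, check=True,
            )
            pkgdir = os.path.join(workdir, pkgname)
            # Always read the PKGBUILD before building something from the AUR.
            print(open(os.path.join(pkgdir, "PKGBUILD")).read())
            # -s: install build deps via pacman, -i: install the result,
            # -r: remove the build deps afterwards.
            subprocess.run(["makepkg", "-sir"], cwd=pkgdir, check=True)

    build_from_aur("some-aur-package")   # placeholder name, not a real package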


It's the philosophy of Arch to stick to vanilla packages as much as possible and keep things simple. It's also a rolling distro with no fixed release cycles. When you upgrade Fedora, Ubuntu, etc., they run various scripts to migrate existing configuration. On Arch, pacman simply installs the vanilla packages whenever you tell it to update. Very rarely, maybe once or twice a year, there is some breaking change that requires manual intervention. Yeah, they could automate it all, but that takes effort and breaks in other ways.


Because many of the changes are large enough that something will break, and that's expected.

The KISS principle applies here.

If a config file format changes in a service between version 3 and version 4, should the package manager be responsible for it? Or the admin?

Sometimes it's not just a matter of merging changes in; see the sketch below.

In a non-rolling release distribution, you only need to worry about those changes during major upgrades. In a rolling release distro, they can change at any time. It's no different than a user reading the release notes for Debian 11 while upgrading from 10, except the upgrades are constant.
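
In practice pacman's answer to that question is to leave your edited file alone and drop the packaged default next to it as a *.pacnew, leaving the merge to the admin. pacdiff from pacman-contrib is the proper tool for reviewing these; the sketch below just lists what's waiting (some directories under /etc may need root to read):

    # Sketch: find leftover *.pacnew / *.pacsave files under /etc that still
    # need a manual merge. Use pacdiff (pacman-contrib) for the real workflow.
    from pathlib import Path

    for suffix in (".pacnew", ".pacsave"):
        for candidate in sorted(Path("/etc").rglob(f"*{suffix}")):
            original = candidate.with_name(candidate.name[: -len(suffix)])
            status = "original present" if original.exists() else "original missing"
            print(f"{candidate}  ({status})")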


For one, not putting every single edge case into the package manager makes its behaviour easier to understand.


Pacman now tells you when there is an announcement on the website. Most of these announcements are due to some issue introduced in a package. I haven't had to act on most of them, since they are usually resolved quickly and the updated packages fix the issue by the time I update the system. Rarely there has been a breaking change in a package that needed some easy manual intervention; I have maybe done this 5 or 6 times in 15 years. Compared to that, I find Ubuntu's upgrade process more tedious (I haven't tried it in recent years, so maybe it's better now). That said, I probably wouldn't use Arch for a production environment and would stick to Ubuntu, but for a home/work system, I love it.


They announce (known) breaking changes that may require manual intervention. Meanwhile, when my Ubuntu upgrade breaks something, there are never any release notes or documentation to help me fix it. After my previous Ubuntu upgrade at work, the screen locker segfaults instead of locking my screen, and clicking links inside Slack crashes Slack...


You can either deal with it once in a while, or you can let it pile up for years and then, when a new major release comes out, spend days troubleshooting or reinstalling from scratch.

I think either method is fine, depending on the circumstances. Your choice.


Upstream breaks your stuff, you roll into the incompatible release, and you get to fix it yourself.

There's no fixed release schedule that promises total compatibility at the cost of running years-old releases.


Not to mention all the people that were on Instagram and WhatsApp before the sale… If a competitor emerges, it will play out the same way.

