Hacker News | ch_123's comments

> After spending time on Apple’s M1/M2 Macs (coming from a large x86_64 desktop), going back to x86_64 feels like a regression, both in performance and battery life.

I have a Thinkpad X1 with a Lunar Lake CPU, running Fedora. Battery life is comparable to the Mx Macbook Pros I've owned or used. Performance is not as good on synthetic benchmarks, but more than good enough for my needs, even when running VMs or containers.


I have a Strix Halo laptop, an HP ZBook Ultra G1a. (HP is a weird brand. I'm not a loyal customer, but every once in a while they create a product with really good reviews, I buy it, and it delivers.) Performance is almost on par with Apple's best, but battery life under light load is much worse, 6:30 or so :P

Under full load, battery life is an hour or so, similar to Apple actually! If the numbers I've seen are correct, they also use a lot of power under full load.

Also, thank $deity for engineered noise signatures. Whooshing is not so bad, but whining fans are the worst; I last heard them in better laptops several years ago.


The VT1xx keyboards used linear switches, so modern mechanical keyboards with linear switches are a rough approximation. Having tried typing on one in a museum, I recall the VT1xx as relatively scratchy compared with more modern keyboards, although that could have been a wear and tear issue.

The keyboards of the VT2xx/3xx series are awful, and the later ones had rubber dome keyboards which are among the nicer rubber dome keyboards I've tried. I own both a VT320 and VT420, and managed to get a new old stock keyboard for each.


Converting these keyboards to speak to a modern PC involves replacing the controller board with a new one with open source firmware. You can program the keys to send any scan code you like.


I've heard "square root of physical memory" as a heuristic, although in practice I use less than this with some of my larger systems.
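For what it's worth, that heuristic is a one-liner to compute (a hypothetical sizing helper; the RAM figure is just an example, substitute your own):

```shell
# "Square root of physical RAM" swap-sizing heuristic, rounded to whole GiB.
ram_gib=128   # example value
awk -v r="$ram_gib" 'BEGIN { printf "suggested swap: %d GiB\n", sqrt(r) + 0.5 }'
```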


The proper rule of thumb is to make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens.


That's not so much a rule of thumb as an assessment you can only make after thorough experimentation or careful analysis.


It doesn't take that much experimentation, though. Either set up not enough swap and keep increasing it by a little bit until you stop needing to increase it, or set up too much, and monitor your max use for a while (days/weeks), and then decrease it to a little more than the max you used.


I went with "set up 0 swap" and then never needed to increase it. I built my PC in 2023, when RAM prices were still reasonable, stuck 128GiB of ECC DDR5 in, and haven't run into any need for swap. Start with 0, turn on zswap, and if you don't have enough RAM then make a swap file & set it up as backing for zswap.
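The zswap side of that setup looks roughly like this (a hedged sketch; the commands need root, and note that zswap is a compressed cache in front of a real swap device, so it only takes effect once a backing swap exists):

```shell
# Enable zswap at runtime (most distro kernels compile it in).
echo 1 > /sys/module/zswap/parameters/enabled
cat /sys/module/zswap/parameters/enabled      # prints Y when active
# Later, if RAM runs short, add a backing swap file:
fallocate -l 8G /swap.img && chmod 600 /swap.img
mkswap /swap.img && swapon /swap.img
```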


You don't need "thorough experimentation or careful analysis". Just keep free swap space below a few hundred megabytes but above zero.


"Keep free swap space below a few hundred megabytes but above zero" is a good example of a rule of thumb.

"Make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens" is not.


I ran Linux without swap for some years on a laptop with a large-for-the-time amount of RAM (about 8GB). It _mostly_ worked, but sudden spikes of memory usage would render the system unresponsive. Usually it would recover, but in some cases it required a power cycle.

Similarly, on a server where you might expect most of the physical memory to get used, it ends up being very important for stability. Think of VM or container hosts in particular.


I don't get why the anti-swap stance is so prevalent in Linux discussions. Like, what does it hurt to stick 8, 16, or 32 GB of extra "oh fuck" space on your drive?

Either you never exhaust your system RAM, so it doesn't matter; you minimally exhaust it and swap during some peak load, but at least nothing goes down; or you exhaust it all and things start getting OOM'd, which feels bad to me.

Am I out of touch? Surely it's the children who are wrong.


The pro-swap stance has never made sense to me because it feels like a logical loop.

There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

For instance, if I have 8 GB of RAM, people recommend adding 8 GB of swap. But since I like having plenty of memory, I install 16 GB of RAM instead—and yet, people still tell me to use swap. Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

Then, if I upgrade to 24 GB of RAM, the advice doesn’t change—they still insist on enabling swap. I could install an absurd amount of RAM, and people would still tell me to set up swap space.

It seems that for some, using swap has become dogma. I just don’t see the reasoning. Memory is limited either way; whether it’s RAM or RAM + swap, the total available space is what really matters. So why insist on swap for its own sake?


You're mashing together two groups. One claims having swap is good actually. The other claims you need N times ram for swap. They're not the same group.

> Memory is limited either way; whether it’s RAM or RAM + swap

For two reasons: usage spikes and actually having more usable memory. There's lots of unused pages on a typical system. You get free ram for the price of cheap storage, so why wouldn't you?


This rule of thumb is outdated by two decades.

The proper rule of thumb is to make the swap large enough to keep all inactive anonymous pages after the workload has stabilized, but not too large to cause swap thrashing and a delayed OOM kill if a fast memory leak happens.


That's not useful as a rule of thumb, since you can't know the size of "all inactive anonymous pages" without doing extensive runtime analysis of the system under consideration. That's pretty much the opposite of what a rule of thumb is for.


You are right, it is not a rule of thumb, and you can't determine optimal swap size right away. But you don't need "extensive runtime analysis". Start with a small swap - a few hundred megabytes (assuming the system has GBs of RAM). Check its utilization periodically. If it is full, add a few hundred megabytes more. That's all.
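The periodic utilization check is a one-liner on Linux (field names as found in /proc/meminfo; `swapon --show` and `free -h` report the same thing):

```shell
# Report swap utilization from /proc/meminfo (values are in kB).
awk '/^SwapTotal:/ { t=$2 } /^SwapFree:/ { f=$2 }
     END { printf "swap used: %d kB of %d kB\n", t-f, t }' /proc/meminfo
```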


It's not like it's easy to shuffle partitions around. Swap files are a pain, so you need to reserve space at the end of the table. By the time you need to increase swap the previous partition is going to be full.

Better overcommit right away and live with the feeling you're wasting space.


> Swap files are a pain

Easier than partitions:

    mkswap --size 2G --file swap.img
    swapon swap.img


Yeah, until you need to hibernate to one. I understand that calculating file offsets is not rocket science, but still, the dance required is fairly involved and feels a bit fragile.
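For reference, the offset dance looks roughly like this (a hedged sketch for ext4/xfs; needs root and an existing swap file, and btrfs has its own `btrfs inspect-internal map-swapfile -r` helper instead):

```shell
# Find the physical offset of the swap file's first extent for the
# resume_offset kernel parameter (filefrag lists extents; the first data
# row starts with "0:" and its 4th field is the physical start block).
swapfile=/swap.img
offset=$(filefrag -v "$swapfile" | awk '$1 == "0:" { sub(/\.\./, "", $4); print $4 }')
echo "boot with: resume=UUID=<fs-uuid> resume_offset=$offset"
```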


Exactly the opposite. Don't use swap partitions; use swap files, even multiple if necessary. Never allocate too much swap space. It is better to get an OOM kill earlier than to wait for an unresponsive system.


Swap partition is set and forget. Can be detected by label automatically, never fails.

Swap file means fallocating, setting extended attributes (like `nocow`), finding file offset and writing it to kernel params, and other gotchas, like btrfs not allowing snapshotting a subvolume with an active swap file.
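For reference, those gotchas translate into roughly this sequence when creating a swap file on btrfs by hand (a hedged sketch; needs root, and btrfs-progs 6.1+ collapses most of it into a single `btrfs filesystem mkswapfile` command):

```shell
truncate -s 0 /swap.img      # create the file empty so the attribute applies before any data
chattr +C /swap.img          # set the nocow extended attribute
fallocate -l 2G /swap.img    # allocate the space
chmod 600 /swap.img          # swap files must not be world-readable
mkswap /swap.img
swapon /swap.img
```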

Technically it's preferable, won't argue with that.


Hast thou discovered our lord and savior LVM?


> There’s a common rule of thumb that says you should have swap space equal to some multiple of your RAM.

That rule came about when RAM was measured in a couple of MB rather than GB, and hasn't made sense for a long time in most circumstances (if you are paging out a few GB of stuff on spinning drives your system is likely to be stalling so hard due to disk thrashing that you hit the power switch, and on SSDs you are not-so-slowly killing them due to the excess writing).

That doesn't mean it isn't still a good idea to have a little allocated just-in-case. And as RAM prices soar while IO throughput climbs & latency drops, we may see larger swap/RAM ratios being useful again, as RAM sizes are constrained but working sets aren't getting any smaller.

In a theoretical ideal computer, which the actual designs we have are leaky-abstraction-laden implementations of, things are the other way around: all the online storage is your active memory and RAM is just the first level of cache. That ideal hasn't historically ended up being what we have because the disparities in speed & latency between other online storage and RAM have been so high (several orders of magnitude), fast RAM has been volatile, and hardware & software designs are not stable & correct enough, so regular complete state resets are necessary.

> Why? At that point, I already have the same total memory as those with 8 GB of RAM and 8 GB of swap combined.

Because your need for fast immediate storage has increased, so 8-quick-8-slow is no longer sufficient. You are right that this doesn't mean 16-quick-16-slow is sensible, and 128-quick-128-slow would be ridiculous. But no swap at all doesn't make sense either: on your machine imbued with silly amounts of RAM, are you really going to miss a few GB of space allocated just-in-case? When it could be the difference between slower operation for a short while and some thing(s) getting OOM-killed?


Swap is not a replacement for RAM. It is not just slow, it is very, very slow. Even SSDs are ~10^3x slower at random access with small 4K blocks. Swap is for allocated but unused memory. If the system tries to use swap as active memory, it is going to become unresponsive very quickly: 0.1% memory excess causes a 2x degradation, 1% a 10x degradation, 10% a 100x degradation.
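The arithmetic behind those figures is a simple expected-cost model (assuming uniformly random access and an SSD roughly 1000x slower than RAM for small reads), which reproduces the quoted numbers to within rounding:

```shell
# Expected access cost when a fraction p of accesses land in swap,
# normalized so RAM costs 1 and SSD costs 1000 (a rough assumption).
for p in 0.001 0.01 0.1; do
  awk -v p="$p" 'BEGIN { printf "%.1f%% in swap -> %.0fx slowdown\n", p*100, (1-p) + p*1000 }'
done
```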


What is allocated but unused memory? That sounds like memory that will be used in the near future, and we are scheduling an annoying disk load for when it is needed.

You are of course highlighting the problem that virtual addressing was intended to abstract away memory resource usage, but it provides poor facilities for power users to finely prioritize memory usage.

The example of this is game consoles, which didn't have this layer. Game writers had to reserve parts of RAM for specific uses.

You can't do this easily in Linux afaik, because it is forcing the model upon you.


Unused or Inactive memory is memory that hasn't been accessed recently. The kernel maintains LRU (least recently used) lists for most of its memory pages. The kernel memory management works on the assumption that the least recently used pages are least likely to be accessed soon. Under memory pressure, when the kernel needs to free some memory pages, it swaps out pages at the tail of the inactive anonymous LRU.

Cgroup limits and OOM scores allow prioritizing memory usage per process and per process group. The madvise(2) syscall allows prioritizing memory usage within a process.
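As a concrete example of the cgroup side (a hedged sketch; paths assume the cgroup v2 unified hierarchy, and the commands need root):

```shell
mkdir /sys/fs/cgroup/demo
echo 512M > /sys/fs/cgroup/demo/memory.max      # hard cap: exceeding it OOM-kills within the group
echo 384M > /sys/fs/cgroup/demo/memory.high     # soft cap: exceeding it throttles and forces reclaim
echo 0    > /sys/fs/cgroup/demo/memory.swap.max # optionally forbid this group from using swap at all
echo $$   > /sys/fs/cgroup/demo/cgroup.procs    # move the current shell into the group
```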


There is too much focus in this discussion on low memory situations. You want to avoid those as much as possible. Set reasonable ulimits for your applications.

The reason you want swap is because everything in Linux (and all of UNIX really) is written with virtual memory in mind. Everything from applications to schedulers will have that use case in mind. That's the short answer.

Memory is expensive and storage is cheap. Even if you have 16 GB RAM in your box, and perhaps especially then, you will have some unused pages. Paging out those and utilizing more memory to buffer I/O will give you higher performance under most normal circumstances. So having a little bit of swap should help performance.

For laptops hibernation can be useful too.


It's true that if you always have free RAM, you don't need swap. But most people don't have that, because free RAM can always be used as a disk cache. Even if you are just web browsing, the browser is writing to disk stuff fetched from the internet in the hope it won't change, and the OS will be keeping all of that in RAM until no more will fit.

Once the system has used all available RAM for disk cache, it has a choice if it has swap. It can write modified RAM to swap, and use the space it freed for disk cache. There is invariably some RAM where that tradeoff works - RAM used by login programs, and other servers that haven't been accessed in hours. Assuming the system is tuned well, that is all that goes to swap. The freed RAM is then used for disk cache, and your system runs faster - merely because you added swap.

There is no penalty for giving a system too much swap (apart from disk space), as the OS will just use it up until the tradeoff doesn't make sense. If your system is running slow because swap is being overused the fix isn't removing swap (if you did you system may die because of lack of RAM), it's to add RAM until swap usage goes down.

So, the swap recipe is: give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4GB is probably fine for a desktop. Monitor it occasionally, particularly if your system slows down. If swap usage ever goes above 1GB, you probably need to add RAM.

On servers, swap can be used to handle a DDoS from malicious logins. I've seen thousands of ssh attempts happen at once, in an attempt to break in. Eventually the system will notice and firewall the IPs doing it. If you don't have swap, those logins will kill the system unless you have huge amounts of RAM that isn't normally used. With swap it slows to a crawl, but then recovers when the firewall kicks in. So both provisioning swap and having loads of RAM prevent DDoSes from killing your system, but this is in a VM, one costs me far more per month than the other, and I'm trying to fix a problem that happens very rarely.


> There is no penalty for giving a system too much swap (apart from disk space)

There is a huge penalty for having too much swap - swap thrashing. When the active working set exceeds physical memory, performance degrades so much that the system becomes unresponsive instead of triggering OOM.

> Monitor it occasionally, particularly if your system slows down.

Swap doesn't slow down the system. It either improves performance by freeing unused memory, or the system becomes completely unresponsive when you run out of memory. Gradual performance degradation never happens.

> give your system so much swap you are sure it exceeds the size of stuff that's running but not used. 4Gb is probably fine for a desktop.

Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.


> There is a huge penalty for having too much swap - swap thrashing.

Thrashing is the penalty for using too much swap. I was saying there is no penalty for having a lot of swap available, but unused.

Although thrashing is not something you want happening, if your system is thrashing with swap, the alternative without it is the OOM killer laying waste to the system. Out of those two choices I prefer the system running slowly.

> Gradual performance degradation never happens.

Where on earth did you get that from? It's wrong most of the time. The subject was very well researched in the late 1960s and 1970s. If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff. This is a modern example, but there are lots of papers from that era showing the usual gradual response followed by falling off a cliff: https://yeet.cx/r/ayNHrp5oL0. A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356

The underlying driver for that behaviour is the disk system being overwhelmed. Say you have 100 web workers that spend a fair chunk of their time waiting for networked database requests. If they all fit in memory the response is as fast as it can be. Once swapping starts, latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database. Eventually the increasing swapping hits the disk's IOPS limit, active memory is swapped out, and performance crashes.

The only reason I can think the gradual slowdown is not obvious to you is that modern SSDs are so fast the initial degradation isn't noticeable to a desktop user.

> Don't do this. Unless hibernation is used, you only need a few hundred megabytes of free swap space.

As you seem to recognise, having lots of swap on hand and unused, even if it's terabytes of it, does not affect performance. The question then becomes: what would you prefer to happen in those rare times when swap usage exceeds the optimal few hundred megabytes? Your options are: your desktop app gets randomly killed by the OOM killer and you perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app. When that happens, it seems popular to blame the swap system for slowing the system down because they temporarily exceeded the capacity of their computer.


> Thrashing is the penality for using too much swap. I was saying there is no penality for having a lot of swap available, but unused.

Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.

> the OMM killer laying waste to the system. Out of those two choices I prefer the system running slowly.

In a swap thrashing event, the system isn't just running slowly but totally unresponsive, with an unknown chance of recovery. The majority of people prefer OOM killer to an unresponsive system. That's why we got OOM killer in the first place.

> If load ramps up gradually you get a gradual slowdown until the working set is badly exceeded, then it falls off a cliff.

Random access latency difference between RAM and SSD is ~10^3x. When the active working set spills out into swap, a linear increase of swap utilization leads to a dramatic performance degradation. Assuming random access, simple math gives that a 0.1% excess causes a 2x degradation, 1% a 10x degradation, 10% a 100x degradation.

> A seminal paper on the subject: https://dl.acm.org/doi/pdf/10.1145/362342.362356

This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.

> https://yeet.cx/r/ayNHrp5oL0.

WTF is this graph supposed to demonstrate? Some workload went from 0% to 100% of swap utilization in 30 seconds and got OOM-killed. This is not going to happen with a large swap.

> Once swapping starts latency increases gradually as more and more workers are swapped in and out while they wait for clients and the database

In practice, you never see constant or gradually increasing swap I/O in such systems. You either see zero swap I/O with occasional spikes due to incoming traffic or total I/O saturation from swap thrashing.
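Those two regimes are easy to tell apart from the kernel's cumulative swap I/O counters rather than swap occupancy (`vmstat 5` shows the same as its si/so columns):

```shell
# Sample the cumulative swap-in/swap-out page counters twice; a large delta
# means active swap I/O (thrashing), while a big "used" figure in `free`
# with no delta here is just parked, inactive pages.
grep -E '^pswp(in|out) ' /proc/vmstat
sleep 2
grep -E '^pswp(in|out) ' /proc/vmstat
```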

> Your options are get your desktop app randomly killed by the OOM killer and perhaps lose your work, or the system slows to a crawl and you take corrective action like closing the offending app.

You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap. It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.


> Unless you overprovision memory on a machine or have carefully set cgroup limits for all workloads, you are going to have a memory leak and your large unused swap is going to be used, leading to swap thrashing.

You seem to be very certain about that inevitable memory leak. I guess people can make their own judgements about how inevitable they are. I can't say I've seen a lot of them myself.

But the next bit is total rubbish. A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there. It doesn't thrash. What actually happens if the leak continues is swap eventually fills up, and then the OOM killer comes out to play. Fortunately it will likely kill the process that is leaking memory.

I've used this behaviour to find which process had a slow leak (it had to be running for months). This has only happened once in decades mind you - these leaks aren't that common. You allocate a lot of swap, and gradually it is filled by the process that has the leak. Because swap is so large, once the process leaking memory fills it, it stands out like dogs balls because its memory consumption is huge.

You notice all of this because, like all good sysadmins, you monitor swap usage and receive alerts when it gets beyond what is normal. But you have time - the swap is large, the system slows down during peaks but recovers when they are over. It's annoying, but not a huge issue.

> In a swap thrashing event, the system isn't just running slowly but totally unresponsive

Again, you seem to be very certain about this. Which is odd, because I've logged into systems that were thrashing, which means they didn't meet my definition of "totally unresponsive". In fact I could only log in because the OOM killer had freed some memory. The first couple of times the OOM killer took out sshd and I had to reach for the reset button, but I got lucky one day and could log in. The system was so slow it was unusable for most purposes - but not for the one thing I needed, which was to find out why it had run out of memory. Maybe we have different definitions of "totally", but to me that isn't "totally". In fact if you catch it before the OOM killer fires up and kills god knows what, these "totally unresponsive systems" are salvageable without a reboot.

> This paper discusses measuring stable working sets and says nothing about performance degradation when your working set increases.

Fair enough. Neither link was good.

> You seem to be unaware that swap thrashing events are frequently unrecoverable, especially with a large swap.

Perhaps some of them are, but for me it wasn't the swapping that did the system in. It is always the OOM killer.

> It is better to have a typical culprit like Chrome OOM-killed than to press the reset button and risk filesystem corruption.

The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly. Despite what you say, the reset button won't corrupt modern journaled filesystems as they are pretty well debugged. But applications are a different story. If they get hit by a reset or the OOM killer while they are saving your data and aren't using sqlite as their "fopen()", they can wipe the file you are working on. You don't just lose the changes. The entire document is gone. This has happened to me.

I'd take the system taking a few minutes to respond to my request to kill a misbehaving application over the OOM killer any day.


> You seem to be very certain about that inevitable memory leak.

It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event. Read other comments.

> A memory leak does not lead to thrashing. By definition if you have a leak the memory isn't used, so it goes to swap and stays there.

You assume that leaked memory is inactive and goes to swap. This is not true. Chrome, Gnome, whatever modern Linux desktop apps leak a lot, and it stays in RSS, pushing everything else into swap.

> if the leak continues is swap eventually fills up, and then the OOM killer comes out to play

You assume that the OOM killer comes out to play in time. The larger the swap, the longer it takes for the OOM killer to trigger, if ever, because the kernel OOM-killer is unreliable, so we have a collection of other tools like earlyoom, Facebook oomd and systemd-oomd.

> I've logged into systems that were thrashing

It means that the system wasn't out of memory yet. When it is unresponsive, you won't be able to enter commands into an already open shell. See other comments here for examples.

> The OOM killer on the other hand leaves the system in some undefined state. Some things are dead. Maybe you got lucky and it was just Chrome that was killed, but maybe your sound, bluetooth, or DNS daemons have gone AWOL and things just behave weirdly.

This is not true. By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system. By default, systemd, ssh and other socket-activated systemd units are protected from OOM.
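For what it's worth, the selection inputs are visible per process, and util-linux's `choom` can bias them (treat the exact paths as Linux-specific):

```shell
# The kernel's current "badness" score for this shell (roughly proportional
# to RSS+swap; higher means killed first) ...
cat /proc/self/oom_score
# ... and the admin-settable bias in [-1000, 1000]; -1000 exempts entirely:
cat /proc/self/oom_score_adj
# util-linux ships `choom` as a friendlier interface, e.g. choom -p <pid> -n 500
```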


> It is fashionable to disable swap nowadays because everyone has been bitten by a swap thrashing event.

If they disable swap they will get hit by the OOM killer. You seem to prefer it over slowing down. I guess that's a personal preference. However, I think it is misleading to say people are being bitten by a swap thrashing event. The "event" was them running out of RAM. Unpleasant things will happen as a consequence. Blaming thrashing or the OOM killer for the unpleasant things is misleading.

> You assume that leaked memory is inactive and goes to swap. This is not true.

At best, you can say "it's not always true". It's definitely gone to swap in every case I've come across.

> It means that the system wasn't out of memory yet.

Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!

> When it is unresponsive, you won't be able to enter commands into an already open shell.

Again that's just plain wrong. I have entered commands into a system that is thrashing. It must work eventually if thrashing is the only thing going on, because when the system thrashes the CPU utilization doesn't go to 0. The CPU is just waiting for disk I/O after all, and disk I/O is happening at a furious pace. There's also a finite amount of pending disk I/O. Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.

If the system does die other things have happened. Most likely the OOM killer if they follow your advice, but network timeouts killing ssh and networked shares are also a thing. If you are using Windows or MacOS, the swap file can grow to fill most of free disk space, so you end up with a double whammy.

Which brings me to another observation. In desktop OSes, the default is to provide it, and lots of it. In Windows swap will grow to 3 times RAM. This is pretty universal - even Debian will give you twice RAM for small systems. The people who decided on that design choice aren't following some folklore they read in some internet echo chamber. They've used real data: they've observed that when swapping starts being used, systems do slow down, giving the user some advance warning, and that when thrashing starts, systems can recover rather than die, which gives the user an opportunity to save work. It is the right design tradeoff IMO.

> By default, the kernel OOM-killer selects one single largest (measured by its RSS+swap) process in the system.

Yes, it does. And if it is a single large process hogging memory you are in luck - the OOM killer will likely do the right thing. But Chrome (and now Firefox) is not a single large process. Worse, if the out-of-memory condition is caused by, say, someone creating zillions of logins, they are so small they are the last thing the OOM killer chooses. Shells, daemons, all sorts of critical things go first. "Largest process first" is just a heuristic, one which can be, and in my case has been, wrong. Badly wrong.


> You seem to prefer it over slowing down.

An unresponsive system is not a slowdown. You keep ignoring that.

>> You assume that leaked memory is inactive and goes to swap. This is not true.

> At best, you can say "it's not always true".

You skipped my sentence that was specifying the scope when "it's not always true", and now you pretend that I'm making a categorical generalized statement. This is a silly attempt at a "strawman".

>> It means that the system wasn't out of memory yet.

> Of course it wasn't out of memory. It had lots of swap. That's the whole point of providing that swap - so you can rescue it!

Swap is not RAM. When the free RAM is below the low watermark, the kernel switches to direct reclaim and blocks tasks that require free memory pages. Blocking of tasks happens regardless of swap. If you are able to log in and fork a new process, the system is not below the low watermark.

>> When it is unresponsive, you won't be able to enter commands into an already open shell.

> Again that's just plain wrong.

You are in denial.

> Provided no new work is arriving (time for a cup of coffee?) it will get done, and the thrashing will end.

This is false. A system can stay unresponsive much longer than a cup of coffee. There is no guarantee that the thrashing will end in a reasonable time.

> even Debian will give you twice RAM for small systems.

> The people who decided on that design choice aren't following some folk law on they read in some internet echo chamber.

That 2x RAM rule is exactly that - old folklore. You can find it in SunOS/AIX/etc manuals or Usenet FAQs from the 80s and early 90s, before Linux existed.

> They've used real data.

You're hallucinating like an LLM. No one did any research or measurements to justify that 2x rule in Linux.


Another factor other commenters haven't mentioned, although the article does bring it up: you may disable swap and you will still get paging behavior regardless, because in a pinch the kernel will reclaim pages that are mmapped to files - most typically binaries and libraries. Which means the process in question will incur a mapped-page read the next time it is scheduled. But of course you're out of memory, so the kernel will need to page out another process's code page to make room, and when that process next schedules... Etc.

This has far worse degradation behavior than normal swapping of regular data pages. That at least gives you the breathing space to still schedule processes when under memory pressure, such as whichever OOM killer you favor.


Binaries and libraries are not paged out. Being read-only, they are simply discarded from the memory. And I'll repeat, actively used executable pages are explicitly excluded from reclaim and never discarded.


The reason you're supposed to have swap equal in size to your RAM is so that you can hibernate, not to make things faster. You can easily get away with far less than that because swap is rarely needed.


> so that you can hibernate

The “paging space needs to be X*RAM” and “paging space needs to be RAM+Y” predate hibernate being a common thing (even a thing at all), with hibernate being an extra use for that paging space not the reason it is there in the first place. Some OSs have hibernate space allocated separately from paging/swap space.


I do wish there was a way to reserve swap space for hibernation that doesn't contribute to the virtual memory. Otherwise, by construction, the hibernation space is not sufficient for the entire virtual memory space, and hibernation will fail when virtual memory is getting full.


this. i don't even want swap for my apps. they allocate too much memory as it is. i'd rather they be killed when the memory runs out or simply be prevented from allocating memory that's not there. the kind of apps that can be safely swapped out are rarely using much memory anyway.

but i do want hibernate to work.


You're implying that people are telling you to set up swap without any reason, when in fact there are good reasons - namely dealing with memory pressure. Maybe you could fit so much RAM into your computer that you never hit pressure - but why would you do that vs allocating a few GB of disk space for swap?

Also, as has been pointed out by another commenter, 8GB of swap for a system with 8GB of physical memory is overkill.


I'm also in the GP's camp; RAM is for volatile data, disk is for data persistence. The first "why would you do that" that needs to be addressed is why volatile data should be written to disk. And "it's just a few % of your disk" is not a sufficient answer to that question.


> RAM is for volatile data, disk is for data persistence.

Genuinely curious where this idea has come from. Is it something being taught currently?


No, not currently -- since the start of computers. This is quite literally part of Computing 101; see https://web.stanford.edu/class/cs101/lecture02.html#/9 , slides 10-12.

You can ask your favourite search engine or language fabricator about the differences between RAM and disk storage, they will all tell you the same thing. Frankly, it's kind of astonishing that this needs to be explained on a site like HN.


I have no idea where on those slides it says non-volatile storage should not be used for non-permanent, temporary data.

It does note main differences (speed, latency, permanence). How does that limit what data disk can be used for?

What would one use optane DIMMs for?

Also, if my program requires a huge working set to process its data, why would I spend the effort to implement my own paging to temporary working files, instead of allocating a ridiculous amount of memory and letting the OS manage it for me? What is the benefit?
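To the last point: you don't have to implement your own paging. A large anonymous `mmap` region is exactly "allocate a lot and let the OS manage it": the kernel commits physical pages lazily and can push cold pages out to swap under pressure. A minimal Python sketch:

```python
import mmap

# Map 64 MiB of anonymous memory. No physical pages are committed until
# they are touched, and cold pages can be swapped out under pressure.
buf = mmap.mmap(-1, 64 * 1024 * 1024)
buf[0:5] = b"hello"       # touching a page makes it resident
print(bytes(buf[0:5]))    # -> b'hello'
buf.close()
```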


Because of cost - particularly given the current state of the RAM market. In order to have so much memory that you never hit memory spikes, you will deliberately need to buy RAM to never be used.

Note that simply buying more RAM than what you expect to use is not going to help. Going back to my post from earlier, I had a laptop with 8GB of RAM at a time when I would usually only need about 2-4GB of RAM for even relatively heavy usage. However, every once in a while, I would run something that would spike memory usage and make the system unresponsive. While I have much more than 8GB nowadays, I'm not convinced that it's enough to have completely outrun the risk of this sort of behaviour recurring.


how much swap do you have? i have 16GB now, and 16GB ram. i had a machine before with 48GB ram. obviously having more ram and no swap should perform better than the same amount of memory split into ram and swap.


8, 16, or 32 GB of swap space without cgroup limits would get the system into swap thrashing and make it unresponsive.


I think it's some kind of misplaced desire to be "lightweight" and avoid allocating disk space that cannot be used for regular storage. My motivation way back when for wanting to avoid swap was concern about SSD wear, but that was solved a long time ago.


Swap causes thrashing, making the whole system unusable, instead of a clean OOM kill


IMO OOM killing should be reserved for single processes misbehaving. When a lot of different applications just use a decent amount of memory and exhaust the system RAM swapping to disk is the appropriate thing to do.


When you set cgroup limits, you tell the kernel how to determine when a process is misbehaving and needs to be OOM-killed.


swap causes thrashing if you have too large swap and no cgroup limits.
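For the record, on a systemd-based distro setting such a limit can be as simple as a drop-in for the service (or `systemd-run -p MemoryMax=...` for ad-hoc commands). A sketch; the unit name and values are placeholders, not recommendations:

```ini
# /etc/systemd/system/myapp.service.d/memory.conf (hypothetical unit name)
[Service]
MemoryHigh=2G     # kernel starts reclaiming/throttling this cgroup above this
MemoryMax=3G      # hard cap: exceeding it gets the cgroup OOM-killed
MemorySwapMax=1G  # bound how much of this unit may be pushed to swap
```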


1) In the Microsoft days I would have a lot of available RAM, but Windows would still aggressively swap, and I would get enraged when switching to an app that had to swap back in while I had 4GB of memory free.

2) the os tried to be magical, but a swap thrash is still crap... I would much rather oom kill apps than swap thrash. For a desktop user: kill the fucking browser or electron apps, don't freeze the system/ui.


I had a similar experience with Kubuntu on an XPS 13 from 2016 with only 8GB of RAM: the system would suddenly freeze so hard that a hard reboot was required. While looking for the cause, I noticed that the system had only 250 MB of swap space. After increasing that to 10 GB there have been no further freezes so far.
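For anyone in the same situation: enlarging swap doesn't require repartitioning, a swap file works fine on most modern filesystems. A sketch of the usual steps, demonstrated on a throwaway temp file so it runs unprivileged; for real use you would target something like /swapfile with a real size, and the swapon/fstab steps need root:

```shell
# Demonstrated on a temp file; substitute a real path like /swapfile
# (and a real size like 10G) for actual use.
swapfile=$(mktemp)
fallocate -l 16M "$swapfile"   # reserve the space (or: dd if=/dev/zero ...)
chmod 600 "$swapfile"          # swap files must not be world-readable
mkswap "$swapfile"             # write the swap signature
# As root you would then run:
#   swapon /swapfile
#   echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
rm "$swapfile"
```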


They've been trying for nearly 30 years:

https://en.wikipedia.org/wiki/Intel740


Some folks have had success running it on certain server hardware (Usually HPE Proliant). There are no graphics drivers for x86, so it is X forwarding only.


> SP2 came out it was hated again with renewed vigor

Was it? My memory is that SP2 was the point at which most outlets considered it to be "good".


No, SP2 was received about as well as Windows 11 was. Googling "xp sp2 problems" gets you forum gold such as https://bobistheoilguy.com/forums/threads/windows-xp-sp2-suc...

But I don't really need sources, as Eldond said: "I was there, Gandalf. I was there three thousand years ago."


I would like to see ReactOS succeed for various reasons, mainly philosophical. On the other hand, for practical real-world use cases, it has to compete with several alternative solutions:

1. Just use Windows 11. Yes, it sucks and MS occasionally breaks stuff - but at least hardware and software vendors will develop their code against Win 11 and test it. In other words, you have the highest likelihood that your computer will work as expected with contemporary Windows applications and drivers.

2. Use an older version of Windows. If you want to use old hardware or software, odds are you will get the best experience with whatever version of Windows they were developed/tested against. You have to accept the lack of support for modern software, and you will need to take appropriate security measures such as not connecting it to the internet - but at the same time, it's unlikely that your Windows 98 retro gaming rig is your only computer, so that's probably an acceptable tradeoff.

3. Run WINE on top of Linux (or some other mature open source operating system). This might not be a good solution for the average person, but ticks the box for people who feel strongly pro-open source, or anti-Microsoft. Since Windows compatibility is dictated by Windows' libraries and frameworks and not the kernel, compatibility is likely to be comparable to ReactOS.

I am not saying that this covers every possible use case for ReactOS, but I would posit it covers enough that the majority of people who might contribute or invest into ReactOS will instead pick one of the above options and invest their time and energy elsewhere.


IIRC ReactOS uses and contributes heavily to WINE. So in many ways your #3 isn't far from using ReactOS, and if done correctly it'll be friendlier for the average person than Linux itself.


No, the Wine developers refuse to accept contributions from ReactOS developers or even people who have seen ReactOS code[0]. So any improvements go one way only.

[0] https://gitlab.winehq.org/wine/wine/-/wikis/Clean-Room-Guide... (last "Don't" entry)


So they don't use LLMs to help code at all?

LLMs have likely seen the leaked Windows source code, let's be honest...


Of course not. You would be surprised how many developers don't even consider using an LLM in their workflow, myself included. Can't wait for this hype to end.


from what i have experienced in the last couple of weeks, it is not going to. There is a new paradigm.


Oh it will.

Firstly, neither OpenAI nor Anthropic is profitable, by a wide margin — investors are going to get impatient at some point.

Secondly, people that aren't enthusiastic about this whole thing are already experiencing something of an AI fatigue with all the AI features violently shoved into them by most software products they use. Being involuntarily subjected to slop in various online spaces can't be good either.

Thirdly, remember NFTs? So many people swore they were The Future™... until they weren't. But at least in that case it was much more obvious how stupid the whole idea is. The scale of the hype was also several orders of magnitude less.


Even if all major providers close down, it doesn't remove what's already out there. GLM / MiniMax / DeepSeek / gpt-oss may not be at the same level as the current frontier, but you can download them and they're still very capable.


Crazy, stupid ideas like cars with only touchscreens have still taken a decade to arrive and then be recognized as ill-advised, even though anyone driving such a car could tell how bad the interface is. We are still not fully out the other side.

So while OpenAI or Anthropic are maybe not profitable today, they've got at least 5 years to figure it out. And there is already talk of inserting ads into the "chat", but hopefully that does not work!

But really, LLMs are useful (yes, sometimes only in appearance, but sometimes for real), and with that, there will continue to be investment into them until they are made profitable.


My PC can give decent code recommendations locally, not relying on anything in the cloud.


Google can run the models profitably, thanks to their custom TPUs. The rest? No idea.


Google and Meta can both subsidize their AI divisions with their ad money.


That too, but I think someone said that Google's stated AI costs are their true costs, unlike the others'.


new != good

therefore

new != better


Fascinating. Direct link to upstream source: https://bugs.winehq.org/show_bug.cgi?id=50464#c6


You are saying that ReactOS doesn't use clean room code? Source?


I'm not saying anything; I posted the link to the Wine developers' claim for why they don't accept contributions from ReactOS developers, since the post I replied to said that ReactOS contributed to Wine.


I believe the integrity of ReactOS's clean room reverse engineering has been called into question in the past when it was found that there were some header or code files with sections that matched leaked Windows Server 2003 code or something like that. Can't recall for sure though.


The article mentions this:

"In January 2006, concerns grew about contributors having access to leaked Windows source code and possibly using this leaked source code in their contributions. In response, Steven Edwards strengthened the project’s intellectual property policy and the project made the difficult decision to audit the existing source code and temporarily freeze contributions."

The allegations have been taken seriously and since then the procedure for accepting contributions include measures to prevent such further events from occurring. If you or anyone else happen to have any plausible suspicion, then please report it to the ReactOS team, otherwise keeping alive this kind of vague and uncertain connection between some Windows code leakage and ReactOS fits the very definition of FUD: https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt Please stop.


It's common anti-ReactOS slander.

I keep seeing it pop up over the years. Never substantiated.


They posted the source for their claim (which is different from yours). Click and read it.


I read it, and "not appropriate for Wine" was a non-answer, so I followed the footnote link and got to the same discussion:

https://bugs.winehq.org/show_bug.cgi?id=50464#c6

Which isn't really a discussion, it just ends with the same question "why not?".


>it'll be friendlier for the average person than Linux itself.

I think the myth that Windows is easier needs to die. The builds targeted at Windows users are very easy to use; you would likely go into the terminal about as often as you would go into the Command Prompt on Windows, and the "average person" spends more time on their non-Windows phone than they do in Windows.

I am a 30+ year Windows developer who thought he would never move, but I migrated literally a week ago. The migration was surprisingly painless and the new system feels much more friendly and, surprisingly, more stable. I wrote it up on my blog, and was going to follow it up with another post about all the annoyances in my first full week, but they were so petty I didn't bother.


You are still in the honeymoon phase. I have seen a lot of those blog posts in the last few months.

In a few weeks you will bump into something that isn't simple and friendly, and you will curse that stupid Linux: something that trivially works in Windows and is impossible or insanely hard in Linux. That is often the time people go back. Old habits die hard.

But still you are 100% right. Windows is not easier. I know because I went from dos to linux and only occasionally dabbled in windows. And I have exactly the same sort of trouble as soon as I try to do something non trivial in windows. Including bumping into stuff that should be trivial but suddenly is impossible or insanely hard.

For years I have seen people say that windows is easier, while actually windows is just more familiar.

My (completely non computer savvy) parents and in-laws are on ubuntu/mint since 2009 and it was the best decision ever to switch them over. And they don't understand why people say linux is hard either (though my father in law still calls it 'Ubantu Linox' for some reason :-P )

At the start I had a small doubt whether I should push them to macOS (OS X at the time), as back then Apple's fanatical dedication to user-friendliness paid off. But I decided against it because I didn't feel like paying Apple prices for my own hardware, and it seemed ill advised to manage their systems while not using them myself. I'm very glad about that, because Apple has gone downhill immensely since ~2009 (imo)


I agree. One can just install Linux Mint or Fedora or anything and then Linux is just as friendly to use. You got a desktop, you can use your mouse to start up the browser, install applications with a mouse click, and so forth. You could do without opening up the terminal. Functionally the same as using Windows.


I would like to see many non-technical people doing that, and then see their experience trying to watch Netflix, Prime, HBO, YouTube...

Linux experience is ok, when one knows UNIX and is technically skilled.


My parents, and my wife's parents, have been doing just that without any trouble whatsoever.

Just browse to netflix.com and log in. Not any different than in windows.

My parents use mail, Firefox and LibreOffice Writer. That's about all they need, and it works fine and is way more stable and hassle-free on Linux than it was when they were still using Windows (admittedly quite long ago).

And if you are talking about seeing people install the OS, people can't do that for windows either.


Pretty much my experience too. Not just with my parents but with many other adults too.


It certainly is, because I still don't see GNU/Linux desktops on sale, other than the short lived netbooks movement.

So normal people have stores with other people that they can talk to when they have problems, or just drag their computer into the store.

With Linux it is always the relative that happens to be around, or drives in on purpose, and has to manually install the <insert favourite distro> of the day.


> With Linux it is always the relative that happens to be around

That's certainly true. And it's a chicken-vs-egg problem that's hard to solve. But it doesn't really have anything to do with which system is easier to use. It has much more to do with Microsoft's past unfair business practices (charging shops more for Windows licenses if they happened to sell computers with something other than Windows on them comes to mind) and the slowness of retail in adapting. Selling computers is way down (most people don't need more than a tablet/phone), and selling in physical stores is way down (it has moved online). Shops are not going to spend money on training their salespeople in Linux. Most of the time they won't even really know Windows.


> I still don't see GNU/Linux desktops on sale

Oh come on. Play fair. You are perhaps the single HN commentator whose input I most respect, because so many of your opinions overlap my own...

But that is not fair or right.

GNU/Linux desktops (and laptops) on sale:

https://itsfoss.com/get-linux-laptops/

Fairly prominent Linux-only hardware vendors doing R&D:

https://system76.com/

https://www.tuxedocomputers.com/en

A pure Linux-only consumer PC on mainstream sale:

https://store.steampowered.com/steamdeck

A compatible 3rd party machine of the same design:

https://www.lenovo.com/gb/en/p/handheld/legion-go-series/leg...

And of course retail GNU/Linux machines that cost 1/4 of a cheap Apple Mac and yet have outsold them by revenue not number of units for nearly a decade now:

https://www.google.com/chromebook/shop-chromebooks/

Yes this is absolutely happening. This is a real international market with sales in the hundreds of millions of units. This is not some tiny obscure niche that can be skipped over.


You can actually get a Thinkpad X1 Carbon with Linux on Lenovo US (and many other countries) pages: https://www.lenovo.com/us/en/configurator/cto/index.html?bun...

I am sure it even applies to laptops like T and P series too.

And Dell was the pioneer of the big makes with XPS 13 (and they still seem to do them: https://www.dell.com/en-us/shop/dell-laptops/xps-13-laptop/s...).


You are so eager to reply that you haven't even read the whole comment.

> So normal people have stores with other people that they can talk to when they have problems, or just drag their computer into the store.

Which of those online stores have a physical address for the normal people to do as per my comment?

Linux forums have enough complaints about those fairly prominent Linux-only vendors, even though they are supposed to control the whole stack.

And they also fall into each having their own <favourite distro>, the other part of the comment that you missed as well.

Normal people aren't using SteamDecks for their daily computing activities.

I have used Linux in various forms since 1995, and yet I am tired of trying out such alternatives. The only thing that makes me consider it again is breaking the dependency on US tech, and even that isn't really happening, given how many Linux contributions are funded out of the pockets of US Big Tech.


> You are so eager to reply that you haven't even read the whole comment.

Of course I did. I didn't address your objections because I think they don't hold up, that is why.

> Which of those online stores have a physical address for the normal people to do as per my comment?

Leaving out Apple as computers are not its primary product line any more... that leaves Lenovo, the biggest PC vendor in the world, followed by HP, Dell, Asus, Acer.

https://www.statista.com/statistics/267018/global-market-sha...

That is the top 5.

Only Apple has retail shops worldwide. I do not know of physical stores for any of the others. Maybe some did once, years ago, but that stuff is fading away and dying now. It's all going online.

You can certainly buy Chromebooks in physical stores. Do they fix them? Only warranty repairs, but the point of Chromebooks is that you don't keep your stuff on them, and you don't upgrade them. Rightly or wrongly (that is, mostly wrongly) they are disposable tech.

It is perfectly possible to buy a computer with Linux on it: a choice of Linuxes, from a choice of vendors, in almost any country. No you can't walk into a shop and try it, but you mostly can't from any vendor. Online sales are the default for many things now. No you can't walk into the vendor's shop and get it fixed, but you can't for any of global PC brands either.

If you want that, go to a local small business. If you want Linux, go to a local small business. Same thing.

Sure there are different flavours and distros. That is _not_ a weakness of Linux. Choice is a good thing, even if sometimes it is scary. You can choose your toothpaste and your clothes and your car as well. We manage.


> I wrote it up on my blog, and was going to follow it up with another post about all the annoyances in my first full week, but they were so petty I didn't bother.

May we have a link, please?


Just Finished => https://rodyne.com/?p=3524 - I guess I'm still in the honeymoon phase, as another poster so eloquently put it.


This isn't really my arena, but I did happen to recently compare the implementation of ReactOS's RTL (Run Time Library) path routines [0] with Wine's implementation [1].

ReactOS covers a lot more of the Windows API than Wine does (3x the line count and defines a lot more routines like 'RtlDoesFileExists_UstrEx'). Now, this is not supposed to be a public API and should only be used by Windows internally, as I understand it.

But it is an example of where ReactOS covers a lot more API than Wine does or probably ever will, by design. To whom (if anyone) this matters, I'm not sure.

[0] https://github.com/reactos/reactos/blob/master/sdk/lib/rtl/p...

[1] https://github.com/wine-mirror/wine/blob/master/dlls/ntdll/p...


That's an interesting data point. I wonder if there is a hard technical reason why that logic could not be added to WINE, or if the WINE maintainers made a decision not to implement similar functionality.


There is not a hard technical reason, just different goals. WINE is a compatibility layer to run Windows apps, and thus most improvements end up fixing an issue with a particular Windows application. It turns out that most Windows applications are somewhat well-behaved and restrict themselves to calling public win32 APIs and public DLL functions, so implementing 100% coverage of internal APIs wouldn't accomplish much beyond exposing the project to accusations of copyright infringement.

IIRC, there is also US court precedent (maybe Sony v. Connectix?) that protects the practice of reverse-engineering external hardware/software systems that programs use in order to facilitate compatibility. WINE risks losing this protection if they stray outside of APIs known to be used (or are otherwise required) by applications.


There's also another partial Win32 reimplementation in retrowin32, with the different goal of being a Windows emulator for the web, not for Linux or as alternate OS, at https://evmar.github.io/retrowin32/ It thus has an even more sparse path/fileapi.h implementation [2] than WINE and ReactOS. Written in Rust.

[2] https://github.com/evmar/retrowin32/blob/main/win32/dll/kern...


Yes, exactly my point - thanks for elaborating on it.


Why not use Linux with WINE and that Chicago95 theme and call it a day?


That's (part of) my point. A project like ReactOS which clones Windows down to the kernel level solves for a very small set of practical use cases which are not covered by real Windows, or Linux+WINE.

It's worth noting that 30 years ago, there was a definite advantage to an open source operating system which could reuse proprietary Windows drivers - even Linux had a mechanism for using Windows drivers for certain types of hardware. Nowadays, Linux provides excellent support for modern PC hardware with little to no tinkering required in most cases. I have seen many cases where Linux provided full support out-of-the-box for a computer, whereas Windows required drivers to be downloaded and installed.


it causes you physical pain to say "NDIS", too?


I think WINE on Linux has won as the option to consider if you want to run Windows applications on a non-Windows OS without loading Windows into a VM.


> accept the lack of support for modern software

Running MS SQL 2008 R2 and MS Server 2016 in production here.

What "modern software support" do I lack here?


> What "modern software support" do I lack here?

There is a growing list of software which has discontinued support for Windows 10 (or the Server versions thereof) in its latest releases. I'm not sure what your example of running a ~16 year old version of SQL Server on Server 2016 demonstrates.

To my original post - if you only need to run an old version of a software package, then an old version of Windows is fine. Just because something is old, it doesn't mean that it is not useful.


Software updates?


The system runs only one app and does not serve public internet content; it does not get any updates at all, only this one app is updated every few months.

We do not need updates here?


software only ever gets better


Sigh, I hate to agree with you. On a slight tangent, I was exploring what file system I could use safely with different OSes, so that I could keep my personal data on it and access (or add to) it from other OSes, and incredibly NTFS is the only feature-rich cross-platform filesystem that works reliably on all the major OSes! None of the open source options (ZFS, Btrfs, ext, etc.) work reliably on other OSes (there are solutions to make them cross-platform, but many have been in beta for years now). It's the Windows effect: open source developers are putting so much effort into supporting Windows tech because of its popularity that, unknowingly, they are also helping make it even more entrenched, to the detriment of better open source solutions.


Last time I looked at this, I think I determined that exFAT also had reasonable support for Windows, Linux, and MacOS? I guess it might not be "feature rich", but it's at least suitable for a USB drive or something. This also isn't a counterpoint to your argument that Windows tech is better supported given its origins, but it might be useful for some people depending on their intended use.


That's a good tip, and I do use exFAT on some pendrives. But due to the lack of journaling, and its buggy behavior on macOS ( https://www.linkedin.com/pulse/exfat-file-system-save-henk-s... ), I wouldn't recommend it for long-term use on any fixed drives with data you care about. My research led me to conclude that the NTFS implementations are the least buggy non-native filesystems on Linux and macOS.
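For what it's worth, recent Linux kernels (5.15+) ship Paragon's in-kernel `ntfs3` driver, which in my experience is the least fragile way to mount NTFS there. A sketch of an /etc/fstab entry; the device path, mount point, and uid/gid are placeholders for illustration:

```ini
# /etc/fstab -- mount a shared NTFS data partition with the in-kernel
# ntfs3 driver; /dev/sdb1 and /mnt/shared are hypothetical.
/dev/sdb1  /mnt/shared  ntfs3  defaults,uid=1000,gid=1000,noatime  0  0
```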


Interesting, I wasn't aware of the exFAT issues on MacOS. I can't remember exactly the last time I tried to use a USB drive like this, so it's possible that it might be further back than I thought and it was when NTFS didn't have write support out of the box for me on Linux.


Yes, interestingly I remember buying portable hard drives 20 years ago that were formatted as FAT or some variant (I don't remember which one exactly).

Last time I bought a portable hard drive it was formatted as NTFS.


If MS abandons WinNT, then people will likely continue to use the existing versions of Windows which are out there for any existing software (just as people continue to use MS-DOS and Win 9x for old games and software).

As for new software - I think it's open to debate just how much new Win32 software will be created after a hypothetical abandonment by Microsoft of Windows.

