Hacker News | fluoridation's comments

Don't forget Windoze.

In French we have Windaube (pronounced "Windob").

Daube is a slang word for something of low quality.


> Daube is a slang word for something of low quality.

Which is fun because it's also a really delicious dish from Provence (south of France) made with beef that has been marinated for multiple hours in red wine.


Don't forget Winblows

Another oldie

"If you play the Win98 CD backwards, it summons Satan. It's worse when you play it forwards - it installs Windows"

Ah, good times... :-)


I have a "quotes.txt" from slashdot days with some MS jabs in it:

> Last week, I left my 2 XP CDs on my dashboard in plain view. Someone broke into my car and left 2 more.

> The day Microsoft makes a product that doesn't suck is the day they make a vacuum cleaner.

> A Microsoft Certified Systems Engineer is to computing what a McDonalds Certified Food Specialist is to fine cuisine

Juvenile, some might say, but they still make me giggle.


I had to reinstall win98 so many times I still remember the pirate key k4hvdq9tj96crx9c9g68rq2d3 by heart

good times :)


I guess I was more of the FCKGW generation. :)

IIRC with Windows 98 you could just use any product key on as many machines as you wanted, since there was no activation or real phoning-home capability. So most likely your whole friend group would be using the same serial, copied off your uncle's old Gateway.

Ah, FuCK Gates, William.

I think there were at least three other commonly used codes, but this one was by far the most popular.


I'm pretty sure 000-0000000 worked (at least on windows 95)

FuCKinG Windows

Outbreak Express!

It was "Outhouse Express" and "GruntPage" for me in the late 90s. I still use these for software I find particularly irksome, for example Conscrewence from AtlASSian.

It was always "Microshit" to me

I always like Wangblows

Internet exploder

Internet Exploiter

In Polish we used to say "Winzgroza" (win terror)

In Italy it was WinZozz (zozzo = dirty)

But then you're moving data that used to be in RAM onto storage, in order to keep copies of stored data in RAM. Without any advance knowledge of access patterns, it doesn't seem like it buys you anything.

Every time I've run out of physical memory on Linux I've had to just reboot the machine, being unable to issue any kind of command through input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.

The situation mentioned isn't running out of memory, but using memory more efficiently.

Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not at all clear which memory to free up (by killing processes).

If you are lucky, there's one giant process with tens of GB of resident memory to kill to put your system back into a usable state, but that's not the only case.


Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.

What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?


In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)
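(As an aside: the knobs alluded to here do exist. A sketch of the usual ones follows; the values are illustrative, not recommendations, and `earlyoom` is one of several userspace OOM daemons, whose package name varies by distro.)

```shell
# How aggressively the kernel swaps anonymous pages to keep page cache.
# 0 = avoid swap as much as possible, 100 = swap readily (default is 60).
sysctl vm.swappiness=10

# Refuse allocations that exceed RAM + swap instead of overcommitting,
# so processes see allocation failures before the system starts thrashing.
sysctl vm.overcommit_memory=2

# A userspace early-OOM daemon kills the largest process before the
# kernel OOM killer lets the machine stall.
systemctl enable --now earlyoom
```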

Try doing cargo build on a large Rust codebase with a matching number of CPU cores and GBs of RAM.

I believe that it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just not? (It's not that this will work without swap after all, just OOM-kill without thrashing-pain.)

I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.

No argument there. I also always had the impression that Linux fails less gracefully than other systems.

Mmh... What do you mean by percentage? Over the amount transacted per day, or over the total supply?

You're proposing that every porn site on the planet pings a user's government's API to see if they're adult or not? In other words, that any random site is able to contact hundreds of APIs.

Absolutely, yes. They don’t ping to see that you are of age, but that the random challenge generated by your ID checks out.

It doesn't sound simple. Now there needs to be some kind of pipeline that can route a new kind of information from the OS (perhaps from a physical device) to the process, through the network, to the remote process. Every part of the system needs to be updated in order to support this new functionality.

It's not simple, but it's also not new. mTLS has allowed for mutual authentication on the web for years. If a central authority was signing keys for adults, none of the protocols we currently use would need to change (although servers would need to be configured to check signatures).
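(The challenge-response idea being discussed can be sketched end to end. This is a toy, not any real ID scheme: it uses textbook RSA with tiny primes, stdlib only, purely to illustrate the flow of "site sends random challenge, ID signs it, site verifies the signature against a public key a CA would have vouched for".)

```python
import hashlib
import secrets

# --- Toy RSA key pair standing in for a government-issued ID key ---
# Tiny textbook primes for illustration only; real IDs would use real crypto.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign(challenge: bytes) -> int:
    """The ID card signs the site's random challenge with its private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """The site checks the signature against the (CA-vouched) public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# The site learns only that the challenge checks out, not who signed it.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```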

And is it easier to implement ID checks for each online account that people have, have had, and will ever have in the future?

Parents need to start parenting by taking responsibility for what their kids are doing, and governments should start governing with regulations on ad tech and addictive social media platforms, instead of using easily hackable platforms for de-anonymization, which in turn enables mass identity theft.


> And is it easier to implement ID checks for each online account that people have, have had, and will ever have in the future?

No, I think both ideas are bad.


Well, how is "Windows"?

At least it's figurative

So is this, isn't it? This is packaging material made from mycelium, not from literal mushrooms.

>it is a very good example of why religious authority should be in the same hands as secular power

Did you forget a "not"?


>for applications like AI, even using system RAM is often considered too slow, simply because of the distance to the GPU

That's not why. It's because RAM has a narrower bus than VRAM. If it was a matter of distance it'd just have greater latency, but that would still give you tons of bandwidth to play with.


You could be charitable and say the bus is narrow because it has to travel a long distance and this makes it hard to have a lot of traces.

It's not. It's narrow even between the CPU and RAM. That's just the way x86 is designed. Nvidia and AMD by contrast have the luxury of being able to rearchitect their single-board computers each generation as long as they honor the PCIe interface.

It is also true that having a 384-bit memory bus shared with the video card would necessitate a redesigned PCIe slot as well as an outrageous number of traces on the motherboard, though.


Traditionally, the width of the GPU memory interfaces was many times greater than that of CPUs.

However, the maximum width in consumer GPUs, up to 1024-bit, was reached many years ago.

Since then the width of the memory interfaces in consumer GPUs has been decreasing continuously, and this decrease has been only partially compensated by higher memory clock frequencies. This reduction has been driven by NVIDIA, in order to increase their profit margins by reducing the memory cost.

Nowadays, most GPU owners must be content with a memory interface no better than 192-bit, like in RTX 5070, which is only 50% wider than for a desktop CPU and much narrower than for a workstation or server CPU.

The reason why using the main memory from GPUs is slow has nothing to do with the width of the CPU memory interface; it is caused by the fact that the GPU accesses main memory through PCIe, so it is limited by the throughput of at most 16 PCIe lanes, which is much lower than that of either the GPU memory interface or the CPU memory interface.


ThreadRipper has 8 memory channels versus 2 for a desktop AMD CPU. It's not an x86 limitation.

"x86" as in the computer architecture, not the ISA. Why do you think they put extra channels instead of just having a single 512-bit bus?

The memory interface of CPUs is made wider by adding more channels because there are no memory modules with a 512-bit interface. Thus you must add multiples of the module width to the CPU memory interface.

This has nothing to do with x86, but it is determined by the JEDEC standards for DRAM packages and DRAM modules. The ARM server CPUs use the same number of memory channels, because they must use the same memory modules.

A standard DDR5 memory module has a memory interface width of 64-bit, 72-bit or 80-bit, depending on how many extra bits are available for ECC. The interface of a module is partitioned into 2 channels, to allow concurrent accesses at different memory addresses. Despite the fact that current memory channels have a width of 32-bit/36-bit/40-bit, few people are aware of this, so by "memory channel" most people mean 64 bits (72 bits for ECC), because that was the channel width in older memory generations.

Not counting ECC bits, most desktop and laptop CPUs have a 128-bit memory interface, some cheaper server and workstation CPUs have a 256-bit memory interface, many server CPUs and some workstation CPUs have a 512-bit memory interface, while the state-of-the-art server CPUs have a 768-bit memory interface.

For comparison, the RTX 5070 has a 192-bit memory interface, the RTX 5080 a 256-bit one, and the RTX 5090 a 512-bit one. However, GDDR7 memory has a transfer rate 4 to 5 times higher than DDR5, which makes the GPU interfaces faster despite their similar or even lower widths.
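(Putting those figures side by side, assuming DDR5-6400 for the desktop CPU and 28 GT/s GDDR7; exact speeds vary by SKU.)

```python
# Bandwidth = transfer rate (GT/s) x bus width (bytes).
ddr5_128bit = 6.4 * 16   # desktop CPU, 128-bit DDR5-6400 -> ~102 GB/s
rtx5070     = 28 * 24    # 192-bit GDDR7                  -> 672 GB/s
rtx5090     = 28 * 64    # 512-bit GDDR7                  -> 1792 GB/s

# The per-pin rate ratio behind the "4 to 5 times" figure.
rate_ratio = 28 / 6.4    # 4.375

print(f"128-bit DDR5-6400: {ddr5_128bit:6.1f} GB/s")
print(f"192-bit GDDR7:     {rtx5070:6.1f} GB/s")
print(f"512-bit GDDR7:     {rtx5090:6.1f} GB/s")
```

So even the 192-bit card, only 50% wider than a desktop CPU's memory interface, ends up with several times the bandwidth purely from the higher transfer rate.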


That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.
