
> Surely you wouldn't expect -1 == 0 to evaluate to true.

I wouldn't, no - but that's exactly what's happening in the test case.

Likewise, I wouldn't expect -1 == 1 to evaluate to true, but here we are.

The strict semantics of the new bool type may very well be "correct", and the reversed-test logic used by the compiler is certainly understandable and defensible - but given the long-established practice with integer types - i.e "if(some_var) {...}" and "if(!some_var) {...}" - that non-zero is "true" and zero is "false", it's a shame that the new type is inconsistent with that.
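For reference, Python kept the same C-style integer-truthiness convention, so it's easy to check interactively (this is just the general rule the comment describes, not the behaviour of the compiler's new bool type under discussion):

```python
# The long-established integer-truthiness convention:
# zero is false, any non-zero value (including -1) is true.
for v in (-1, 0, 1, 42):
    print(v, "->", bool(v))
```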


Indeed. I think what's really needed is some way to mark pages as "required for interactivity" so that nothing related to the user interface gets paged out, ever. That, I think, would go at least some way towards restoring the feeling of "having a computer's full attention" that we had thirty years ago.

There is: mlock() or mlockall(). But it requires developer support; I wish there were an administrator knob that let me mark whole processes without needing to modify them.

There is cgroup memory.min

Seems the applications can call mlockall() to do this
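As a rough sketch of what that call looks like from an application's point of view, here it is via ctypes (Linux-specific; the MCL_* values below are the x86 Linux constants, and the call typically fails unless the process has CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK):

```python
import ctypes
import ctypes.util

# mlockall() flags, values from <sys/mman.h> on x86/x86-64 Linux
MCL_CURRENT = 1  # lock all pages currently mapped into the process
MCL_FUTURE = 2   # also lock pages mapped in the future

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

if libc.mlockall(MCL_CURRENT | MCL_FUTURE) == 0:
    print("memory locked: this process can no longer be paged out")
else:
    # Typically EPERM or ENOMEM without CAP_IPC_LOCK / raised RLIMIT_MEMLOCK
    print("mlockall failed, errno", ctypes.get_errno())
```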

An Electron app would mark its entire 2GB as required for interactivity. If you run 4 electron apps on an 8GB system you run out of memory.

I don't mean interactivity within apps, per se - I mean the desktop and underlying OS, so that if an electron app goes unresponsive and eats all the free RAM the window manager can still kill it. Or you can still open a new terminal window, log in and kill it. Right now it can take several minutes to get a Linux system back under control once a swapstorm starts.

Linux doesn't really distinguish, within userspace, between the desktop and underlying OS components and everything else. Linux is quite userland-agnostic, and distros have traditionally mixed user software with distro-managed software. You shouldn't use `sudo` to install software by default; your package manager should allow installing software for just your user. Software installed for the system could then be the only software allowed to mark itself as required for interactivity. You could do that manually to other software if you had root access, but "normal" user software installed with the package manager couldn't do so since it wouldn't get root access.

That'd require some new capabilities added, and some substantial shifts in how distro maintainers & users operate, so it's extremely unlikely. It's much closer to how things like Android operate, though still not quite as secure as giving each application its own user & dedicated storage for data.


Alt+[SysRq,f]

Or Alt+[SysRq,h] for help


No effect, captain.

In 30 years of using desktop Linux I've never been able to interrupt a swapstorm. The only way out is long-press the power button.


It always works for me, including on SBCs over TTL serial port. Always. Never had a situation where invoking the OOM killer by sysrq didn't solve the swap storm.

On your system it probably doesn't work at all, not even when idle. There can only be three reasons:

  - Your kernel doesn't have it. You probably have a generic kernel provided by the package manager without this feature enabled. I can't really help you here. I always build my own kernel from source (+ patches).

  - Something hijacks your keyboard input. If you have a console already opened and logged in as root, you can "echo f > /proc/sysrq-trigger". Else, you can try setting up a permanent serial console and send the command from another computer. CTRL+2 then the command letter (f). The magic sysrq key over serial console is a separate kernel option that needs to be enabled.

  - You're doing it wrong. On laptop keyboards, and keyboards with fewer than 104/105 keys, SysRq is one of the first keys to be removed. Getting it pressed with Fn combinations... Good luck with that!

> In my experience error diffusion often gets muddy due to dot gain

Absolutely - there's a reason why traditional litho printing uses a clustered dot screen (dots at a constant pitch with varying size).

I've spent some time tinkering with FPGAs and been interested by the parallels between two-dimensional halftoning of graphics and the various approaches to doing audio output with a 1-bit IO pin: pulse width modulation (largely analogous to the traditional printer's dot screen) seems to cope better with imperfections in filters and asymmetries in output drivers than pulse density modulation (analogous to error diffusion dithers).
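To make the parallel concrete, here's an illustrative Python sketch (not FPGA code) of the two 1-bit schemes: PWM groups the "on" time into contiguous runs at a fixed period, much like a clustered dot screen, while first-order sigma-delta PDM scatters single bits so their running density tracks the signal, much like error diffusion.

```python
def pdm_1bit(samples):
    """First-order sigma-delta (pulse-density) modulator.
    samples: floats in [0, 1]; returns 0/1 bits whose density tracks the input."""
    err = 0.0
    bits = []
    for s in samples:
        v = s + err              # add the carried quantisation error
        bit = 1 if v >= 0.5 else 0
        bits.append(bit)
        err = v - bit            # diffuse the residual into the next sample
    return bits

def pwm_1bit(samples, period=8):
    """Simple PWM: each input sample becomes `period` output bits,
    high for round(s * period) of them, in one contiguous run."""
    bits = []
    for s in samples:
        high = round(s * period)
        bits.extend([1] * high + [0] * (period - high))
    return bits
```

For a constant input of 0.25, the PDM output settles into the repeating pattern 0,1,0,0 - isolated pulses whose placement carries the information - which is exactly why it is more sensitive to asymmetric rise/fall times in the output driver than PWM's longer, fixed-pitch runs.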


Traditional litho actually uses either lines in curved crosshatch patterns or irregular stippling. Might be doable using an altered error-diffusion approach that rewards tracing a clearly defined line as opposed to placing individual dots or blots.


> and ignores the window manager decorations.

That's because Gtk4 does "client side decoration". That has the advantage (or otherwise, depending on your point of view!) that the application can now place custom widgets in the title bar, and the disadvantage that when apps do that, the part of the title bar available for dragging windows around becomes significantly smaller.

My main objection to client-side decoration is that middle-clicking a window's title bar to push it to the back no longer works. (Plus, for those of us with eyes that aren't as young as they once were, it's now much harder to choose a window border style that clearly indicates which window has focus.)


My biggest problem (of many) with client side decorations is that now when your program crashes, you can't just hit the close button to have the window manager kill it, because the process responsible for drawing and responding to the close button has crashed.

The trick is to avoid software using the newer gtk versions.


A hugely entertaining blog post, despite subject matter that could easily result in very dry reading.


You might find an internet search for "Array mbira" entertaining.


For me the liberty question you raised there isn't so much about whether the business has become large, as whether it's become "infrastructure". Being denied service by a cake shop may very well be distressing and hurtful, but being suddenly denied service by your bank, your mobile phone provider, or even (especially?) by gmail can turn your entire life upside down.


Yes I’d tend to agree with you there. But being able to define that tipping point where something becomes “infrastructure” even if it’s still privately owned and isn’t a monopoly, is a difficult problem to solve.


If they were sending just one per month I might actually read them occasionally. It's the three a day from the likes of aliexpress that get deleted without a second glance.

But yes, you're absolutely right - "no raindrop considers itself responsible for the flood".


That marketing team only sends 1 email a month, but the 25 other marketing teams at the same company also only send 1 email a month.


Perhaps because the level of respect that Windows has for its users has dropped with each successive version?

Not to mention bloat: I have a keyboard with a dedicated calculator button. On a machine with Core i5 something or other and SSD it takes about 2 seconds for the calculator to appear the first time I push that button. On the Core 2 Duo machine that preceded it, running XP from spinning rust, the calculator would appear instantly - certainly before I can release the button.

But also WinXP was the OS a lot of people used during their formative years - don't underestimate the power of nostalgia.

Also, for some people the very fact that Microsoft don't want you to would be reason enough!

Personally if I were into preserving old Windows versions I'd be putting my effort into Win2k SP4, since it's the last version that doesn't need activating. (I did have to activate a Vista install recently - just a VM used to keep alive some legacy software whose own activation servers are but a distant memory. It's still possible, but you can't do it over the phone any more, and I couldn't find any way to do it without registering a Microsoft account.)


Win2003 Enterprise does NOT need activation either. It runs smoothly offline.


There are tools out there (like UMSKT) that can activate MS software from that era fully offline too. They cracked the cryptography used by the activation system and reimplemented the tool used for phone activation, so you can “activate by phone” using UMSKT instead of calling MS.


But you do NOT need to... Read about Volume Licensing. You just enter the KEY and voilà...




Your comment reminds me of that rule from baseball that says something about batters and hats, or maybe it was about helmets or something, it doesn't really matter though because the only point of this sports ball rambling is to distract you from noticing that my "nuh uh" has no substance. Did it work?


This is more than a bit out of place on HN in my experience; please try to engage politely.

I’m not sure what I can say that will qualify as more than “nuh uh” to you, shy of getting a Core 2 Duo running with XP and the same keyboard as OP. That isn’t possible at the moment; is there anything else I could do?


I admit you got me mildly annoyed with the sports nonsense, sorry about that.

Anyway, you're talking about reaction time, which isn't actually relevant. The time between an action (pressing a button, or flipping a switch) and seeing the result isn't the same as the time it takes you to react to that result. Flip a light switch: does the light turn off instantly, or does it take a full third of a second? I guarantee you can tell the difference. 300ms of latency is actually huge and easily perceptible, even if it's faster than you can react.


300ms is a lot of time, especially if the calculator.exe was in disk cache already.


300 ms is a long time on a computer, definitely. It's just that the autistic side of me has to speak up at wildly unrealistic glorification of the past.

Keypress duration is likely much less than 300 ms, top Google result claims 77 ms on average. And that’s down and up.

I see it being in cache already as sort of game playing, i.e. we can say anything is instant if we throw a cache in front of it. Am I missing something about caching that makes it reasonable? (I’m 37, so only 18 around that time and wouldn’t have had the technical chops to understand it was normal for things to be in disk cache after a cold boot)


Okay, let's say the cache is cold and you're on an old clunky spinning rust 5400 RPM hard drive. Do the math. How long will it take, worst case, for the platter to spin to where calc.exe is stored?


For a 5400 RPM drive, worst-case rotational latency is one full rotation: 5400/60 = 90 rev/sec, so ~11ms. Average is half that (~5.5ms). If you also need to seek (yes, we'll definitely need to move on both axes in the worst case scenario requested, likely all the time), 2006-era datasheets show average seek around 11-12ms, with full-stroke seeks around 21-22ms. So worst case total access: ~33ms.

Seagate Momentus 5400.3 manual (2005): https://www.seagate.com/support/disc/manuals/ata/100398876a....

Hitachi Travelstar 5K120 (2006): http://www.ggsdata.se/PC/Bilder/hd/5K120.pdf

WD Scorpio (October 2007): https://theretroweb.com/storage/documentation/2879-001121-a1...
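The arithmetic above, as a quick Python check (the seek figures are the approximate datasheet values quoted, not measurements):

```python
rpm = 5400
full_rotation_ms = 60_000 / rpm           # one revolution: ~11.1 ms
avg_rotational_ms = full_rotation_ms / 2  # average rotational latency: ~5.6 ms

# Approximate figures from the mid-2000s laptop drive datasheets linked above
avg_seek_ms = 11.5
full_stroke_seek_ms = 21.5

# Worst case: a full-stroke seek plus a full rotation to reach the sector
worst_case_ms = full_rotation_ms + full_stroke_seek_ms
print(f"rotation: {full_rotation_ms:.1f} ms, worst case: {worst_case_ms:.1f} ms")
```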


If you had used calculator earlier that uptime, it wouldn't be crazy. It's a small exe.


Why is it impossible?


Tl;dr reaction time: 300 ms is the golden rule for reaction speed, and apparently there was actually a sports medicine study that came to that number. I was surprised to see that; 300 ms comes up a lot in UX as the “threshold of perceptible delay”, but it was still surprising to see.


I'm not sure why human reaction time is relevant here, since what I'm talking about isn't the time it takes me to respond to a stimulus but the time it takes the computer to respond to a stimulus.

I do still have both computers set up side-by-side (legacy data from an old business), and the keyboard in question was a Microsoft Comfort Curve 2000 (the calculator button wasn't a proper key, it was one of those squidgy extra keys so beloved of multimedia keyboards, so not as fast to operate as a proper key.)

Anyhow, the point (arguably hyperbolic as it may have been) wasn't about reaction time per se, it was about the older calculator app - and by extension much of the rest of the OS - being a much simpler and less bloated piece of software, and running it on faster-than-contemporaneous hardware makes for a sense of immediacy which is sorely lacking in today's world of web apps.

I'd be very interested to know to what that 300ms "threshold of perceptible delay" applies. You might not notice a window taking 300ms to open - but I'd be willing to bet that when you're highlighting text with the mouse or dragging a slider, you'd be very aware of the UI lagging by nearly 1/3 of a second.


This is a lot of words that say "yeah, I was hyperbolic, but it was directionally correct." I do appreciate the candor but it's a bit late, as you see by the text color of my comments. Many people do the same thing as you, no worries, I appreciate you validating my quixotic self-destructive work.


I'm sorry you're being downvoted - for the record I've upvoted since it's interesting, even if we disagree in some aspects.

Since I still have the machine in question here, and I'm now interested enough to try and get some rough measurements, I've just videoed it with my phone (30fps video) and done some frame counting, both from a cold boot with nothing cached, and also a repeated launch.

Firstly from a cold boot:

It's hard to tell exactly when the keypress registers, but I believe what I'm seeing is the key being pressed, two frames later the hourglass appears, two frames after that the calculator appears. (The TFT screen will likely be adding at least one frame lag, but let's ignore that for now.) So that's somewhere between 166 and 200ms for a cold launch.

If I close the app and repeat, there's now just one frame between keypress and hourglass, and just one more frame between hourglass and the app appearing, so now nearer 100ms.

Looking at the videos my finger is off the key the first time the app appears, but not the second time - though if I made a special effort to release the key as quickly as possible I now think I could probably just about beat it.
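For anyone repeating the experiment: at 30 fps each frame spans ~33 ms, and each event is only localised to within one frame period, so a frame count really gives bounds rather than a single number. A small helper (my own convention for the error bars, not part of the measurement above):

```python
FPS = 30
FRAME_MS = 1000 / FPS  # ~33.3 ms per frame at 30 fps

def latency_bounds_ms(frames_between):
    """Rough bounds on the real interval when two events are observed
    `frames_between` frames apart on video: allow one frame period of
    sampling uncertainty at each end."""
    lo = max(0.0, (frames_between - 1) * FRAME_MS)
    hi = (frames_between + 1) * FRAME_MS
    return lo, hi
```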


I was curious, so did a quick web search, which claims that 300ms is the average reaction time and plenty of people run faster than that.

But I think the question was the other way: Why couldn't calc.exe launch in 300ms?


300 ms is way longer than they budgeted; separately, I was alive then and it's a ridiculous claim: it takes the general bias we all have towards seeing the past through rose-colored glasses and stretches it farcically far.

Don't want to clutter too much, I'm already eating downvotes, so I'll link:

https://news.ycombinator.com/item?id=46642003


I have Windows 95 on a Pentium 120 MHz and calc.exe is instantaneous enough that it's probably much less than 300ms to launch.

XP's calculator is hardly any different than 95. It's easy to believe that launching it on a Core 2 Duo of all things is also instant.


You’re both kind of right.

On the average consumer hardware at launch, 95 and XP were slow, memory-hungry bloat. In fact everything that people say about Windows 11 now was even more true of Windows back then.

By the end of the life of Windows 95 and XP, hardware had overtaken them and Windows felt snappier.

There was a reason I stuck with Windows 2000 for years after the release of XP, and it wasn’t because I was too cheap to buy XP.


The Doherty threshold is 400 ms. That’s the threshold at which you start impacting users’ focus and flow.

Back in the day, we actually used to aim for that as a user experience metric.


yeah no. Ask musicians using computers - 50 milliseconds of latency between sound and movement is generally considered unplayable, 20 milliseconds is tough, below 10ms usually is where people start being unable to tell.


You’ve fallen into the common trap of conflating reaction time with observable alignment time.

Reactions are about responding to one off events.

Whereas what you’re describing is about perception of events aligned to a regular interval.

For example, I wouldn’t react to a game of whack-a-mole at 50ms, nor that quickly to a hazard while driving either. But I absolutely can tell you if a synth isn’t quantised correctly by as little as 50ms.

That’s because the latter isn’t a reaction. It’s a similar but different perception.


Pressing a key to trigger an action that you will then send additional input to is an entirely different sequence of events than whack-a-mole, where you are definitionally not triggering the events you need to respond to.


I'm not talking about latency (though I don't fully agree with your statement, I've covered that elsewhere). I'm talking about the GP's comparison of reactions vs musicians listening to unquantised pieces.

You simply cannot use musicians as proof that people have these superhuman reaction times.


But here we're talking about not being able to notice whether calc.exe opens in less than 300 milliseconds, not how fast we can react to it opening. It's the same thing with audio latency (and extremely infuriating when you're used to fast software where you can just start typing directly after opening it, without having to insert a pause to cater to slowness).


No, it's not the same thing as music latency. For one thing, music is an audio event whereas UI is a visual event. We know that visual and audio stimuli operate differently.

And for the music latency, you can hear where the latency happens in relation to the rest of the piece (be it rock, techno, or whatever style of music). You have a point of reference. This makes latency less of a reaction event and more of a placement event, i.e. you're not just reacting to the latency, you're noticing the offset relative to the rest of the music. And that adds significant context to perception.

This also ignores the point that musicians have to train themselves to hear this offset. It's like any advanced skill, from a golf swing to writing code: it takes practice to get good at it.

So it's not the same. I can understand why people think it might be. But when you actually investigate this properly, you can see why DJs and musicians appear to have supernatural senses vs regular reaction times. It's because they're not actually all that equivalent.


This, 100%.

I've seen the same scenario - someone with limited vision, next to no feeling in his fingertips and an inability to build a mental model of the menu system on the TV (or actually the digi-box, since this was immediately after the digital TV switchover).

Losing the simplicity of channel-up / down buttons was quite simply the end of his unsupervised access to television.


Channel up/down doesn't scale to the amount of content available now. It was OK when there were maybe half a dozen broadcast stations you could choose from.


This is ahistorical. If you had cable, you had 100+ channels, and there was no difficulty in numbering them and navigating them through the channel up/down buttons. There weren't even only half a dozen broadcast stations in any city in the US at least since the 50s - you at least had ABC, NBC, CBS and PBS in VHF, and any number of local and small stations in UHF.

The thing that didn't scale was the new (weird, not sure why) latency in tuning in a channel after the DTV transition, and invasive OS smart features after that. Before these, you could check what was on 50 channels within 10 seconds; basically as fast as you could tap the + or - button and recognize whether something was worth watching; changing channels was mainly bound by the speed of human cognition. I think young people must be astounded when they watch movies or old TV shows where people flip through the channels at that speed habitually.


> new (weird, not sure why) latency in tuning in a channel after the DTV transition,

Because with analog signals the tuner just had to tune to the correct frequency and at the next vertical blank sync pulse on the video signal the display could begin drawing the picture.

With digital, the tuner has to tune to the correct frequency, then the digital decoder has to sync with the transport stream (fairly quick as TS packets are fairly small) then it has to start watching for a key frame (because without a keyframe the decoded images would appear to be static) and depending upon the compression settings from the transmitter, keyframes might only be transmitted every few seconds, so there's a multi-second wait for the next keyframe to arrive, then the display can start drawing the pictures.
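A back-of-the-envelope sketch of why that adds up to seconds (the timing constants here are illustrative assumptions, not broadcast-standard values):

```python
def channel_change_s(gop_s, tuner_lock_s=0.1, ts_sync_s=0.05, decode_buffer_s=0.2):
    """Rough digital-TV channel-change latency model.

    Worst case: you tune in just after a keyframe went by, so you wait
    almost a full keyframe interval (GOP) for the next one.
    Best case: a keyframe arrives almost immediately after TS sync.
    """
    best = tuner_lock_s + ts_sync_s + decode_buffer_s
    worst = tuner_lock_s + ts_sync_s + gop_s + decode_buffer_s
    return best, worst

# With a 2-second keyframe interval (an assumed, plausible encoder setting):
best, worst = channel_change_s(gop_s=2.0)
print(f"best ~{best:.2f}s, worst ~{worst:.2f}s")
```

The keyframe wait dominates everything else, which is why analog tuning (bounded by one vertical-blank interval) felt instant by comparison.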


I still watch OTA DTV. Tuning is instant. Maybe it's slower if you are on cable and there's a few round-trip handshakes to authenticate your subscriber account.

I'm pretty sure there's a lot of round-tripping going on with the streaming services I use through my dongle. They're always slow to both start the app and to start any actual streaming.


That's only if you want to watch specific things; some people just turn it on for entertainment, and change channels to have a spin at the roulette wheel for something better.

