koala_man's comments | Hacker News

I will try to remember him for his far too relatable Dilbert comics about corporate office life, and not his, uhm, later work.

One of my favorite strips was Dilbert in the 90s only being given a 286 PC for 3D rendering work, with the boss saying "besides, how often will you do 3D rendering in your career?"

Dilbert replies "Once, if I hurry"


I once saw a high resolution CPU graph of a video playing in Safari. It was completely dead except for a blip every 1/30th of a second.

Incredible discipline. The Chrome graph in comparison was a mess.


The Safari team explicitly treats performance as a target. I just wish they weren't so bad about extensions and ad blocking; then I'd use it as my daily driver. But those paper cuts make me go back to Chromium browsers all the time.


I find Orion has similar power efficiency but avoids those papercuts: https://kagi.com/orion/


> The intention was to prevent tooth decay by regulating candy intake.

Yes, but crucially not by reducing candy intake.

The result of the study was that the amount of sugar didn't particularly matter, but frequency of intake and stickiness did.


I'm Norwegian and concluded that she meant when ordering, such as for a choice of potatoes or pasta.

This makes sense since the context is translation for tourism.

Otherwise, the normal, casual way would be "kan du sende potetene?" i.e. "could you pass the potatoes?", lit. "can you send the potatoes?"

(This assumes it wasn't physically possible to simply reach across people to grab it yourself with what's known as "the Norwegian arm")


> Meta went from 2K to 10K+ from 2018 to 2025

Facebook rebranded to Meta in October 2021


Good call. O(N^2) is the worst time complexity because it's fast enough to be instantaneous in all your testing, but slow enough to explode in prod.

I've seen it several times before, and it's exactly what happened here.
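
A toy sketch of what this looks like (my example, not the actual incident): membership tests against a Python list are O(N) each, so the dedup loop below is quadratic overall. It's instantaneous at test-suite sizes and crawls at production sizes.

    import time

    def dedup(items):
        seen = []  # list membership test is a linear scan
        out = []
        for x in items:
            if x not in seen:  # O(N) check inside an O(N) loop -> O(N^2)
                seen.append(x)
                out.append(x)
        return out

    for n in (1_000, 20_000):  # "test" size vs "prod" size
        data = list(range(n))
        start = time.perf_counter()
        dedup(data)
        print(n, time.perf_counter() - start)  # 20x the input, ~400x the time

Swapping the list for a set makes it linear, which is exactly why it's so easy to miss in review: the fast and slow versions look nearly identical.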


We just had this exact problem. Tests ran great, production slowed to a crawl.


I was just helping out with the network at an event. It worked great in testing, but failed in production due to unicast flooding of the network core. It turned out that some of the PoE Ethernet switches had an insufficiently sized CAM for the deployment, combined with STP topology changes reducing the effective size of the CAM by a factor of 10 on the larger switches. Gotta love when packet forwarding goes from O(1) to O(n) and O(n^2)! Debugging that in production is non-trivial: the needle is in such a large haystack of packets that it's nearly impossible to find in the output of tcpdump and Wireshark. The horror... The horror...
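
For intuition, a heavily simplified toy model (mine, not the actual switches) of a CAM table: forwarding is O(1) while the table has room, but once it overflows, unknown-unicast frames get flooded out every port.

    CAM_CAPACITY = 4  # deliberately tiny; real switches hold thousands of MACs

    cam = {}  # MAC address -> port

    def learn(src_mac, port):
        # Source learning only happens while there's room (or the MAC is known).
        if src_mac in cam or len(cam) < CAM_CAPACITY:
            cam[src_mac] = port

    def forward(dst_mac, all_ports):
        if dst_mac in cam:
            return [cam[dst_mac]]  # known unicast: O(1), one egress port
        return list(all_ports)    # unknown unicast: flood, O(n) ports

    learn("aa:aa:aa:aa:aa:01", 1)
    print(forward("aa:aa:aa:aa:aa:01", range(8)))  # [1]
    print(forward("bb:bb:bb:bb:bb:02", range(8)))  # flooded out all 8 ports

In this model an STP topology change is cam.clear(): every destination becomes unknown unicast until the table is relearned.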


On the first big project I worked on, a couple of us sped up the DB initialization scripts so we could use a less trivial set of test data and stop this sort of shenanigans.

Things like inserting the test data first and turning on constraints and possibly indexes afterward.
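
A minimal sketch of that pattern with sqlite3 (table and data made up): load the rows first, then build the index once, rather than paying for index maintenance on every single insert.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

    # Bulk-insert the test data with no indexes in the way.
    rows = [(i, f"user{i}@example.com") for i in range(100_000)]
    with conn:
        conn.executemany("INSERT INTO users VALUES (?, ?)", rows)

    # The index goes on after the data is loaded, so it's built in one
    # pass instead of being updated 100,000 times.
    conn.execute("CREATE INDEX idx_users_email ON users(email)")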


This question is logically sensible but considered emotionally abhorrent. If you haven't been tested for autism you should consider taking a quiz.


Please don't comment like this, no matter how bad the comment you're replying to. The guidelines apply to everyone equally:

https://news.ycombinator.com/newsguidelines.html


My bad. I had been reading about how there's a frustration in the community that allistic people refuse to explain why such statements are wrong, and instead just repeat "you know what you did!" to people who genuinely don't.

I tried to be the one who didn't do that, but missed the mark. I'd delete it if I could.


Only 1% of the population has autism. Presenting autism as a considerable possibility for trollish behavior isn't much different than what the parent commenter did.


"questions I don't want to think about are trollish"


now, confirmed.


What was the question? The rampant flagging here is quite annoying.


My original comment said:

> But your child will die and that's a fact. Is it only ok for it to die after you?


Is there more context to this question? I couldn't read the article because of the paywall. But in isolation, this is a dumb question. All decent parents want their child to live as long as possible and be as healthy as possible. Is there something deeper you were trying to get at?


[flagged]


> that the child will die at some point

So what? So a father shouldn't celebrate medical advances that mean their kid doesn't have to die after a week? And if it does, they should just be like "Ah, that's life!"


I never said any of this


I didn't say you did. I was trying to understand your point, and so was inferring what you could possibly have meant with your original comment.


Oh, sorry. I definitely think a father can (should?) celebrate medical advancements like this, and definitely shouldn't brush off death with an "Ah, that's life". My point is that people often worry about their children's death only when it would happen while they themselves are still alive. Death seems okay to them if it comes when they won't be around to see it.


Death of someone whose potential was largely realized is a very different thing than the death of someone who never got a chance at the same.

I would be deeply unhappy to learn that my children won't live to old age.

Also witnessing the death of a loved one is obviously traumatic. People grieve their parents dying of old age.


[flagged]


[flagged]


You are trying to frame this as pure “logic”, but if you had read a single book on ethics or even philosophy you would see that’s not the case. You are basically asking “but why is good better than bad?”, acting as if you are being logical while failing basic premises of logic and ethics. Any ethical framework is going to have axioms, and typically these axioms are things that are inarguable for any person: it’s better to live than to die, it’s better to reduce suffering, etc. Using basically any ethics system and pure logic, you will quickly reach the conclusion that a baby living is better than one dying.

This really has nothing to do with the inevitability of death. Death is inevitable; however, there is a difference between a child dying and an elderly person dying. A child has potential; they have not lived their life. A child has not actually had the full basic human experience: they haven’t had a crush, or fallen in love, or married, or had children, or had any great successes or failures or close friends. These are things everyone does. An older person has; they are not a pure soul who hasn’t experienced life. After 70 years you can be sad for the individual passing but happy that they have experienced life. This is why, when a parent has a child, they aren’t sad that their child will die in 80 years, but are devastated if they die at a week old. The child never even had a chance. Having a child is an emotional and fulfilling experience, and to have that torn away so early is damaging.

From an empathy and emotional point of view, these things are extremely basic and foundational aspects of being human; a 10-year-old from any culture on earth can understand them with no difficulty. And any person with even a passing familiarity with logic, ethics, or philosophy will dismiss you as not being earnest. Which is why people are assuming you are a troll.


The medical profession allocates scarce resources based on the number of quality-adjusted life years (QALYs) a treatment will bring.

Humans see value in living life, so cutting a life short is worse than a life that would be ending soon anyway.
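
Roughly, with illustrative numbers (mine, not from any real allocation table):

    def qalys(years_gained, quality_weight):
        # quality_weight runs 0..1, where 1.0 is a year in full health
        return years_gained * quality_weight

    print(qalys(75, 0.9))  # infant cured of a fatal condition: ~67.5 QALYs
    print(qalys(2, 0.6))   # late-life treatment: ~1.2 QALYs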


Yes. Yes it is.


A parent's obligation is to try and do everything they can to make their child's life good. I think most people would agree that living more than a week is a good thing.


The article starts off saying that if you want people with real full stack experience, from kernel to UX, you need to grow it.

It goes on to say that it's hard to find and develop expertise for low-level software like hypervisors.

What's the connection between the topics? It feels like two different rants.

If it's difficult to find kernel developers then wouldn't it help to not require them to also know web UX?


> If it's difficult to find kernel developers then wouldn't it help to not require them to also know web UX?

That means hiring two people, and in $current_year, companies expect one person to know everything. Sysadmin, backend programmer, frontend programmer, designer, and DBA used to be different people not that long ago; now they expect one person to do all that... + it seems they want kernel development experience now.


Before they were multiple people, they were one person.

A single person can in fact write a program for a computer.


Sure, some C code, some HTML, a table here, a colspan there, and you can have a website made by a single person... if we want a website that looks like it was made on a 1980s computer by a single person.


If you're not that person, it's fine. Some people still just use Notepad and write HTML like it's 1999. Others have both kernel experience and have picked up React at some point in the past ten years. Plus, LLMs write CSS these days, so no colspan needed.


Is an LLM gonna write your kernel too?


At the rate LLMs are improving, that certainly seems like a possibility, but until they do, why would you need one for that? Kernel C makes sense. CSS is the problem here.


It transplants to other eras. A webmaster managed the server, the code, and the graphics, and it shows: the sites look like they're from that era.

People who came after would write it in VB6; people who came later still would use bootstrap.js or Material icons.


Or, you can have a decent website made by a single person. It's not that hard to learn basic HTML5, enough ARIA semantics to know not to use ARIA, a programming language with decent synchronisation primitives that supports CGI, an SQL dialect and the principles of relational database design, enough JavaScript to use MDN, enough CSS to use MDN, the basics of server maintenance, TLS certificate provision, and DNS.

If you want to do your own networking, or run email, that's a whole 'nother specialism; but just running a website is easy enough for one person to do.
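
To illustrate how small that stack can be, here's a minimal CGI sketch in Python (the script name and deployment are hypothetical; drop it in cgi-bin and mark it executable). The server passes the request in environment variables; the script prints headers, a blank line, then the body.

    #!/usr/bin/env python3
    import html
    import os
    from urllib.parse import parse_qs

    # The web server hands us the query string via the environment.
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]

    print("Content-Type: text/html; charset=utf-8")
    print()  # blank line ends the headers
    print(f"<p>Hello, {html.escape(name)}!</p>")  # escape: never trust input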


In the 1990s maybe; in the 1980s, hardly. :)


Good, that's the way it was until the splitting of roles for commodification. A programmer is more like the Renaissance man who makes it a goal to do everything from different disciplines than a drone who has been trained to do one thing and can only be trusted to do one thing.


It's not commodification, it's acknowledging that tech got exponentially more complex over the decades.

Just think of your favorite video game character in 2000 and then one in the 2020s, and consider how much tech is needed to render, animate, light, and conceptualize it. In 2000 this was all done by maybe one artist and one gamedev, probably making a character with some hundreds of polys at best. Now that artist has a pipeline of riggers, material artists, animators, and concept artists, while that single dev became a graphics programmer, gameplay programmer, tech artist, and build engineer.


My point was that they split the roles unnecessarily. An artist can cover concept and materials. A programmer can do gameplay, graphics, rendering, and builds. In fact, having people who understand the entire project makes for a better project.

It's like moving from custom-built cars to the assembly line, where someone's job is putting in one screw. I understand it's cheaper and faster because you can hire anyone unskilled for cheap, but cars were all supposed to be identical. Software should be unique (if not, just copy the last thing built), but I guess when it comes to major games, things are more of a factory throwing millions of pixels of characters at existing game engines while copying the gameplay of successful games. That's why games are shovelware these days, like a Netflix original.


But we now expect a single person to design the engine and the bodywork, both aesthetically and technically, actually make all those parts, assemble them, paint the car, and test it.

Jack of all trades, master of none. This is why we need clusters, "stacks", and "clouds" on the server side and gigabytes of RAM on the client side, plus many megabytes transferred, just to show one simple weather forecast website that gives the user the same amount of information as a WAP site did on a five-line mobile phone back in GPRS times.


Sure, let me explain it a bit better. The point is that the "stack" is very deep now. Clearly, we have and hire Xen/hypervisor specialists, and we do not ask them to be CSS experts. However, the deeper in the stack (the lower the level), the harder it is to find them, because of the lack of expertise coming out of universities and/or the limited appeal of such jobs.

And if you find or train those low-level/system-oriented people, they also need to understand how a feature they build will be exposed functionally to a user (and why users need it in the first place), because features aren't built into thin air; they are required to work in a bigger picture (i.e., the product).


It feels like we're back in 1900 when anyone's clever idea (and implementation) can give huge performance improvements, such as Ford's assembly line and Taylor's scientific management of optimizing shovel sizes for coal.


Yes, and it also feels like we are going to lose our just-in-time global shipments of anything to anywhere any day now. It will soon feel like 1900 in other ways.


Hope we don't get 1914 again, too.


We’ll have to raise our own chickens too…


I'm surprised there are no UTF-8-specific decode instructions yet, the way ARM has "FJCVTZS - Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero"
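
For reference, this is roughly the bit-twiddling such an instruction would hide, sketched in Python (my sketch; validation of overlong encodings, surrogates, and stray continuation bytes is omitted):

    def decode_one(buf: bytes, i: int):
        """Decode one code point starting at buf[i]; return (codepoint, width)."""
        b0 = buf[i]
        if b0 < 0x80:  # 0xxxxxxx
            return b0, 1
        if b0 < 0xE0:  # 110xxxxx 10xxxxxx
            return ((b0 & 0x1F) << 6) | (buf[i + 1] & 0x3F), 2
        if b0 < 0xF0:  # 1110xxxx 10xxxxxx 10xxxxxx
            return (((b0 & 0x0F) << 12) | ((buf[i + 1] & 0x3F) << 6)
                    | (buf[i + 2] & 0x3F)), 3
        # 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
        return (((b0 & 0x07) << 18) | ((buf[i + 1] & 0x3F) << 12)
                | ((buf[i + 2] & 0x3F) << 6) | (buf[i + 3] & 0x3F)), 4

    print(decode_one("€".encode("utf-8"), 0))  # (8364, 3): U+20AC in 3 bytes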


FJCVTZS isn't really as specific to JavaScript as the name suggests; it actually copies the semantics of an x86 instruction, which JS took its semantics from.

JS runtimes do use it, but it's useful anywhere you need to do what x86 does, which obviously includes running x86 binaries under emulation.
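
The semantics in question are essentially JS's ToInt32: truncate toward zero, wrap modulo 2^32 into signed range, and map NaN/infinity to 0, where an ordinary conversion would saturate or fault. A rough Python model (mine):

    import math

    def to_int32(x: float) -> int:
        if math.isnan(x) or math.isinf(x):
            return 0  # NaN and infinities become 0
        n = int(x) & 0xFFFFFFFF  # truncate toward zero, wrap mod 2^32
        return n - 0x100000000 if n >= 0x80000000 else n  # reinterpret as signed

    print(to_int32(3.9))           # 3
    print(to_int32(-3.9))          # -3
    print(to_int32(2.0 ** 31))     # -2147483648 (wraps instead of saturating)
    print(to_int32(float("nan")))  # 0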


