I don’t think this is true. Advancements in technology often make things possible that previously were not at any price. Engines, for example, are better than ever in part due to computer modeling that would have been impossible in the 70s. Same deal with aerodynamics, safety features, and a million other things. In the 70s, you couldn’t have those things for any price. They required decades of development in other sectors to open possibilities for automobiles.
Most technology on cars existed years or decades before it became commonplace and affordable enough to use outside racing or exotic cars.
Airbags were patented in the 1950s. Modern ABS in 1971. First electronic fuel injection in 1957. You could take the Formula 1 level technology of 1970 and, with enough money, apply it to a pickup truck. It would be shockingly expensive - and not as good.
That's my point! You are getting so much more for your dollar today, even though prices have risen faster than inflation. You are getting a multi-million-dollar truck for $50k.
> You are getting a multi-million dollar truck for $50k.
You’re not, though, because that truck never did and never could exist. A modern F-150 isn’t a 70s F1 car made cheap by new tech. This isn’t something you can wave away with an argument equivalent to “we put 1000 research points in the tech tree.”
When the US economy was working well, products got better and cheaper over time. Tech and increased labor productivity drove that. Now, tech and labor productivity have continued to increase, yet consumer prices have far outpaced inflation.
"A modern F-150 isn’t a 70s F1 car made cheap by new tech. "
Yes, it pretty much is. You have to consider that technology in cars is moving along two separate, distinct paths.
1: improving manufacturing processes, materials, quality, which is lowering prices over time. Megacasting aluminum car parts is an example.
2: Adding totally new complex parts and systems that cars didn't have. These are things like airbags, anti-lock brakes, infotainment systems, and catalytic converters. This adds to the total cost.
#2 is far outpacing #1, which is why prices of cars are going up faster than inflation, wages, etc.
Again, the old car comparison is demonstrably untrue. To put the same example forward, computer modeling has wildly changed and accelerated car design in ways that were impossible for any sum of money in the 70s.
I think part of why this is hard to believe is that people strongly believe in the concept that time is money. On the margins, for decisions like hiring someone to mow your lawn, it is true. For large-scale things, you often cannot accelerate the process no matter how much money you dump into it. A good example of this is how long it has taken China to industrialize.
To be clear also, you have to prove your point that #2 is outpacing #1. The fact that the price keeps going up is not proof as there are other explanations. The poor quality of domestic manufacturers and their bad business practices, for example.
The low n is not the only questionable thing about the study. What a big n gives you is diversity of samples and tighter confidence intervals, but it cannot correct for methodological limitations. Specifically, they didn't invite any people with sleep issues or who are already sleeping under noise. Therefore the conclusion is a "duh" - if you don't require pink noise to sleep, then don't add it.
The alternative is higher n. The study makes a claim, it does not present the evidence necessary to back up that claim. Until someone does a larger study, no conclusion should be drawn.
I'm not inviting you to draw conclusions from my semi-random (but informed by years of professional thought about why people like different sounds) anecdote.
Personally, I trust the results of a sleep study, or any study on anything, by people I don’t know with questionable incentives less than I do the anecdotes of commenters I’ve been following for 10 years on HN, especially when they align with my own experiences and conversations I’ve had over beers with people in industry (whatever that might be).
A lot of “science” is junk, not in the sense that it’s false, but in the sense that it’s as trivially true as "water is wet."
Good science: there are compounds in cruciferous vegetables that appear to exert some health benefits.
Junk science: bok choy is green.
If a sleep lab is ignoring the fact of chronotypes (it’s obvious our genetic history would require some people to be predisposed to keep an eye out for toothy, clawed things and dangerous ‘others’ while most of their tribe / community are sleeping), the people who work there do so because it pays the bills, not because they’re passionate about working in the medicine / health industry at all.
I encourage people to get up and walk out if you find yourself at a service provider that doesn’t care about you. Find someone who gives a frak.
Once again, I am not suggesting you generalize from my anecdote. I like the sound of rain and sleep better with it. I have absolutely no idea how widespread this is in the population.
Lockheed, Boeing, Northrop, Raytheon, and all the others are private companies, too. NASA and others generally go through contractors to build things. SpaceX is on the dole just like them.
The satellite is built on Earth, so I’m not sure how it dodges any of those regulations practically. Why not just build a fully autonomous, solar powered datacenter on Earth? I guess in space Elon might think that no one can ban Grok for distributing CSAM?
There’s some truly magical thinking behind the idea that government regulations have somehow made it cheaper to launch a rocket than build a building. Rockets are fantastically expensive even with the major leaps SpaceX made and will be even with Starship. Everything about a space launch is expensive, dangerous, and highly regulated. Your datacenter on Earth can’t go boom.
Truly magical thinking, you say? OK, let's rewind the clock to 2008. In that year two things happened:
- SpaceX launched its first rocket successfully.
- California voted to build high speed rail.
Eighteen years later:
- SpaceX has taken over the space industry with reusable rockets and a global satcom network, which by itself contains more than half of all satellites in orbit.
- Californian HSR has spent over thirteen billion dollars and laid zero miles of track. That's more than 2x the cost of the Starship programme so far.
Building stuff on Earth can be difficult. People live there; they have opinions and power. Their governments can be dysfunctional. Trains are 19th-century technology; it should be easier to build a railway than a global satellite network. It may seem truly magical, but putting things into orbit can, apparently, be easier.
That’s a strange comparison to make. Those are entirely different sectors and sorts of engineering projects. In this example, also, SpaceX built all of that on Earth.
Why not do the obvious comparison with terrestrial data centers?
Now how about procuring half a gigawatt when nearby residents are annoyed about their heating bills doubling, and are highly motivated to block you? This is already happening in some areas.
From an individual POV, yes, but Falcons are already not that expensive, in the sense that it is feasible for a relatively unimportant entity to buy their launch services.
"The satellite is built on Earth, so I’m not sure how it dodges any of those regulations practically."
It is easier to shop for jurisdiction when it comes to manufacturing, especially if your design is simple enough - which it has to be in order to run unattended for years. If you outsource the manufacturing to N chosen factories in different locations, you can always respond to local pressure by moving out of that particular country. In effect, you just rent time and services of a factory that can produce tons of other products.
A data center is much more expensive to build and move around. Once you build it in some location, you are committed quite seriously to staying there.
Besides what all these other commenters are saying, probably many of the people running these small lunch shops in Japan are the owners, not waged employees. On top of that, that business probably isn’t viable for 8 hours per day.
> If you were open every day of the year and assume no seasonality, that means your first 49 orders every day go just to regulatory fees.
This looks crazy because it is incorrect. In your premise, that 9% profit margin includes the regulatory costs for a brick and mortar restaurant already. The only way your logic works out is if truck regulations are on average $30k more expensive than a regular building, which they almost certainly are not.
You can’t even begin to do the calculation without knowing the breakdown underlying the profit margin you cite.
How is it not general knowledge? How do you otherwise gauge if your program is taking a reasonable amount of time, and, if not, how do you figure out how to fix it?
In my experience, which is series A or earlier data intensive SaaS, you can gauge whether a program is taking a reasonable amount of time just by running it and using your common sense.
P50 latency for a fastapi service’s endpoint is 30+ seconds. Your ingestion pipeline, which has a data ops person on your team waiting for it to complete, takes more than one business day to run.
Your program is obviously unacceptable. And your problems are most likely completely unrelated to these heuristics. You either have an inefficient algorithm, or more likely you are using the wrong tool (e.g., OLTP for OLAP) or the right tool the wrong way (bad relational modeling or an outdated LLM).
If you are interested in shaving off milliseconds in this context then you are wasting your time on the wrong thing.
All that being said, I’m sure that there’s a very good reason to know this stuff in the context of some other domains, organizations, company size/moment. I suspect these metrics are irrelevant to disproportionately more people reading this.
At any rate, for those of us who like to learn, I still found this valuable, but it is by no means common knowledge.
I'm not sure it's common knowledge, but it is general knowledge. Not all HNers are writing web apps. Many may be writing truly compute bound applications.
In my experience writing computer vision software, people really struggle with the common sense of how fast computers really are. Some knowledge, like how many nanoseconds an add takes, can be very illuminating for understanding whether their algorithm's runtime makes any sense. That may shake loose the realization that their algorithm is somehow wrong. Often I see people fail to put bounds on their expectations. Numbers like these help set those bounds.
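As a rough illustration (a sketch only; the absolute numbers are ballpark and machine-dependent), you can measure the per-iteration cost of a trivial Python loop and use it as a lower bound when sanity-checking an algorithm's runtime:

```python
import time

def ns_per_add(n=1_000_000):
    """Time a tight Python loop of integer adds; return ns per iteration."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i
    elapsed = time.perf_counter() - start
    return elapsed * 1e9 / n

cost = ns_per_add()
# A CPython loop iteration typically lands in the tens of nanoseconds -
# orders of magnitude above a native add, but still fast enough that a
# million-element pass should take milliseconds, not seconds.
print(f"~{cost:.0f} ns per loop iteration")
```

If your single-pass algorithm over a million items takes minutes, this bound tells you something other than raw arithmetic is dominating.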
You gauge with metrics and profiles, if necessary, and address as needed. You don’t scrutinize every line of code over whether it’s “reasonable” in advance instead of doing things that actually move the needle.
These are the metrics underneath it all. Profiles tell you what parts are slow relative to others and time your specific implementation. How long should it take to sum together a million integers?
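To make the question concrete (a hedged sketch; exact figures vary by machine and Python version), summing a million integers with the built-in `sum` should land in the low milliseconds:

```python
import timeit

data = list(range(1_000_000))

# timeit runs the callable `runs` times and reports total seconds;
# divide by the run count to get a per-call figure.
runs = 10
total = timeit.timeit(lambda: sum(data), number=runs)
per_call_ms = total / runs * 1e3

# A modern CPU does on the order of 1e9 simple ops per second, so even
# with interpreter overhead this should be well under a second per call.
print(f"sum of 1M ints: ~{per_call_ms:.2f} ms per call")
```

Knowing that baseline is what lets a profile's relative numbers mean anything: if your "sum a column" step takes seconds, the profile tells you where, but the baseline tells you that it is wrong at all.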
But these performance numbers are meaningless without some sort of standard comparison case. If you measure that, e.g., some string operation takes 100ns, how do you compare against the numbers given here? Any difference could be due to the machine, the Python version, or your implementation. So you have to do proper benchmarking anyway.
People generally aren’t rolling their own matmuls or joins or whatever in production code. There are tons of tools like Numba, Jax, Triton, etc that you can use to write very fast code for new, novel, and unsolved problems. The idea that “if you need fast code, don’t write Python” has been totally obsolete for over a decade.
If you are writing performance-sensitive code that is not covered by a popular Python library, don't do it unless you are a megacorp that can dedicate a team to writing and maintaining a library.
It isn’t what you said. If you want, you can write your own matmul in Numba and it will be roughly as fast as similar C code. You shouldn’t, of course, for the same reason handrolling your own matmuls in C is stupid.
Many problems can be solved performantly in pure Python, especially via the growing set of tools like the JIT libraries I cited. Even more will be solvable when things like free-threaded Python land. It will be a minority of problems that can't be, if it isn't already.