
This is certainly possible: Break up large functions, and then the bounded maximum function size can be your O(1) chunk size, to be processed in a streaming manner without dynamic allocation.
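For illustration, here is a toy of that streaming shape in C (everything here is made up - it treats each input line as one "function" - but it shows how peak memory stays bounded by the largest single function rather than by the whole module):

    #include <stdio.h>

    int main(void) {
        /* The bounded O(1) chunk: the maximum allowed function size. */
        char func[65536];
        /* Stream functions one at a time; the whole "module" is never
           held in memory at once. */
        while (fgets(func, sizeof func, stdin)) {
            /* Stand-in for a real per-function ("local") optimization. */
            for (char *p = func; *p; p++)
                if (*p == '\t')
                    *p = ' ';
            fputs(func, stdout);
        }
        return 0;
    }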

However, breaking up huge functions (or skipping optimizations on them) will lead to missed opportunities. And LTO-style optimizations, where the entire program is taken into account, can be very important as well (as a concrete example, we see huge benefits from doing that in wasm-opt for Wasm GC).

Still, it's a nice idea, and maybe it can make 80% of compiler passes a lot faster!


Saying people shouldn't create open source code because AI will learn from it is like saying people shouldn't create art because AI will learn from it.

In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.


> In both cases I get the frustration - it feels horrible to see something you created be used in a way you think is harmful and wrong! - but the world would be a worse place without art or open source.

Well maybe the AI parasites should have thought of that.


The first chart in your link doesn't show "flat" usage until 2022? It is clearly rising at an increasing rate, and it more than doubles over 2014-2022.

It might help to look at global power usage, not just the US, see the first figure here:

https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...

There isn't an inflection point around 2022: it has been rising quickly since 2010 or so.


I think you're referring to Figure ES-1 in that paper, but that's kind of a summary of different estimates.

Figure 1.1 is the chart I was referring to; it shows the data points from the original sources that the paper uses.

Between 2010 and 2020, it shows a very slow linear growth. Yes, there is growth, but it's quite slow and mostly linear.

Then the slope increases sharply. And the estimates after that point follow the new, sharper growth.

Sorry, when I wrote my original comment I didn't have the paper in front of me; I linked it afterwards. But you can see that distinct change in rate at around 2020.


ES-1 is the most important figure, though? As you say, it is a summary, and the authors consider it their best estimate, hence they put it first, and in the executive summary.

Figure 1.1 does show a single source from 2018 (Shehabi et al) that estimates almost flat growth up to 2017, that's true, but the same graph shows other sources with overlap on the same time frame as well, and their estimates differ (though they don't span enough years to really tell one way or another).


I still wouldn't say your assertion that data center energy use was fairly flat until 2022 holds. Even Figure 1.2, for global data center usage, tracks more in line with the estimates in the executive summary. It just looks like run-of-the-mill exponential growth at the same rate since at least 2014, a good amount of time before genAI was used heavily.

Going by Yahoo's historical price data, Bitcoin prices were first tracked in late 2014. So my guess would be that the increase from then to 2022 could largely be attributed to crypto mining.

The energy impact of crypto is rather exaggerated. Most estimates on this front aim to demonstrate as high a value as possible, and so should be taken as an upper bound, and yet even that upper bound is 'only' around 200 TWh a year. Annual global electricity consumption is in the 24,000 TWh range (so crypto is roughly 200 / 24,000, or about 0.8%, of it), with growth averaging around 2% or so per year.

So if you looked at a graph of energy consumption, you wouldn't even notice crypto. In fact even LLM stuff will just look like a blip unless it scales up substantially more than it's currently trending. We use vastly more energy than most appreciate. And this is only electrical energy consumption. All energy consumption is something like 185,000 TWh. [1]

[1] - https://ourworldindata.org/energy-production-consumption


It looks like the number of internet users ~doubled in that time as well: https://data.worldbank.org/indicator/IT.NET.USER.ZS?end=2022...

Looks like those sizes could be improved significantly, as the builds include names etc. I would suggest linking with

    emcc -O3

(and maybe even adding --closure 1 )
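For example, a complete link command might look like this (the file names here are just placeholders):

    emcc -O3 --closure 1 quickjs.o -o quickjs.js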

edit: actually the QuickJS playground already looks optimized - just the MicroQuickJS one could be improved.


Nice. Got it down from 229KB to 148KB! Thanks for the tips.

https://github.com/simonw/research/pull/5

That's now live on https://tools.simonwillison.net/microquickjs


Thanks for sharing! The link to the PR looks like a wrong paste - I found https://github.com/simonw/tools/pull/181, which seems to be what was intended instead.

> features no one needed (GC)

WasmGC is absolutely necessary for languages like Dart, Kotlin, and Java, which are all using it right now successfully.

But I get that if you're compiling C or Rust then it might seem like it isn't useful.


Your general point stands - wasm's original goal was mainly sandboxing - but

1. Wasm does provide some amount of memory safety even to compiled C code. For example, the call stack is entirely protected. Also, indirect calls are type-checked, etc.

2. Wasm can provide memory safety if you compile to WasmGC. But, you can't really compile C to that, of course...


Correct me if I'm wrong, but with LLVM on Wasm, I think casting a function pointer to the wrong type will result in you calling some totally unrelated function of the correct type? That sounds like the opposite of safety to me.

I agree about the call stack, and don't know about GC.


That is incorrect about function pointers: The VM does check that you are calling the right function type, and it will trap if the type does not match.

Here it is in the spec:

> The call_indirect instruction calls a function indirectly through an operand indexing into a table that is denoted by a table index and must have type funcref. Since it may contain functions of heterogeneous type, the callee is dynamically checked against the function type indexed by the instruction’s second immediate, and the call is aborted with a trap if it does not match.

From https://www.w3.org/TR/wasm-core-2/#syntax-instr-control

(Other sandboxing approaches, including related ones like asm.js, do other things, some closer to what you mentioned. But wasm has strict checking here.)
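If it helps to see it from the C side, here is a minimal sketch. Note this is undefined behavior in C, so you'd want to build without optimizations (e.g. emcc -O0) to keep the compiler from exploiting the UB, and the exact trap message varies by VM:

    #include <stdio.h>

    int add(int a, int b) { return a + b; }

    int main(void) {
        /* Deliberately cast to a mismatched signature. Compiled to Wasm,
           this call lowers to call_indirect, whose runtime type check
           traps instead of silently calling an unrelated function
           (in V8 the trap reads something like "function signature
           mismatch"). */
        int (*bad)(float) = (int (*)(float))add;
        printf("%d\n", bad(1.0f));
        return 0;
    }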


Hmm, I wonder if I was confusing emscripten's asm.js approach with its approach to Wasm. Thank you!


Some data on how badly he torched his consumer base: a Yale study says Tesla lost 1.26 million US sales due to Musk's politics.

https://www.usatoday.com/story/cars/news/2025/10/28/tesla-lo...


0% slower means "the same speed." The same number of seconds.

10% slower means "takes 10% longer." 10% more seconds.

So 45% slower than 2 seconds is 1.45 * 2 = 2.9 seconds.


The data here is interesting, but bear in mind it is from 2019, and a lot has improved since.


It might just be that we evolved it first. Someone has to (if anyone does).

