
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.

I don't know about the gigawatts needed for future training, but this comparison of prompts with plane trips looks wrong. Even issuing a prompt every second for 24 hours amounts to only about 2.6 kg of CO₂ on the average Google LLM evaluated here [1], while typical flight emissions are around 250 kg per passenger per hour [2]. To match even one hour of flying per day you would need something like 100 agents prompting once a second in parallel, which is quite a serious scale (rough arithmetic below).

[1] https://cloud.google.com/blog/products/infrastructure/measur...

[2] https://www.carbonindependent.org/22.html
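Back-of-envelope sketch of that comparison in Python. The ~0.03 g CO₂e per prompt is simply what the 2.6 kg/day figure implies, and 250 kg per passenger-hour is the flight number from [2]; both are the comment's inputs, not independent measurements.

    # Back-of-envelope check of the figures above (the comment's inputs, not authoritative data)
    PROMPT_CO2_G = 0.03            # ~g CO2e per prompt, implied by the 2.6 kg/day figure [1]
    FLIGHT_CO2_KG_PER_HOUR = 250   # kg CO2 per passenger per hour of flying [2]

    prompts_per_day = 24 * 60 * 60                               # one prompt per second, all day
    daily_prompt_co2_kg = prompts_per_day * PROMPT_CO2_G / 1000
    print(f"1 prompt/s for 24 h: {daily_prompt_co2_kg:.1f} kg CO2")   # ~2.6 kg

    # Number of such agents needed to match a single hour of flying per day
    agents = FLIGHT_CO2_KG_PER_HOUR / daily_prompt_co2_kg
    print(f"agents to equal 1 flight-hour/day: {agents:.0f}")         # ~100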





Lots of things to consider here, but mostly that is not the kind of prompt you would use for coding. Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.

Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.


The report that the global AI footprint is already at 8% of aviation's footprint [1] is indeed rather alarming and surprising.

Research on this (is it mainly due to training? inefficient implementations? vibe coders, as you say? other industrial applications? can we verify it against the number of GPUs made or the money spent? etc.) is truly necessary, and the top companies must not be allowed to remain opaque about it.

[1] https://www.theguardian.com/technology/2025/dec/18/2025-ai-b...


The nature of these AIs is generally such that you can always throw more computation at the problem. Bigger models are the obvious route, but as I hinted earlier, a lot of current research goes more towards making various subqueries than towards making the models even bigger. In any case, for now the predominant factor in how much compute a given prompt costs is how much compute someone decided to spend. So obviously, if you pay for the "good" models there will be a lot more compute behind them than if you prompt a free model.

> Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.

People who do that are <0.1% of those who use GenAI when coding. It doesn't create anything usable in production. "Ingesting an entire codebase" isn't even possible once you go beyond absolute toy sizes, and even when it is, the context pollution generally worsens results on top of making the calls very slow and expensive.

If you're going to talk about those people, you should be comparing them with private jet trips (which are of course many orders of magnitude worse than even those "vibe coders").



