
A good moment to bring up the Jevons paradox (https://en.wikipedia.org/wiki/Jevons_paradox)

"In economics, the Jevons paradox occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced. Governments typically assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising"

This is worth raising because there's a huge crowd that seems very convinced the solution to our resource problems is to be found in various growth- and tech-oriented approaches.
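A minimal sketch of the mechanism, assuming a constant-elasticity demand curve and made-up numbers (nothing here is empirical):

    # Toy Jevons-paradox model: constant-elasticity demand Q = k * P^(-e).
    # All numbers are illustrative assumptions, not estimates.
    k, elasticity = 100.0, 1.5        # elastic demand (e > 1)

    def total_resource_use(efficiency_gain):
        resource_per_use = 1.0 / efficiency_gain   # each use needs less
        price_per_use = resource_per_use           # cost tracks resource input
        uses = k * price_per_use ** -elasticity    # cheaper -> more uses
        return uses * resource_per_use

    print(total_resource_use(1.0))   # baseline: 100.0
    print(total_resource_use(2.0))   # 2x efficiency: ~141.4 -- total use rose

With elastic demand (e > 1), every efficiency gain increases total consumption; with inelastic demand it would fall, which is the crux of the paradox.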



AI isn't the most efficient way to retrieve accurate information or generate quality content; it's the most efficient way to retrieve plausible information and generate poor-quality art and code. And that's really interesting.

There is a group of people who trust LLMs for all searches, but most tech-literate people know they hallucinate, understand roughly how an LLM works, and know that hallucinations are indistinguishable from the truth to the model. And so we still Google things and read books.

The same applies to AI-generated art, music, voiceovers, and code. They are technically impressive, but rarely meet client expectations in any industry. And so most artists and programmers still produce the work themselves.

AI is fantastic for low-quality content, and this explosion of AI is emblematic of how people often lack a filter for the quality of information they put in their brains. This isn't new; YouTube has always had far more low-quality content than high-quality work, probably 1000:1 if not a starker ratio. However, AI shows the scale like no technology has shown it before (maybe some people at Google or Bing knew this, but not so publicly). Almost everyone consumes AI work: blogs, movie posters, art, music, and code. The market is massive, but all AI does is reproduce the existing work it was loss-minimized on, and the loss of quality is mathematically necessary unless the model is not a neural net but a database of literally the entire dataset itself being searched.

So this is very interesting. When did we become so okay with feeding ourselves a fast-food diet of content? Isn't that causing more harm than good? The scale of this phenomenon is truly (and literally) industrial. I can hardly imagine what modern life would look like if people turned to the internet, let's say, only to look up facts and information they're actually seeking. We are so far from that. And AI both illustrates it and exacerbates it.


I see people frame generative AI this way a lot, as if it's strictly worse than the alternatives in all dimensions (inaccurate, low quality). But I think there are several important dimensions where it's strictly better, including flexibility, interactivity, and accessibility. I can ask an LLM a very specific question about my specific development environment, codebase, and tools and get a specific answer back that works. If it doesn't work, I can chat with the LLM about what happened to make changes. I don't even reach for LLMs that often compared to most engineers these days (maybe a couple times a month), but when I do, it's basically always the fastest and best way to deal with my particular situation.


I think a lot of people drastically underestimate how much LLMs mislead the user with plausible-sounding answers. I would say that whatever percentage of queries you found to return incorrect answers, the actual percentage is materially higher.

In my work niche, which at least 10,000 other software engineers work in each day, ChatGPT 4 and 4o almost never give correct answers. Usually, they are misleading. I do find myself trying the LLMs when I'm faced with a challenging problem, but in that scenario they have not been helpful once.

Granted, there are areas of work that are much more popular and LLMs will be more helpful there. But these are individual scenarios, and we can find many where LLMs are fantastic and many where LLMs are awful if we wanted to cherry-pick.

Overall, the quality of the content is lower than what a human professional would produce in their area of work, including on flexibility, interactivity, and accessibility. This extends to books written by professionals and lectures given by them, as well as work carried out by them. It applies to home carpentry as much as it does to neurosurgery.

The more specific the knowledge has to be, the more this is true. The more generic, the less. But all-in-all, I think it's still very evident humans produce higher quality knowledge and content. And any quantized model of that content and knowledge will be unable to reproduce it at the same fidelity or quality.

With that said, I hope I expressed this enough — I do see your point in some circumstances. Just not overall.


It feels like you mostly view generative AI as a system that produces finished products, which you don’t like. And for some reason that is how they are marketed, but it’s a terrible use for them. To me the point of generative AI is as a tool for an expert user to use en route to achieving some larger goal. I can personally attest that it works really well for me in this regard, as can many others. I think if I tried to use LLMs as surrogate experts I would be very frustrated, since they are totally unready for that purpose.


This conversation has drifted slightly off-topic. It was originally about the efficiency of producing good quality outputs akin to products or services with generative AI.

I work in an industry adjacent to generative AI, so my views are very specific, and kind of beside the point. But I think the broader public does see it the way you describe. Much (and probably most) of the AI-generated content that ends up on the internet is barely changed by a human at all.

I am not saying gen AI is not useful, but that it's not efficient in the process of making high quality content. It's simply not steerable enough in practice. In creative industries, people are going pretty wild about how much work AI can replace, but all I've seen is mediocrity and failure when it is involved. Or frustration, as you say, that what it outputs is very difficult to turn into a high-quality product.


I agree. The state of the art of gen AI systems is well below a typical human expert in every domain. Here 'human expert' is well below the bar of 'world class', more like 'someone with 2-5 years of professional experience in that area.' So, if you have access to someone like that, it's a no-brainer to use their work instead of a gen AI system.

The interesting use case which has emerged is that there are a lot of times where it would be really helpful to me to have a short conversation with an expert on a topic adjacent to my own expertise. And it turns out that for those conversations, talking to ChatGPT is much better than talking to no one; it can help me with the kind of things someone would learn in the first few months on the job in that area, things a little too hard to google but where a human expert is not readily available.

I think this is the best, maybe the only, professional use case for GenAI right now -- advice and limited assistance in areas just outside your area of expertise, such that you don't need to depend directly on its output and can easily check/integrate the work.


I concur. I'm not a huge user of AI, but it does make a good search tool when my search term is verbose.

For example, I was looking for the name of a Windows API command. I "knew" the command must exist, but didn't have a clue what it was called. I asked ChatGPT for an example program, and there's the name of the API. (Which I can then Google for docs.)

I also had a complicated-to-ask question about sun movement which it explained to me, along with site links to actual data.

I'm not using it as a Google replacement, but more of a Google supplement when the question is long-winded to write.


The positive side is supercomputers help us analyze the climate, the Atlantic current, and so forth. The negative side is they also help locate oil. Is it a win or loss on net? Hard to know.

Thinking of entertainment, it probably IS a climate win if you spend an hour at home watching Netflix or chatting with GPT, as opposed to driving around town or jetting across the world. Supposedly a GPT-4 query costs 0.01 kWh; meanwhile, a Tesla consumes 0.35 kWh a minute at freeway speed.
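Taking both figures at face value (the 0.01 kWh number is questioned downthread), an hour-for-hour comparison works out roughly like this; the ~30 queries per hour chat rate is my own assumption:

    # Back-of-envelope check using the two figures quoted above:
    # 0.01 kWh per GPT-4 query, 0.35 kWh per minute of freeway driving.
    kwh_chat_hour = 30 * 0.01              # ~30 queries/hour: 0.3 kWh
    kwh_drive_hour = 60 * 0.35             # 60 minutes: 21.0 kWh
    print(kwh_drive_hour / kwh_chat_hour)  # driving uses ~70x more energy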


I would be very interested in the breakdown of how they conclude it's 0.01 kWh (10 Wh) per request, and what that does and doesn't include.

I expect that if you were to calculate "incremental energy per request" - how much "extra CPU compute" each request adds, you could probably get to that sort of value.

But odds are good that figure ignored all the training data collection, all the processing on that, storage of that digested information, retrieval of it, etc, and that sort of number tends to also skip things like "storage systems running to have the information available."

If I've got an entire datacenter running to provide services, and the request consumes, say, 3 GPU-minutes of time across all the nodes, sure. This is a sane value. It just ignores a lot of the other resources dedicated to the task at various points.
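As a rough sanity check, a figure like that is easy to reproduce from marginal compute alone (the power draw here is my assumption, not a measured number):

    # 3 GPU-minutes at an assumed ~200 W per GPU:
    gpu_watts = 200.0
    gpu_minutes = 3.0
    print(gpu_watts * gpu_minutes / 60.0)  # 10.0 Wh -- matches the quoted
                                           # figure, marginal compute only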

Microsoft isn't adding 30% to its current energy footprint just to serve 10 Wh/request AI answers.

Math like this is very much a "Tell me what answer you'd like, and I'll make it work out!" sort of scenario. A 30% increase in data center use is harder to fudge.


To put your follow-up question differently:

'Is 10 Wh per request the marginal cost of a request, or does it factor in the fixed energy cost of the whole facility? Does it include training costs?'

I'm inclined to lean towards it including at least some fixed costs. It seems rather high to be marginal cost. I have no gut feel for training costs though.

So, assuming it does include at least some fixed cost, using it more will reduce 'cost per use' while at the same time driving up actual consumption.
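A quick sketch of that effect, with every number invented for illustration:

    # Amortized cost per request falls with volume even as total
    # consumption rises. All figures are made up.
    fixed_kwh_per_day = 1_000_000.0   # training amortization, idle capacity
    marginal_kwh = 0.002              # energy one extra request actually adds

    for requests in (10**8, 10**9, 10**10):
        total = fixed_kwh_per_day + marginal_kwh * requests
        wh_each = 1000 * total / requests
        print(f"{requests:>14,} req/day: {wh_each:4.1f} Wh/req, "
              f"{total:>12,.0f} kWh/day total")
    # 12.0 -> 3.0 -> 2.1 Wh per request, while total energy keeps climbing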


> it probably IS a climate win if you spend an hour at home watching Netflix or chatting with GPT, as opposed to driving around town or jetting across the world.

Nope.

Look back to the paradox the GP raised, and the energy consumed watching even just the single most popular YouTube video...

That isn't "energy saved" because "otherwise people would be flocking miles in cars to watch Despacito and Baby Shark Dance in theatres."

The AI training loads are over and above the already existing supercomputer modelling of land|sea|air fluid flows via regular means; it's questionable whether LLMs et al. even add anything of value in that domain (despite a plethora of papers asserting it to be so).


A good reminder to everyone that global CO2 emissions are higher than they have ever been [0]. Again: 2024 was the most polluting year in history.

A lot of feel-good articles and tweets will often talk about the plummeting costs of solar, or how the share of renewables is higher than ever, or stuff like that, which is, from a global warming point of view, absolutely irrelevant.

The only thing that matters as far as warming is concerned is how much CO2 there is in the air. The cumulative number. It doesn't matter if the growth is slowing; it doesn't matter if there's more solar than ever, and so on.

So yeah, we're producing more clean energy than ever, but we're also using more carbon than ever.

[0]: https://ourworldindata.org/grapher/annual-co2-emissions-per-...


The slope of new clean energy coming online is also important.

A company may be losing money today, but if they are growing revenue on a path to profitability, that is what ultimately matters.


No. You're right about the company case: when you look at a company, you only care about its profit today (and in the future). You don't really care about how much money it spent to get here.

But for purposes of global warming, you care about the total amount of carbon in the air. The cumulative sum. The integral.

If a company becomes profitable today, it survives.

If emissions were to magically stop today...warming continues!
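The distinction is easy to see with toy numbers:

    # Even if annual emissions fall to zero, the cumulative total --
    # the number warming responds to -- never goes down. Toy figures.
    annual = [40, 38, 35, 30, 20, 10, 5, 0]   # Gt CO2 per year
    cumulative = 0
    for emitted in annual:
        cumulative += emitted
        print(f"annual {emitted:2d}  cumulative {cumulative}")
    # cumulative: 40, 78, 113, 143, 163, 173, 178, 178 -- monotone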


I saw the point of the carbon tax as flattening emissions to a plateau more than stopping CO2 emission growth. I never really had any delusions about that being politically viable; I can't even convince most people around me that it's worth putting a price on carbon that is several times less than its actual impact on the environment and economy.


Relieving some latent demand is a great thing, especially when you're the one needing that last new hospital bed, traffic lane, or supercomputer time slot to predict who the hurricane endangers. More throughput is wonderful. When in doubt, it's best to endorse meeting demand rather than judging it unworthy.


It's either that or population control / reduction, so technology would be the better choice.



