It is nice to hear someone so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, pushing up prices for people who need to, ya know, heat their homes and eat.
... not to mention that most of the time, what AI produces is unmitigated slop and factual mistakes, deliberately coated in dopamine-infusing brown-nosing. I refuse for my position, even profession, to be debased to AI slop reviewer.
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)
I truly don’t understand this tendency among tech workers.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job, it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
The problem is that it's reached a tipping point. Comparing Excel to GenAI is just bad faith.
Are you not reading the writing on the wall? These things have been going on for a long time, and finally people are starting to wake up to the fact that it needs to stop. You can't treat people in inhumane ways without eventual backlash.
1. Many tech workers viewed the software they worked on in the past as useful in some way for society, and thus worth the many costs you outline. Many of them don't feel that LLMs deliver the same amount of utility, and so they feel it isn't worth the cost. Not to mention, previous technologies usually didn't involve training a robot on all of humanity's work without consent.
2. I'm not sure the premise that it's just another tool of the trade for one to learn is shared by others. One can alternatively view LLMs as automated factory lines are viewed in relation to manual laborers, not as Excel sheets were to paper tables. This is a different kind of relationship, one that suggests wide replacement rather than augmentation (with relatively stable hiring counts).
In particular, I think (2) is actually the stronger of the reasons tech workers react negatively. Whether it will ultimately be justified or not, if you believe you are being asked to effectively replace yourself, you shouldn't be happy about it. Artisanal craftsmen weren't typically the ones also building the automated factory lines that would come to replace them (at least to my knowledge).
I agree that no one really has the right to act morally superior in this context, but we should also acknowledge that the material circumstances, consequences, and effects are in fact different in this case. Flattening everything into an equivalence is just as intellectually sloppy as pretending everything is completely novel.
Copyright was an evil institution to protect corporate profits until people without any art background started being able to tap AI to generate their ideas.
Copyright did evolve to protect corporations. Most of the value from a piece of IP is extracted within the first 5-10 years, so why do we have "author's life + a bunch of years" terms? Because it's no longer about making sure the author can live off their IP; it's so corporations can hire artists for pennies (compared to the value they produce for the company) and leech off that for decades.
So let us compare AI to aviation. Globally, aviation accounts for approximately 830 million tons of CO₂ emissions per year [1]. If you power your data centre with quality gas power plants, you will emit 450 g of CO₂ per kWh of electricity consumed [2]; that is 3.9 million tons per year for a 1 GW data centre. So depending on the power mix, it will take somewhere around 200 GW of data centres for AI to "catch up" to aviation. I have a hard time finding any numbers on current consumption, but if you believe what the AI folks are saying, we will get there soon enough [3].
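The arithmetic above can be sketched out explicitly (figures taken straight from the sources I cited; this assumes the data centre runs at full load year-round, which is a simplification):

```python
# Back-of-envelope: gas-powered data centres vs. global aviation.
# All input figures are from the sources cited above, not independently verified.
AVIATION_TONS_CO2_PER_YEAR = 830e6   # global aviation [1]
GAS_KG_CO2_PER_KWH = 0.45            # gas power plant emissions [2]
HOURS_PER_YEAR = 24 * 365

# Annual emissions of a 1 GW data centre at full load (1 GW = 1e6 kW)
kwh_per_year = 1e6 * HOURS_PER_YEAR
tons_per_gw_year = kwh_per_year * GAS_KG_CO2_PER_KWH / 1000
print(f"{tons_per_gw_year / 1e6:.1f} Mt CO2 per GW-year")   # ~3.9 Mt

# Data centre capacity needed to match aviation's annual footprint
gw_to_match = AVIATION_TONS_CO2_PER_YEAR / tons_per_gw_year
print(f"{gw_to_match:.0f} GW")                              # ~210 GW
```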
As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly between types of prompts, choice of model and number of prompts. But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use so that you end up doing a worse job, while having worse job satisfaction, and spending extra resources, it is just a Kinder egg of bad.
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
I don't know about the gigawatts needed for future training, but this sentence comparing prompts with plane trips looks wrong. Even making a prompt every second for 24 hours amounts only to 2.6 kg of CO2 on some average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So it would take parallelization across 100 or so agents, each prompting once a second, to match that, which is quite a serious scale.
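For reference, the back-of-envelope math (the ~0.03 g CO2e per median prompt figure is an assumption read off [1]; the flight figure is from [2]):

```python
# One prompt per second, all day, vs. one passenger-hour of flying.
# Per-prompt figure (~0.03 g CO2e, median Gemini-class prompt) is taken
# from source [1] above; flight figure from [2]. Both are assumptions.
G_CO2_PER_PROMPT = 0.03
PROMPTS_PER_DAY = 24 * 60 * 60        # one prompt every second for 24h

kg_from_prompts = PROMPTS_PER_DAY * G_CO2_PER_PROMPT / 1000
print(f"{kg_from_prompts:.1f} kg CO2 per day of prompting")   # ~2.6 kg

KG_CO2_PER_FLIGHT_HOUR = 250          # per passenger [2]
agents = KG_CO2_PER_FLIGHT_HOUR / kg_from_prompts
print(f"~{agents:.0f} parallel agents to match one flight hour")
```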
Lots of things to consider here, but mostly that is not the kind of prompt you would use for coding. Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.
Basic "ask a question" prompts indeed probably do not cost all that much, but they are also not particularly relevant in any heavy professional use.
The report that the global AI footprint is already at 8% of aviation's footprint [1] is indeed rather alarming and surprising.
Research on this (is it mainly due to training? inefficient implementations? vibe coders, as you say? other industrial applications? can we verify this by the number of GPUs made or money spent? etc.) is truly necessary, and the top companies must not be allowed to remain non-transparent about it.
The nature of these AIs is generally such that you can always throw more computation at the problem. Bigger models is obvious, but as I hinted earlier a lot of the current research goes more towards making various subqueries than making the models even bigger. In any case, for now the predominant factor determining how much compute a given prompt costs is how much compute someone decided to spend. So obviously if you pay for the "good" models there will be a lot more compute behind it than if you prompt a free model.
> Serious vibe coders will ingest an entire codebase into the model, and then use some system that automates iterating.
People who do that are <0.1% of those who use GenAI when coding. It doesn't create anything usable in production. "Ingesting an entire codebase" isn't even possible when going beyond absolute toy size, and even when it is, the context pollution generally worsens results on top of making the calls very slow and expensive.
If you're going to talk about those people, you should be comparing them with private jet trips (which, of course, are many orders of magnitude worse than even those "vibe coders").
When they stopped measuring compute in TFLOPS (or any deterministic compute metric) and started using Gigawatts instead, you know we're heading in the wrong direction.
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.
I'm fairly certain that your math on this is orders of magnitude off unless you define "prompting all day" in a very non-standard way yet aren't doing so for plane trips, and that 99% of people who "prompt all day" don't even amount to 0.1 plane trip per year.
That’s interesting. Why do you think this is worth taking more seriously than Musk’s repeated projections for Mars colonies over the last decade? We were supposed to have one several times over by this point.
Because we know how much power it's actually going to take? Because OpenAI is buying enough fab capacity and silicon to spike the cost of RAM 3x in a month? Because my fucking power bill doubled in the last year?
Those are all real things happening. Not at all comparable to Muskan Vaporware.
I suspect people talk about natural resource usage because it sounds more neutral than what I think most people are truly upset about -- using technology to transfer more wealth to the elite while making workers irrelevant. It just sounds more noble to talk about the planet instead, but honestly I think talking about how bad this could be for most people is completely valid. I think the silver lining is that the LLM scaling skeptics appear to be correct -- hyperscaling these things is not going to usher in the (rather dystopian looking) future that some of these nutcases are begging for.
Let's be careful here. It's generally a good idea to congratulate people for changing their opinion based on evolving information, rather than lambast them.
(Not a tech worker, don't have a horse in this race)
> The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
It's similar to the Trust Thermocline. There's always been concern about whether we were doing more harm than good (there's a reason jokes about the Torment Nexus were so popular in tech). But recent changes have made things seem more dire and broken through the Harm Thermocline, or whatever you want to call it.
Edit: There's also a "Trust Thermocline" element at play here too. We tech workers were never under the illusion that the people running our companies were good people, but there was always some sort of nod to greater responsibility beyond the bottom line. Then Trump got elected and there was a mad dash to kiss the ring. And it was done with an air of "Whew, now we don't have to even pretend anymore!" See Zuckerberg on the right-wing media circuit. And those same CEOs started talking breathlessly about how soon they wouldn't have to pay us, because it's super unfair that they have to give employees competitive wages. There are degrees of evil, and the tech CEOs just ripped the mask right off. And then we turn around and a lot of our coworkers are going "FUCK YEAH!" at this whole scenario. So yeah, while a lot of us had doubts before, we thought that maybe there was enough sense of responsibility to avoid the worst, but it turns out our profession really is excited for the Torment Nexus. The Trust Thermocline is broken.
Well said. AI makes people feel icky, that’s the actual problem. Everything else is post-rationalisation they add because they already feel gross about it. Feeling icky about it isn’t necessarily invalid, but it’s important for us to understand why we actually like or dislike something so we can focus on any solutions.
> it’s important for us to understand why we actually like or dislike something
Yes!
The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not mean at all that it's a human. And instead of disabusing users of this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.
The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.
> so we can focus on any solutions
Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?
I believe that’s the main reason why you dislike AI, but I believe if you asked everyone who hated AI many would come up with different main reasons why they dislike it.
I doubt that solution would work very well, even though it’s well intentioned. It’s too easy to work around it, especially with text. But at least it’s direct, as really my main point is we need to sidestep the emotional feelings we have about AI and actually present cold hard legal or moral arguments where they exist with specific changes requested or be dismissed as just hating it emotionally.
> They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling
Maybe this will force humans to raise their game and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem. Every politician and advertiser takes advantage of this. Reams of philosophy have been written on this problem.
> I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
You are right, hence the downvotes, but I still see the current outcry as positive.
I appreciate this and many of the other perspectives I’m encountering in the replies. I agree with you that the current outcry is probably positive, so I’m a little disappointed in how I framed my earlier comment. It was more contrarian than necessary.
We tech workers have mostly been villains for a long time, and foot stomping about AI does not absolve us of all of the decades of complicity in each new wave of bullshit.
It still feels like you haven’t absorbed their absolutely valid point that you may be hating first and coming up with rationalisations afterwards. There’s a more rational way to tackle this.
Do people really need to be more rational about this than AI itself?
Or has the bar been lowered in such a way that makes different people regard it as unsavory in different ways that wouldn't happen if everyone was more rational across-the-board?
The ick is human nature against the uncanny valley, some fear of change, and SOME actual valid points and concerns morally and legally. You’ll only not be dismissed as a Luddite if you focus on the last one only.
I don't feel it's immoral, I just don't want to use it.
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it and I will not waste my time figuring why other people's AI slop doesn't work.
Copying isn’t theft, and it’s DEFINITELY not theft of labor.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate basis of damages from which to have any say in the matter about AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
I post some software on GitHub. You can use it in your software and tools and AI training set as well, as long as you follow my license. If you don't follow my license (let's say MIT, so you must provide a copy of the file called LICENSE.TXT with my name on it), you may not use it.
Now y'all finally know what it's like to be vegetarian (I'm not one). So many parallels. And they are expected to keep relatively quiet about it and not scream about things like
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even harder to modern industrialized meat consumption than to AI usage.
I've never met a person who claims no vegetarian keeps quiet about it who is able to keep quiet about it themselves, but I've still got like 30 years left on Earth to meet one :)
Out of curiosity, is there anything in particular you don't like about people not wanting to eat meat?
It kind of sounds like maybe you had an unpleasant interaction. Is that your main reason for thinking "they" are obnoxious about not eating meat?
Do you apply the same standards when you, say, buy a phone?! Never going to buy an iPhone because we know how and by whom they are made? Never going to use any social media apps because… well, you see where this is going. You seem to be randomly putting your foot down on the “issue du jour”…
Buying a phone is an indispensable part of life today. There are government services in many countries which are digital-only (and phone-only in particular), and restaurants, hotels, etc. in the service industry which all require you to have a phone, otherwise you can't use their services. And this trend is growing. So if you are the type who would rather live in a cave, or hang yourself from a tree, than accept that modern societies require a modern phone, that's your choice. But others would rather accept this. We are beyond the point where this trend can be reversed. AI, on the other hand, is not yet that integral a part of people's lives, and it's better to protest now, while protest still has an impact.
The terms “yank” and “seppo” were more common in older generations of Australians. If you could go back to the 1940s, I think you’d hear both terms a lot (in certain informal contexts)
One still occasionally hears “yanks”, but it is quite rare. With “seppos”, one more often hears joking about calling Americans that than anyone actually doing so, and on the rare occasions the term is used (as opposed to merely mentioned), it is (in my personal experience) a self-conscious exercise in derogatory jocularity. Related jocular coinages are “Sepponians” and “Seppostanis”.
Of course, it is a big country, and terms which have fallen out of general use may be retained or revived in some pockets; I can only describe my own personal experience.