No. His critique is one of process, not of output quality.
He is asserting the existence of an "unconscious human spirit", asserting that ChatGPT "is fast-tracking the commodification of the human spirit by mechanising the imagination", and that we should fight against AI as we would fight against genocide: "just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world."
Incidentally, I agree with the importance of struggle and soul-work. I agree with the general valence of the lament, but come away with a very different call to action. In particular: I just don't think I have the right to impose luddism on the rest of the world for my particular niche while benefiting lavishly from the rest of the world's alienation from their much more essential labor.
To me, it's the ravings of an angry, arrogant, and entitled elitist who fears being dethroned from his comfortable luxury and refined status. And a hilariously hyperbolic one at that. Of all the actual evils I would write something like this about, AI Art probably isn't even in the top 1,000,000. Can you imagine looking at the world today and writing that last paragraph? My god.
---
Quotes:
> ChatGPT rejects any notions of creative struggle, that our endeavours animate and nurture our lives giving them depth and meaning. It rejects that there is a collective, essential and unconscious human spirit underpinning our existence, connecting us all through our mutual striving.
> ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation as valueless and unnecessary. That ‘songwriter’ you were talking to, Leon, who is using ChatGPT to write ‘his’ lyrics because it is ‘faster and easier,’ is participating in this erosion of the world’s soul and the spirit of humanity itself and, to put it politely, should fucking desist if he wants to continue calling himself a songwriter.
> This impulse – the creative dance – that is now being so cynically undermined, must be defended at all costs, and just as we would fight any existential evil, we should fight it tooth and nail, for we are fighting for the very soul of the world.
I feel like this is a bit like the whole thing with "do submarines swim?"
I'm not interested in the question of whether an image I make with AI is art or not. I've written decent fiction in my time. Maybe here and there I inhabit the artistic category of "an unpublished genre fiction author." But I can't draw or paint or any of that stuff better than a reasonably talented 11 year old, and I'm 52. If I had artistic pretensions it's not happening.
But once in a while I want to see an image that I can't make and, well, AI boom boom yes here we go.
I don't claim that's art. I don't claim that the AI is an artist any more than I claim that a submarine is swimming. Those categories are irrelevant to my interest, which is simple.
I wish I was artistically talented so I could do stuff, and now I can do some of that stuff without being artistically talented.
I feel good about this. It makes me feel better in the world.
Are we going to take this away from billions of people to protect people who can draw and have some training?
Nick Cave is expressing a personal loss, and I believe that he truly feels that loss. But to me, this letter reads roughly like: "if I were the server or the bouncer instead of the performer or the writer, all of humanity would cease to have meaning". Which is perhaps true, for Nick Cave. But it also betrays something grotesque and profoundly wrong about his view on the relationship between paid labor and the human soul.
It's a wonderful thing to find meaning in one's work, and for the things in which one finds meaning to be well-compensated. But it is no birthright. Contrary to Nick Cave's view, I can absolutely assure you that non-artists in HR departments and nursing stations and factory floors and classrooms often live full happy human inner lives. Those lives are of their own making and do not derive from the artiste class's output.
Manual production of high-quality clothes, tables, and glassware used to be the norm. Generations of people found meaning in these crafts before the industrial revolution changed the economics. People still do these things, only in rare cases as their primary way of making a living. Most art does not sustain developed world middle class existence. Most art is hobby. And that's okay.
The creation of software and AI systems is itself a form of craft-work and soul-work, which many engineers and scientists relate to the same way that Nick Cave relates to music. It is unclear to me why Nick Cave's striving is more important than the striving of engineers and scientists, or why his feeling of what humanity is, is more important than theirs.
Cave was expressing an answer to a question about cutting corners in the process of creating music. ( https://www.theredhandfiles.com/chatgpt-making-things-faster... ) There is certainly value in the work of nurses, bouncers and servers, and my interpretation of Cave's other written works leads me to believe that he is a proponent of finding joy and creative expression, even in tasks which don't have an obvious artistic product. AI lacks insight and lack of insight is what can turn a succulent feast of a life into biweekly deliveries of Soylent.
> AI lacks insight and lack of insight is what can turn a succulent feast of a life into biweekly deliveries of Soylent.
Does a picture of a hummingbird lack insight? Does collage art lack insight? Do remixes lack insight? Does mass-produced formulaic pop music lack insight?
Maybe. Or maybe some artists enjoy those creative processes and some audiences enjoy the output. Maybe oil painters who critique photography, and photographers who critique collages, and musicians who critique mash-ups, and DJs who critique modern production studios, and, yes, artists who critique the use of AI models in creative processes, are all just being pretentious assholes.
(It is possible I am simply misunderstanding Cave. I take most of his writing to be artistic prose. It's possible that these are sincere metaphysics and that Nick Cave does literally believe in some sort of ur-religious "essential and unconscious human spirit underpinning our existence". In which case I think he's got a nutty religion, and I consider the fact that AI is an existential threat to that religion mostly a net good for humanity.)
I might assert, with false nostalgia because I wasn't there, that we had a much better connection with what it meant to be human when we were tilling dirt and making clay pots and weaving cloth for each other, and that now, having been estranged from that physical meaning-making, all we have left is our image-making: the personas we create for each other and these arguments we have online. And now we're automating that away too.
> we had a much better connection with what it meant to be human when we were tilling dirt and making clay pots and weaving cloth for each other
Actually, I agree. I think Nick Cave is right about this. I do think this sort of alienation has a cost.
But that doesn't mean that there is any remotely moral case for undoing the green revolution and allowing billions to starve. And it does not mean that the machines which feed those billions of people who might otherwise starve are somehow the root cause of a decline of humanity. In fact, quite the opposite.
And this is the paradox: our alienation from agricultural work is precisely what enables our very existence.
My main observation is that there is a way out of this paradox. As it turns out, you can go out and grow some food in a garden, or write a song, or paint a picture, even if that work is commodified and there is no paycheck. The commodification and automation of those industries does not prevent one from engaging in them as soul-work.
The teacher who plays in a band in his garage is no different -- from a "soul of humanity" perspective -- than Nick Cave. But Nick Cave's implicit argument demands that he is different, and not from an economic perspective, but from a very soul-of-humanity perspective. It's extraordinarily off-putting to me in that sense.
Of course, engaging in art as hobby instead of for pay does require free time and a share of returns on our societal bargain. On that note: elites like Nick Cave should be spearheading serious conversations about political economics and labor economics, instead of lamenting the loss of their extraordinarily unique status.
> But that doesn't mean that there is any remotely moral case for undoing the green revolution and allowing billions to starve.
A question is: is it possible to advance technology to fulfill the green revolution without changing the value of human creativity due to the creation/advancement of genAI? Or, past a certain point, will the results of discovering improved health and ecological outcomes become inextricably linked with discovering new technologies that cause conflict? What actually drives such a process?
I think more people might become interested in why we end up here talking about new possibilities conflicting with stability again and again, similar to how the negative effects of the invention of smartphones are being talked about now.
> A question is: is it possible to advance technology to fulfill the green revolution without changing the value of human creativity due to the creation/advancement of genAI?
I have to admit I'm not quite sure what you mean, and I do admit full guilt in starting us down the path of "mixed analogies" :). I'll try my best, though.
> Or, past a certain point, will the results of discovering improved health and ecological outcomes become inextricably linked with discovering new technologies that cause conflict? What actually drives such a process?
I do think with respect to life-sustaining things -- medicine, pharma, food, shelter, water, energy -- that a combination of specialization and automation is necessary to increase the collective standard of living, and that labor alienation stems from a combination of specialization and automation.
Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
> Where I struggle is coming up with an affirmative argument that an artist should benefit from automation of medicine or farming, but that an alienated lab tech or food factory worker should not benefit from automated art.
> Another way to look at this is: the less you pay for art-as-entertainment, the more resources you have to buy free time to produce your soul-work (whatever that may mean to you).
Ah, yes. The alienated workers of the world will warm their weary souls at the hearth of derivative algorithmic creativity units. The reduced price and efficient delivery of each drone's creativity units will obviously give them more free time.
Perhaps we can even come up with a pill that'll let the drones feel entertained without any content at all. If the side effects are well-tolerated, they can take it before work.
Many of us never see reality. Air-conditioned home to air-conditioned car to air-conditioned office. Artificial goals and artificial entertainments. All communication is with members of your office and echo-chamber social media organs.
I suppose I don't really disagree with you, but a fair fraction of us would be likely to mourn, in fora like this, the end of the ability to sustain a middle-class existence via the craft of making software.
Considering all the open models that already exist and are yet to be created before all the rulings and appeals are done, that toothpaste ain't going back in the tube.
...I think you missed the point. OAI/MS can sue the author or at least cut off API access. If that happens, the fact that OAI is under fire from NYT doesn't somehow obviate the author's need to cover some massive legal bills for the foreseeable future.
The NYT case could take years. In the meantime OAI could choose to go after ToS violators.
The legal system can accommodate more than one unresolved court case at a time. We don't, like, put a semaphore on related cases or anything like that. (Or, sometimes we do, but guess who you need to hire for many, many billable hours to make that happen in your case?)
So, the legal system can accommodate the NYT case against OAI and an OAI case against the author. The operative question is: can the author's pocketbook also accommodate?
(Or, more to the point, can the author accommodate losing access to GPT-4? What happens when he wants to launch a new feature or pivot to a new product?)
Then they cut off API access and I just make a new account.
Who cares?
I doubt they would sue, because the risk of losing would set a precedent. It's much easier and cheaper to scare people away from doing this by writing mean letters.
Those ToS would also probably be unenforceable in many countries outside the US, beyond terminating an account.
I agree on both points. Was just engaging with the legal aspect because that's what this thread was about. But now we've converged to the actual reason the author should probably care: https://news.ycombinator.com/item?id=39049622
If you never want an exit, then it probably doesn't matter.
You can't just hostile-takeover the NYT. The Sulzberger family, who have run it for generations, have a dual-class share structure and a pretty classic setup to keep control.
Or just... write 100 good prompt-response pairs yourself.
2024 will be the year of synthetic data. 2025 will be the year of "you know you can use your own brain and type out 100 datapoints faster and cheaper than generating and filtering assloads of synthetic data, right?"
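To make that concrete: hand-writing the dataset really is just typing into a file. A minimal sketch, assuming the JSONL chat format OpenAI documents for fine-tuning (the pairs below are placeholders; you'd write ~100 of them yourself):

```python
import json

# A hand-written dataset: each entry is one prompt-response pair.
# (Placeholder content; in practice you'd type out ~100 of these.)
pairs = [
    ("Summarize: The meeting moved to Tuesday.", "Meeting rescheduled to Tuesday."),
    ("Summarize: Q3 revenue grew 12% year over year.", "Q3 revenue up 12% YoY."),
]

# Write them out in the JSONL chat format used for fine-tuning.
# Adapt the schema to whatever trainer you're actually targeting.
with open("train.jsonl", "w") as f:
    for prompt, response in pairs:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": response},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

An afternoon of typing, no generation-and-filtering pipeline required.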
> We were initially skeptical whether we would get to 10,000 results. But with nightly leaderboard gamification, we managed to break 15,000 results within a week. Out of fear of eating into our productivity, we closed the contest.
I've hosted a few of these corporate data labeling events. If sufficiently gamified, with a good enough UX, they can be surprisingly engaging. It helps a lot if you have a large employee base, though. Distributing the work over 5000 employees is vastly easier than over even 50; in practice the gap feels even bigger than the two orders of magnitude would suggest.
Yes and no. For text-type stuff? Yes, you're right. But I think in the vision space synthetic data will remain useful for a lot of things. I'm currently working on building a pipeline for personal projects to go from CAD models of environments to segmented training data. So far it looks almost as useful as real-world data at a fraction of the cost of manual labeling.
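Not the parent's actual pipeline, obviously, but a minimal sketch of the general trick, assuming trimesh and pyrender (the mesh path is hypothetical): render the CAD geometry offscreen and read the segmentation mask straight out of the depth buffer.

```python
import numpy as np
import trimesh
import pyrender

# Load a CAD-derived mesh and center it at the origin (path is hypothetical).
mesh = trimesh.load("parts/bracket.stl")
mesh.apply_translation(-mesh.centroid)

scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(mesh))

# Camera and light on the +z axis, looking back at the part.
pose = np.eye(4)
pose[2, 3] = 2.0 * mesh.scale  # back the camera off proportionally to part size
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=pose)
scene.add(pyrender.DirectionalLight(color=np.ones(3), intensity=3.0), pose=pose)

renderer = pyrender.OffscreenRenderer(viewport_width=640, viewport_height=480)
color, depth = renderer.render(scene)

# With one object in the scene, any pixel with nonzero depth hits the
# object: a pixel-perfect segmentation mask, no human labeling needed.
mask = (depth > 0).astype(np.uint8)
```

With multiple objects you'd render per-object masks (or use the renderer's segmentation mode, if it has one), but the economics are the same: the labels come free with the pixels.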
The academic work is pretty safe as long as it isn't productized. The open models have a prima facie case to stand on. Using output is okay if you aren't directly competing with OpenAI, even according to their ToS.
> (e) use Output (as defined below) to develop any artificial intelligence models that compete with our products and services. However, you can use Output to (i) develop artificial intelligence models primarily intended to categorize, classify, or organize data (e.g., embeddings or classifiers), as long as such models are not distributed or made commercially available to third parties and (ii) fine tune models provided as part of our Services
But then those models are possibly used downstream, e.g. for Mistral's "medium" API model (and by many other startups).
I guess if it's behind an API and no one discloses the training data, OpenAI can't prove anything? Even obvious GPTisms could ostensibly be from internet data.
This is a blatant violation of OpenAI's terms of use for businesses [1].
I have two issues with those terms:
1. I think that eventually US courts will determine one of two things: that OpenAI et al are guilty of massive infringement, or that these sorts of restrictive terms aren't enforceable. The exposure these companies are trying to paper over with terms on output seems unlikely to hold up in the end. But we'll see.
2. Even if the terms are enforceable, the human review step in the tweet seems like it makes OpenAI's threading-the-needle position here even more fucking difficult for any jury or judge to take seriously.
However, enforcing the terms seems real damn hard in the case of small businesses... as long as you're not stupid enough to admit to violating them in a twitter thread, of course.
I think the author is probably safe from legal action for now because I don't think OpenAI is particularly eager to test the enforceability of their terms. And even if they are, doing so in this case is super high risk and super low reward. Still, I wouldn't test it by openly admitting to ToS violation like this. At the very least it seems like a good way to get cut off from OAI APIs.
> 2. Restrictions [...] You will not, and will not permit End Users to: [...] use Output [...] to develop any artificial intelligence models that compete with our products and services.
Of course, you can simply ignore it, just like OpenAI is happy to ignore the terms of services on scraped websites and pirated ebooks and so on.
What are they going to do - claim your model is a derivative work of the training data?
> (e) use Output (as defined below) to develop any artificial intelligence models that compete with our products and services. However, you can use Output to (i) develop artificial intelligence models primarily intended to categorize, classify, or organize data (e.g., embeddings or classifiers), as long as such models are not distributed or made commercially available to third parties and (ii) fine tune models provided as part of our Services;
Depending on what kind of model they trained, they might be breaking these terms.
Right, but the condition is for "models that compete with our products and services". Can you really argue that this niche app competes with OpenAI's products? Couldn't you make an argument that this only applies to products and services that directly compete with OpenAI, i.e. other LLM APIs or a ChatGPT competitor such as Claude or Bard?
The person who created it is using it as a direct replacement for paying OpenAI. They probably won’t consider pursuing this small individual, but if a big enough company did it, they’d probably have a problem with that.
A direct replacement is still different from a “competing product”, which implies something sold to customers. His product (the app) doesn’t compete with OpenAI. I guess a lawyer would need to chime in.
I read this in 2013 and remember enjoying the back-and-forth. Some reflections, a decade of life experience and a tech cycle later:
1. I'm with McKenzie on the coworkers aspect of the dialog. More separation from coworkers is better. In the Good Times (2013-2019, 2021) it seems "right" to trade some comp for familiarity and good work vibes, and almost... inhuman... not to. But in the Bad Times you're reminded that an Excel formula could cost you not just your job but also a big chunk of your personal social network. Diversification is good.
2. I now realize what both sides of this are getting at is basically: "how to progress from Junior/Mid Engineer to something after that". There are many paths. The conclusion of the article is good, in that respect. Also: you can just stay a Mid/Senior Engineer. That's okay.
3. Call yourself whatever you want/need to stay employable. Be a good colleague/person. Work is work.
Or just leave academia. In the US at least, the job is like 80% government contracting and 20% teaching.
Teaching is great, so there's that. But literally every company will let you adjunct, and Professor of Practice usually pays more than 20% of a faculty salary. You can supervise PhD students as interns or by taking a courtesy affiliation (and often even have more impact on those students than their overworked and under-engaged advisors). And university classroom teaching in the US now looks a lot more like 90s/mid-aughts high school teaching.
Government contracting sucks, and the academic variety is not any better. I'd literally rather watch paint dry at a military base than contract for DARPA. NSF isn't actually that much better.
Who the fuck wants to be a combination high school teacher and federal government contractor? Saints or sociopaths, and there are a LOT more of the latter than the former in higher ed.
Honestly, is there a big difference anymore? The vast majority of papers I read are either by industry directly or have industry as a partner (as an author, not just acknowledgements). There are of course exceptions, plenty of them, but it does seem an industry partner is almost necessary these days. I'm not convinced that level of interaction is healthy, for either party.
Only a very small subset of industry cares about academic publishing, and even within that subset it's only a fraction of groups at a fraction of corps that consider publishing a primary or even secondary objective.
The groups that do care about those things can be good gigs, but are generally not the place in the company you want to be anyways, unless you can get in and out (for good) in <10 years. If you can do something that actually impacts the business -- that is actually useful to other humans -- no one gives a shit about h-indices or kaggle scores. And you'll be better compensated anyways.
You're measuring the wrong direction. Don't measure what percentage of industry publishes with academics. Instead, measure what percent of academics __in ML__ publish with industry. This direction matters because one group is much larger than the other. Second, I mean... I am a researcher... and I'm talking about the environment I'm working in. It sounds like you're outside this environment trying to educate me on it. Am I misunderstanding here?
> can do something that actually impacts the business -- that is actually useful to other humans
Do not confuse these two. That's incredibly naive.
> Honestly, is there a big difference anymore? The vast majority of papers I read are either by industry directly or have industry as a partner (as an author, not just acknowledgements).
Read more pure math papers, then you will see the difference. :-)
I thought we were talking ML here. I mean, you're not wrong (I do do this), but context matters. In ML, well... even Max Welling is connected with Microsoft.
There is no contradiction: there exist quite pure math papers whose content is very relevant for the mathematics behind ML algorithms. :-)
I do have the impression that the kind of ML research that is not strongly associated with the recent "machine-learning industrial complex" by now tends to get published under another subject area.
Sure, I agree with you. I just wouldn't refer to that work as pure math. And let's be real, most people are not working on the theoretical side of ML. Realistically, people are anti-theory in the ML space, and it's really weird to me because it's a self-fulfilling prophecy: the complaint is "it's not very good because not a lot of community effort has been put in, so let's not waste our time."
The problem is that AI is weird, and not because of academia. In fact, right now it has been captured by industry, and that is why we've severely slowed down in progress[0]. Most people in the space now are working in industry labs. Frankly, you can do more, you get paid A LOT more (2-3x), and you have less bureaucratic bullshit. But I think you're keenly aware of this industry capture, as you're mentioning aspects of it.
I don't want there to be any confusion: I think it is good that industry and academia work together. There are lots of benefits. But we also need to recognize that these two typically have very different goals, work at different TRLs, and have very different expectations about when the work will be seen as impactful. Traditionally, academia has generally been the dominant player in the high-risk high-reward/low-level research space (yes, much more goes on too, but of the people that do this type of research, you think academia), while industry research typically focuses on higher TRLs because industry is focused on selling things in the near future. There's just a danger when you work too closely with industry: you can't have any wizards if you don't have any noobs.
But I'm not sure it is just ML that's been going this way. There's a lot of sentiment on this website where people dismiss research papers (outside ML) that show up here because they're not viable products. I mean... yeah... they're research. We can agree that the value is oversold, but often that's by the publisher (read: university) and not the paper (not sure if I can say the same for ML). It's a kind of environmental problem: if everything has to be a product, you can't be honest about what you did, and if discussing the limits and what still needs improving to actually get a product down the line gets you rejected, well... you just don't talk about that.
This is all "RL hacking," better known as Goodhart's Law. I've been saying we're living in Goodhart's Hell because it seems, especially in the last 5-10 years, we've recognized that a lot of metric hacking is going on and decided that the best course of action is not to resolve the issues but to lean into them. We've seen the house of cards this has created. Crypto is a good example. The shame is that we might kill AI when there is a lot of real value there. But if you're a chocolate factory and promise people that eating your chocolate will give them superpowers, it doesn't matter how life-changingly delicious that chocolate is; people will be upset and feel cheated. Problem is, the whole chocolate industry is doing this right now, and we're not Willy fucking Wonka.
[0] More progress looks like it is being made, and there is a lot of progress that should have been made but wasn't, but these types of nuances are a bit harder to discuss without intimate knowledge of the field. I'll say that diffusion should have happened much sooner, but industry capture had everyone looking at GANs. Anything else got extra scrutiny and became easy to reject for not having state-of-the-art results (are we doing research, or are we building products?).
Only a relatively tiny sliver of PhDs doing top-tier ML research are in groups at corps that care about publishing in academic conferences.