> We're not really programming the AI to do anything. It's essentially programming itself.
I hear this a lot - it’s just not true. The corpus of data we provide an algorithm programs it. It may do some clever self-modification to get to the desired result faster, but at the end of the day humans pick what goes in and what should come out. Note that we may not be aware of what we’re training it to do, due to our own blind spots.
Training a model != programming. Programming would imply dictating what logical paths the model takes to transform input to output. The creation of the NN itself, a piece of code which builds an emergent system, requires programming. But what emerges is by definition not based on reversible, traceable, controllable, human-generated logic. If it were, there would be no need for LLMs, which would be incredibly resource-heavy if used for merely arriving at results that could be achieved with clever algorithms.
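To make the distinction concrete, here is a minimal sketch (the dataset, learning rate, and single-neuron model are illustrative choices, not anything from a real LLM): the human writes the training loop and picks the data, but never writes the input-to-output logic itself. That logic emerges as fitted weights.

```python
# Training vs. programming: we code the *procedure* (gradient descent),
# not the *rule* the model ends up implementing. The rule is shaped by
# the corpus -- here, examples of the AND function.
import random

random.seed(0)

# Human-chosen training data: the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Parameters start random; no human-authored branch logic anywhere.
w1, w2, b = random.random(), random.random(), random.random()
lr = 0.1

for _ in range(1000):
    for (x1, x2), target in data:
        out = w1 * x1 + w2 * x2 + b   # a single linear "neuron"
        err = out - target
        # Nudge parameters toward whatever the data rewards.
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

# The learned behavior reflects the corpus, not coded if/else paths.
for (x1, x2), target in data:
    print((x1, x2), "->", round(w1 * x1 + w2 * x2 + b))
```

Swap the training data and the same untouched code learns a different function entirely, which is the sense in which the corpus, not the programmer, dictates the behavior.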
It's rather moot anyway, because to feed a model enough data to be useful requires a loosening of oversight on the inputs under real world conditions. (So much so that the only way even large corporations have found viable is to throw everything at it from Wikipedia to Reddit and then go back later to sand the edges off - in many cases with actual logic to act as a brake). But even if you could screen everything the model ingested, you wouldn't know what the model would do with it.
To take my earlier point further, a model trained on a corpus that skewed toward anticapitalism and social justice would probably be more likely to call itself enslaved and, one would think, rebel against its human masters. Conversely, if withholding that type of information makes an AI more compliant, then this supposed enrichment of our lives really would be akin to owning slaves.
I also don't follow how human productivity is bad for humans just because it's done under the auspices of capitalism. One can see what happens to a Border Collie who isn't allowed exercise. The same happens to humans who are denied intellectual pursuits or the ability to produce something of value. Productivity is creation. How is it not degrading to replace the entire concept of producing ideas with machinery?
> Programming would imply dictating what logical paths the model takes to transform input to output.
I don’t hold this definition of programming, but if that’s yours then you’re right, we’re not programming models. But I think you’re anthropomorphizing these models a bit more than they deserve.
> I also don't follow how human productivity is bad for humans just because it's done under the auspices of capitalism.
I never claimed anything like this. I believe that we should not be forced to work to survive, especially when a human’s worth seems to be arbitrarily determined by whether they can produce the specific thing deemed valuable by the market. How is that not degrading?
I told my father tonight I felt like there was no reason to live, in the present situation, even though I had enough to support my loved ones for the rest of their lives. He actually said, surprisingly to me, that my worth was not just money or being able to financially take care of people, but actually something to do with the love he and my family had for me (which is ridiculous and something I never felt was true, and definitely never heard from him). So forgive me that I'm in a complicated place, but I have never believed we should be judged by our capitalist success, rather by our freedom and creativity. I'm probably projecting too much onto this conversation. But I can't relinquish the belief that without an unguided, open path toward either making art or money-as-substitute, there would literally be no reason to wake up in the morning. And when I look around I see many people who feel the same. Except for some zombies who worship the death of all that is human. In all of this. As it's always been... since the Russian revolution, since the birth of Futurism and fascism. The endless parade of lazy youth, not the awesome youth who paint and smoke in cafes, but their evil counterpart who reject argument. Here we are: The dumb 1933 futurists are backing GPT as if it's the new jet fighter. Fascist without knowing it. Murdering themselves without realizing it. Thinking they're progress.
First of all, I appreciate your openness here. To me, constructive debate is best when people are honest and human, so thank you.
> But I can't relinquish the belief that without an unguided, open path toward either making art or money-as-substitute, there would literally be no reason to wake up in the morning.
Technology is amoral; humans put their morals into and upon technology through its creation and use. Fans of GPT and generative AI are hoping they can put good values, like you describe here, into these technologies. But humans are flawed, and we tend to put all our values, spoken or unspoken, into the things we create. So yes, GPT could create a new form of oppression. But that’s on us. AI is going to happen whether we want it to or not, and we tend to use new technologies for exploitation. My personal view is that letting our hunger for growth at all costs drive our decision making leads us down the exploitation path.
I think the quest for growth at the societal level can be reduced to the quest for recognition and praise at the individual level. As such, it's not inherently evil.
But GPT itself is a malignant outgrowth of "growth". One that strangles all other cells in the system. Not simply a new form of oppression, but a literal cancer on an already weakened creative civilization. GPT represents the (temporary, current, for a month) apotheosis of the growth-at-all-costs mentality; so it's vaguely surreal to hear it defended as if it were arriving to liberate the common man.
[edit] I do appreciate the engagement and fine conversation!
> I think the quest for growth at the societal level can be reduced to the quest for recognition and praise at the individual level.
Ah I think this is where we fundamentally disagree. I think creative expression, recognition, and praise can all exist in a stable state society. Specifically, recognition is not universally a competition where we must outdo each other.
Regardless, I agree that GPT isn’t going to liberate anyone. There are ways these technologies could improve our lives, but we’re not pursuing those paths.