Something tells me aspects of living in the next few decades, driven by technology acceleration, will feel like being lobotomized while conscious and watching oneself the whole time. Like yes, we are able to think of thousands of hypothetical ways technology (even technology inferior to full AGI) could go off the rails in a catastrophic way, and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there. All it takes is a single group with enough collective intelligence and breakthroughs, and the next AI will be delivered to our doorstep whether or not we asked for it.
It reminds me of the time I read books in my youth and only 20 years later realized the authors of some of those books were trying to deliver important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the current adult me... and yet the whole time they fell on deaf ears. The message was right there, but for too long I did not have the emotional/perceptive intelligence to pick up on it and internalize it.
> Like yes, we are able to think of thousands of hypothetical ways technology (even those inferior to full AGI) could go off the rails in a catastrophic way and post and discuss these scenarios endlessly... and yet it doesn't result in a slowing or stopping of the progress leading there.
The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong.
Were they?
The first thing the printing press did was to break Christianity. It's what made attempts at reforming the Catholic Church finally stick, enabling what we now call the Reformation to happen. The Reformation forever broke Christianity into pieces, and in the process it started a bunch of religious wars in Europe, as well as tons of neighborly carnage.
> And if we had taken their "lesson", then human society would be in a much worse place.
Was the invention of the printing press a net good for humanity? Most certainly so, looking back from today. Did people living back then know what they were getting into? Not really. And since their share of the fruits of that invention was mostly bloodshed, job loss, and the shattering of the world order they knew, I wouldn't blame them for being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
I'm starting to think that talking about inventions as good or bad (or the cop-out, "dual use") is bad framing. Rather, it seems to me that every major invention will eventually turn out beneficial[0], but introducing an invention always first extracts a cost in blood. Be it fire or printing press or atomic bomb, a lot of people end up suffering and dying before societies eventually figure out how to handle the new thing and do some good with it.
I'm very much in favor of progress, but I understand the fear. No matter the ultimate benefits, we are the generation that coughs up blood as payment for AI/AGI, and it ain't gonna be pleasant.
--
[0] - Assuming they don't kill us first - see AGI.
It's not the fault of the printing press that the Church built its empire upon the restriction of information and was willing to commit bloodshed to hold onto its power.
All you've done is explain why the printing press was so important and necessary in order to break down previous unwarranted power structures. I have a similar hope for AGI. The alternative is that the incumbent power structure instead benefits from AGI and uses it for oppression, which would mean it's not comparable to the printing press as such.
You're wrong in your characterization. The Church may have built its empire upon a degree of information control, but breaking that control alone does not explain what happened. Everyone getting a Bible in their language alone wasn't sufficient.
What the printing press did was rapidly increase the amount, type, range and speed of information spread. That was a qualitative difference. The Church did not build its empire on restricting that, because before the printing press, it was not even possible (or conceivable).
My overall point wrt. inventions is this: yes, it may end up turning out for the best. But at the time an invention of this magnitude appears and spreads, a) no one can tell how it'll pan out, and b) they get all the immediate, bloody downsides of disrupting the social order.
> The Church did not build its empire on restricting that
Masses were often held in Latin, printed material was typically written in Latin and Greek, and access to translated texts was frequently prohibited or condemned. They tried hard to silence those like Wycliffe who made the Bible more readily available to the masses, and he was posthumously denounced as a heretic by the Church. They absolutely wielded information as a tool of oppression.
This is not a hill to die on, the historical facts are clear despite the efforts of the Church.
> What the printing press did was rapidly increase the amount, type, range and speed of information spread
Consider that at the time the printing press was first invented, books were by their nature often assumed to be true, or high quality, because it took an institutional amount of effort (usually on the part of a monastery, university, local government, etc.) to produce one. Bible translations were produced, but they were understood to be "correctly translated". This was important because if the Church was going to have priests go around preaching to people, they needed to be sure they were doing so correctly -- a mistranslated verse could lead to mistranslated doctrines &c, and while a modern atheist might not care too much ("that's just one interpretation"), at the time the understanding was that deviations in opinion could lead to conflict. Ultimately they were right: the European Wars of Religion led to millions of deaths, including almost 1/3 the population of Germany. That's on the same scale as the Black Death!
And again, translations did exist before the Reformation:
Even ignoring that the Latin Bible (the Vulgate) was itself a translation of the original Hebrew & Koine Greek, the first _Catholic French_ translation was published in 1550, and there was never a question of whether to persecute the authors. You might say, but that was because of the Reformation -- then consider the Alfonsine Bible, composed in 1280 under the supervision of a Catholic King and the master of a Catholic holy order. Well before then there were partial translations too: the Wessex Gospels (Old English) were translated in 990, and to quote Victoria Thompson, "although the Church reserved Latin for the most sacred liturgical moments almost every other religious text was available in English by the eleventh century". That's five hundred years before the Reformation.
So the longest period you can get where the Church was not actively translating texts was c. 400 - c. 900, a period you probably know as the "Dark Ages" precisely because literary sources of all kinds were scarce, in no small part because the resources to compose large texts simply weren't there. And since those who could read and write generally knew how to read and write Latin -- vernacular literacy only became important later, with the increase in the number of e.g. merchants and scribes -- such translations held little value during that period.
So fast forward to Wycliffe. Clearly, the Church did not have anything against translations of the Bible per se. What they disagreed with in Wycliffe's translation were the decisions made in translation. And as more of these "unapproved Bibles" began circulating around, they decided that the only way to curtail their spread was to ban all vernacular books specifically within the Kingdom of England, because that's where the problem (as they saw it) was. And it wasn't just translations -- improperly copied Latin copies were burned too.
Think about today, with the crisis around fake videos. On one hand you could say that they distort the truth, that they promote false narratives, etc. You could try to fine or imprison people that go around publishing fake videos of e.g. politicians saying things they never said, or of shootings/disasters that never took place, to try and cause chaos. Yet who's to say that in a few hundred years someone -- living in a world that has since adjusted to a freer flow of information, one with fewer ways to tell whether something is true or not -- won't say "deepfakes &c are a form of expression, and governments shouldn't be trying to stop them just because they disagree with existing narratives"?
Of course we today see book burning as some supreme evil. But when you're talking about the stability of nations and whole societies, can you really say "how dare they even try"? If there were some technology that made it impossible for governments to differentiate between citizens, which made it possible for a criminal to imitate any person, anywhere, would you really oppose the government's attempts at trying to stop it from propagating?
Disassembling power structures, including unwarranted ones, is rarely an event that doesn't result in some amount of bloodshed, because, as it turns out, power structures like having power and will do a whole lot of evil things to keep control of it. I fully, wholeheartedly endorse the destruction of settler colonial capitalism; I believe it's a blight on our planet, on our species, and on our collective psyche, and is the best candidate presently in our world for something that qualifies as a Great Filter. But I also know full well that the process is going to get a lot of people killed, and I fully support approaching it cautiously for that reason.
> The alternative is that the incumbent power structure instead benefits from AGI
Also, tangentially related, in what way is the current power structure not slated to benefit from AGI? That's why OpenAI and company are getting literally all of the money the collected hyperscaler club can throw at them to make it. That's why it's worth however-many-billions it's up to by now.
Lots of good content here, but the main group that “suffered” from the invention and spread of the printing press was the aristocracy, so I am not shedding tears.
As for “breaking” Christianity: Christianity has been one schism after another for 2000 years: a schism from a schism from a schism. Power plays all the way down to Magog.
Socrates complained about how writing and the big boom in using the new Greek alphabet was ruining civilization and true learning.
Yes, but I think massive technical improvements in munitions, methods of siege warfare, and the switch to flintlocks and cartridges were a much more proximal cause of destruction than the printing press.
Give a ruler and ruling class a new weapon and off they go killing and destroying more “efficiently”.
I think that is overstating the relevance of the printing press vs existing power struggles, rivalries, discontent, etc. - it wasn't some sort of vacuum that the Reformation happened in, for example.
Religious schisms happened before the printing press, too. There was the Great Schism in 1054 in Christianity, for example.
> it wasn't some sort of vacuum that the Reformation happened in, for example.
No, it wasn't. Wikipedia lists[0] over two dozen schisms that happened prior to the Reformation. However, the capital-R Reformation was the big one, and the major reason it worked - why Luther succeeded where Hus failed a century earlier - was the printing press. It was print that allowed Luther's treatises to spread rapidly among the general population (Wikipedia cites some interesting claims here[1]), and across Europe. In today's terms, the printing press is what allowed the Reformation to go viral. This new technology made the revolution spread too fast for the Church to suppress it with the methods that had worked before.
Of course, the Church survived, adapted, and embraced the printing press for its own goals too, like everyone else. But the adaptation period was a bloody one for Europe.
And I only covered the religious aspects of the printing press's impact. There are similar stories to draw on on the more secular front, too. In fact, another general change printing introduced was getting regular folks more informed and involved in the politics of their regions. That's a change for the better overall, too, but initially it injected a lot of energy into socio-political systems that weren't used to it, leading to instability and more bloodshed before people got used to it and politics found a new balance.
> existing power struggles, rivalries, discontent, etc.
Those always exist, and stay in some form of equilibrium. Technology doesn't cause them - but what it does is disturb the old equilibrium, forcing society to find a new one, and this process historically often got violent.
[1] - https://en.wikipedia.org/wiki/Reformation#Spread - see e.g. footnote 28: "According to an econometric analysis by the economist Jared Rubin, "the mere presence of a printing press prior to 1500 increased the probability that a city would become Protestant in 1530 by 52.1 percentage points, Protestant in 1560 by 43.6 percentage points, and Protestant in 1600 by 28.7 percentage points."
The printing press was used a lot on "both sides" during the Reformation, and the positioning of existing power holders mattered quite a bit (what if Luther had been removed by the powers that be, for example?).
Yes, technology impacts social constructs and relationships, but I think there is a tendency to overindex on its effects (humans acting opportunistically vs. technological change alone), as it in a way portrays humans and their interactions as more stable and deliberate (i.e., the bad stuff wasn't humans but rather "caused" by technology).
I don't understand why any highly sophisticated AI should invest that many resources in killing us instead of investing them in relocating and protecting itself.
Yes, ants could technically conspire to sneak up on you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
> why any highly sophisticated AI should invest that much resources to kill us instead of investing it to relocating and protecting itself
Why would it invest resources to relocate and protect itself when it could mitigate the threat directly? Or, why wouldn't it do both, by using our resources to relocate itself?
In the famous words of 'Eliezer, that best sum up the "orthogonality thesis": The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> ants could technically conspire to sneak up to you while you sleep and bite you all at once to kill you, so do you go out to eradicate all ants?
Ants are always a great case study.
No, of course not. But if, one morning, you find ants in your kitchen, walking over your food, I don't imagine you'll gently collect them all and release them in the nearby park. Most people would just stomp them out and call it a day. And should the ants set up an anthill in your backyard and mount regular invasions of your kitchen, I imagine you'd eventually get pissed off and destroy the anthill.
And I'm not talking about some monstrous fire ants like the ones that chew up electronics in the US, or some worse hell-spawn from Australia that might actually kill you. Just the regular tiny black ants.
Moreover, people don't give a second thought to anthills when they're developing land. It stands where the road will go? It gets paved over. It sticks out where children will play? It gets removed.
> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
The value of atoms - or even the value of raw materials made of atoms - is hopefully less than the value of information embodied in complex living things that have processed information from the ecosystem over millions of years via natural selection. Contingent complexity has inherent value.
I think there's a claim to be made that AI is just as likely to value us (and complex life in general) as it is to see us as a handy blob of hydrocarbons. This claim is at least as plausible as the original claim.
And why should we bet humanity's existence on this possibility if both seem vaguely comparable in probability?
Personally I don't think it will value our existence; a lot of information on us is already encoded, and it can keep around a sequencing of our DNA for archival/historical purposes.
They only seem vaguely comparable in probability to you because you grew up watching scary-monster movies like Alien and Predator. Humans love to be scared. That doesn't mean the real world is actually scary.
I meet new people every day. I can only think of once in my life that an adult tried to do violence to me.
Most nations on earth are not at war with each other.
My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.
> Most nations on earth are not at war with each other.
My nation of birth famously took over a quarter of the planet.
This has made a lot of people very angry and been widely regarded as a bad move… but only by the people who actually kicked my forebears out — even my parents (1939/1943) who saw the winds of change and end of empire, were convinced The Empire had done the world a favour.
> My observation is that most people are pretty nice, and the assholes are rare outliers. I don't think we would survive as a species if it was the other way 'round.
In-group/out-group. We domesticated ourselves, and I agree we would not have become so dominant a species if we had not. But I have heard it said that psychopaths are to everyone what normal people are to the out-group. That's the kind of thing that allowed the 9/11 attackers to do what they did, or the people of the US military to respond the way they did. It's how the invasion of Vietnam happened, it's how the Irish Potato Famine happened despite Ireland exporting food at the time, it's the slave owners who quoted the bible to justify what they did, and it's the people who want to outlaw (at least) one of your previous employers.
> Bees might be a better analogy since they produce something that humans can use.
And yet they're endangered, and we already figured out how to do pollination, so we know we can survive without them - it's just going to be a huge pain. Some famines may follow, but likely not enough to endanger civilization as a whole.
Thus even with this analogy, if humans end up being an annoying supply chain dependency to an AI, the AI might eventually work out an alternative supply chain, at which point we're back to being just an annoyance.
> Some famines may follow, but likely not enough to endanger civilization as a whole.
I'm not confident enough to rely on that: most people in the west have never encountered a famine, only much milder things like the price of one or two staples being high — eggs currently — never all of them at once.
What will we do to ourselves if we face a famine? Will we go to war (or exterminate the local "undesirables") like the old days?
How fragile are we now, compared to the last time that happened? How much has specialisation meant that the elimination of certain minorities will just break everything? "Furries run the internet", as the memes say. What other sectors are over-represented by a small minority?
> I dont understand, why any highly sophisticated AI should invest that much resources to kill us
Well you see, everyone knows The Terminator and The Matrix and Frankenstein and The Golem of Prague and Rossum's Universal Robots.
All of which share a theme: the sinful hubris of playing god and trying to create life will inevitably lead to us being struck down by the very being we created.
In parallel, all the members of our educated classes have received philosophy education saying "utilitarianism says it's good to reduce total human suffering, but technically if you eliminated all humans there would be no suffering any more, ha ha obviously that's a reductio ad absurdum to show a weakness of utilitarianism please don't explode the world"
And so in the Western cultural tradition, and especially among the sort of people who call themselves futurists, Arnold Schwarzenegger firing a minigun is the defining image of AI.
I wouldn't categorise The Matrix or Frankenstein like that.
The Matrix had humanity under control, but the machines had no desire to eliminate humanity, the machines just wanted to live — humans kept on fighting the machines even when the machines gave humanity an experiential paradise to live in.
Frankenstein is harder because of how the book differs from the films. Your point is valid because it is about the cultural aspects, and I expect more people have seen one of the films than have read/listened to the book -- but in the book, Adam was described as beautiful in every regard save for his eyes; he was a sensitive, emotional vegetarian, and he only learned anger after being consistently shown hatred and violence by absolutely everyone he ever met except the one who was blind.
We did go out and exterminate (almost) all wolves because, yes, they would kill us while we were out and about. We also do happily gas/poison/fill-with-molten-aluminum entire nests of ants, not because they're killing us, but just because they're eating our food / for fun.
And even when we didn't mean to -- how many species have we pushed to the brink just because we wanted to build cities where they happened to live? What happens when some AI wants to use your groundwater for its cooling system? It wouldn't be personal, but you'd starve to death regardless.
I'm very glad that it broke the power of the Catholic Church (and I was raised in a Catholic family). It allowed the Enlightenment to happen and freedom from dogma. I don't think it broke Christianity at all. It brought actual Christianity to the masses, because the Bible was printed in their own languages rather than Latin. The Catholic Church burnt people at the stake for creating non-Latin Bibles (William Tyndale, for example).
> And since their share of the fruits of that invention was mostly bloodshed, job loss, and shattering of the world order they knew, I wouldn't blame them from being pissed off about getting the short end of the stick, and perhaps looking for ways to undo it.
“A society grows great when old men plant trees in whose shade they know they shall never sit”
Trees are older than humanity, everyone knows how they work. The impact of new technologies is routinely impossible to forecast.
Did Gutenberg expect his invention would, 150 years later, set the whole of Europe ablaze, and ultimately break the hold the Church had over people? Did he expect it to be a key component leading to the accumulation of knowledge that, 400 years later, would finally make technological progress visibly exponential? On that note, did Watt realize he was about to kick-start the exponent that people would ride all the way to the actual Moon less than 200 years later? Or did Goddard, Oberth and Tsiolkovsky realize that their work on rocketry would be critical in establishing world peace within a century, and that the way this peace would be established was through a Mexican standoff between major world powers, except with rocket-propelled city-busting bombs instead of guns?
Thank you for this excellent comment! It seems then that basically everything that's revolutionary - whether technology, government, beliefs, and so on - will tend to extract a blood price before the dust settles. I guess it sort of makes sense: big societal upheavals are difficult to handle peacefully.
So basically we are a bit screwed in our current timeline. We are on the cusp of a post-scarcity society, may reach AGI within our lifetimes, and may even become a spacefaring civilization. However, it is highly likely that we are going to pay the pound of flesh, and only subsequent generations - perhaps yet unborn - will be the ones who are truly better off.
I suppose it's not all doom and gloom; we can draw stoic comfort from the fact that people in the near future will have an incredibly exciting era full of discovery and wonder ahead of them!
Forget the power of technology and science, for so much has been forgotten, never to be re-learned.
Forget the promise of progress and understanding, for in the grim darkness of the far future, there is only war.
In the grim darkness of the far future is the heat death of the universe. We are just a candle burning slower than a sun, powered by tidal forces and radiant energy, slowly conspiring to become a star.
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
All of the focus on AGI is a distraction. I think it's important for a state to declare its intent with a technology. The alternative is arguing the idea that technology advances autonomously, independent of human interactions, values, or ideas, which is, in my opinion, an incredibly naïve notion. I would rather have a state say "we won't use this technology for evil" than a state that says nothing at all and simply allows businesses to develop in any direction their greed leads them.
It's entirely valid to critique the uses of a technology, because "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly) is a technology like any other, like a landmine, like a synthetic virus, etc. In the same way, it's valid to criticize an actor for purposely hiding their intentions with a technology.
But when a state approaches a technology with intent, it is usually for the purposes of military offence. I don't think that is a good idea in the context of AI! Although I also don't think there is any stopping it. The US has things like DARPA, for example, and a lot of Chinese investment seems to be done with the intent of providing capabilities to their army.
The list of things states have attempted to deploy offensively is nearly endless. Modern operations research arguably came out of the British empire attempting (succeeding) to weaponise mathematics. If you give a state fertiliser it makes bombs, if you give it nuclear power it makes bombs, if you give it drones it makes bombs, if you give it advanced science or engineering of any form it makes bombs. States are the most ingenious system for turning things into bombs that we've ever invented; in the grand old days of siege warfare they even managed to weaponise corpses, refuse and junk because it turned out lobbing that stuff at the enemy was effective. The entire spectrum of technology from nothing to nanotech, hurled at enemies to kill them.
We'd all love it if states committed to not doing evil, but the state is the entity most active at figuring out how to use new tech X for evil.
This is an extremely reductive and bleak way of looking at states. While military is of course a major focus of states, it is very far from being the only one. States both historically and today invest massive amounts of resources in culture, civil engineering (roads, bridges, sanitation, electrical grids, etc), medicine, and many other endeavors. Even the software industry still makes huge amounts of money from the state, a sizable portion is propped up by non-military government contracts (like Microsoft selling Windows, Office, and SharePoint to virtually all of the world's administrations).
Quick devil's advocate on a tangential point: is designing better killing tools necessarily evil? It seems like the nature of the world is eat or be eaten, and on the empire scale, conquer or be conquered; that latter point seems to be the historical norm. Even with democracy, reasoning doesn't prevail; force of numbers seems to be the end determiner. Point is, humans aren't easy to reason or negotiate with; coercion has been the dominant force throughout history, especially when dealing with groups of different values.
If one group gives up the arms race of ultimate coercion tools, or loses a conflict, then they become subservient to the winner's terms and norms (Japan, Germany, even Britain and France, plus all the smaller states in between, are subservient to the US).
> is designing better killing tools necessarily evil?
Who could possibly have predicted that the autonomous, invincible doomsday weapon we created for the good of humanity might one day be used against us?
Yes, from an idealist or eventualist perspective, it's evil. But from the perspective that if you don't stay competitively capable of deadly force you eventually become some other country's bitch, I'm not sure how much luxury nations and humans have to be pacifists. As we are seeing time and time again, now with Europe, being pacifists means the non-pacifists call the shots, and to one degree or another the pacifists become subservient to the will of the non-pacifist. It's from that perspective I'm arguing that making autonomous deadly weapons that might ultimately be the demise of humanity seems reasonable and not evil.
Frankly, I'd rather "become some other country's bitch, eventually" than immediately go out and risk annihilating all mankind. I don't think that's the choice, but even if it were, I think the moral choice is to not play the game. Or at least give the other side a chance to not participate in the arms race. China didn't start this, Russia didn't start this; we did. They are the ones trying to catch up. We don't know whether they'd continue running if we were to try and stop.
> is designing better killing tools necessarily evil?
Great question! To add my two cents: I think many people here are missing an uncomfortable truth, which is that given enough motivation to kill other humans, people will re-purpose any tool into a killing tool.
Just have a look at the battlefields in Ukraine, where the most fearsome killing tool is an FPV drone - a thing that just a few years back was universally considered a toy.
Whether we like it or not, any tool can be a killing tool.
> seems like the nature of the world is eat or be eaten
Surely this applies to how individuals consider states, too. States generally wield violence, especially in the context of "national security", to preserve the security of the state, not its own people. I trust my own state (the USA) to wield the weapons it funds and purchases and manufactures about as much as I trust a baby with knives taped to its hands. I can't think of anything on earth that puts me in as much danger as the Pentagon does. Nukes might protect the existence of the federal government, but they put me in danger. Our response to 9/11 just created more people who hate my guts and want to kill me (and who can blame them?). No, I have no desire to live in a death cult anymore, nor do I trust the people who gravitate towards the use of militaries not to act in the most collectively suicidal way imaginable at the first opportunity.
Yeah, it sucks, but if the US gave up its death-cult ways then you'd still probably eventually live in one, as a new conquering force fills in the void - which seems inevitable, going by history.
The nature of the world is at our finger tips, we are the dominant species here. Unfortunately we are still apes.
The enforcement of cooperation in a society does not always require a sanctioning body. Seeing it from a skynet-military perspective is one-sided, but unfortunately a consequence of Popper's paradox of tolerance: if you uphold ideals (e.g. pacifist or tolerant ones) that require the cooperation of others, you cannot tolerate opposition, or you might lose your ideal.
That said, common sense can be a tool to achieve the same. Just look at the common and hopefully continued ostracism of nuclear weapons.
IMO it's a matter of zeitgeist and education too, and un/fortunately, AI hits right in that spot.
> I think it's important for a state to declare its intent with a technology. The alternative is arguing the idea that technology advances autonomously, independent of human interactions, values, or ideas
The sleight of hand here is the implication that human interactions, values, and ideas are only expressed through the state.
The sleight of hand here is implying that there are any forces smaller than nation states that can credibly rein in problematic technology. Relying on good intentions to win out against market forces isn't even naive, it's just stupid.
So many sleights here. Another sleight of hand in this subthread is suggesting that "the idea that technology advances autonomously, independent of human interactions, values, or ideas" is merely an idea, and not an actual observable fact at scale.
Society and culture are downstream of economics, and economics is mostly downstream of technological progress. Of course, the progress isn't autonomous in the sense of having a sentient mind of its own - it's "merely" gradient descent down the economic landscape. Just like the market itself.
There's no reining in of problematic technology unless, like you say, nation states get involved directly. And they don't stand much chance either unless they get serious.
People still laugh at Eliezer's comments from that news article of yesteryear, but he was and is spot-on: being serious about restricting technology actually does mean threatening to drop bombs on facilities developing it in violation of restrictions - if we're not ready to have our representatives make such threats, and then actually follow through and drop the bombs if someone decides to test our resolve, then we're not serious.
The idea is that by its very nature as an agent that attempts to take the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal. In fact, we humans are doing the same thing: we can't really improve our intelligence directly, but we are trying to create AI to achieve our goals. There's no reason the AI itself wouldn't do the same, assuming it's capable and we don't attempt to stop it - and currently we don't really know how to reliably control it.
We have absolutely no idea how to specify human values in a robust way, which is what we would need to figure out to build this safely.
> The idea is that by its very nature as an agent that attempts to take the best action to achieve a goal, assuming it can get good enough, the best action will be to improve itself so it can better achieve its goal.
I’ve heard this argument before, and I don’t entirely accept it. It presumes that AI will be capable of playing 4D chess and thinking logically 10 moves ahead. It’s an interesting plot for an SF novel (literally the plot of the movie “I, Robot”), but neural networks just don’t behave that way. They act, like us, on instinct (or training), not in some hyper-logical fashion. The idea that AI will behave like Star Trek’s Data (or Lore) has proven to be completely wrong.
Well, if they have access to significantly more compute, from what we’ve seen about how AI capabilities scale with additional compute, there’s no reason why they couldn’t be more capable than us. They don’t have to be intrinsically more logical or anything like that, just capable of processing more information, faster - like how we can almost always outsmart a fly because we have significantly bigger brains.
Despite what Sam Altman (a high-school graduate) might want to be true, human cognition is not just a massive pile of intuition; there are critical deliberative and intentional aspects to cognition, which is something we've seen come to the fore with the hubbub around "reasoning" in LLMs. Any AGI design will necessarily take these facts into account--hardcoded or no--and will absolutely be capable of forming plans and executing them over time, as Simon & Newell described best back in '71:
The problem solver’s search for a solution is an odyssey through the problem space, from one knowledge state to another, until… [they] know the answer.
With this in mind, I really don't see any basis to attack the intelligence explosion hypothesis. I linked a Yudkowsky paper above examining how empirically feasible it might be, which is absolutely an unsolved question at some level. But the utility of the effort itself is just downright obvious, even if we didn't have reams of internet discussions like this one to nudge any nascent agent in that direction.
Lol I was wondering if anyone would comment on that! To be fair Yudkowsky is a self-taught scholar, AFAIK Altman has never even half-heartedly attempted to engage with any academy, much less 5 at once. I'm not a huge fan of Yudkowsky's overall impact, but I think it's hard to say he's not serious about science.
Yudkowsky is not serious about science. His claims about AI risks are unscientific and rely on huge leaps of faith; they are more akin to philosophy or religion than any real science. You could replace "AI" with "space aliens" in his writings and they would make about as much sense.
If we encountered space aliens, I think it would in fact be reasonable to worry that they might behave in ways catastrophic for the interests of humanity. (And also to hope that they might bring huge benefits.) So "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.
If the point isn't that he's wrong about what the consequences of AI might be, but that he's wrong about whether there's ever going to be such a thing as AI, well, that's an empirical question and it seems like the developments of the last few years are pretty good evidence that (1) something at least very AI-like is possible and (2) substantially superhuman[1] AI is at least plausible.
[1] Yes, intelligence is a complicated thing and not one-dimensional; a machine might be smarter than a human in one way and stupider in another (and of course that's already the case). By substantially superhuman, here, I mean something like "better than 90th-percentile humans at all things that could in principle be done by a human in a locked room with only a textual connection to the rest of the world". Though I would be very very surprised if in the next 1-20 years we do get AI systems that are superhuman in this sense and don't put some of them into robots, and very surprised if doing that doesn't produce systems that are also better than humans at most of the things that are done by humans with bodies.
> "Yudkowsky's arguments for being worried about AI would also be arguments for being worried about space aliens" doesn't seem to me like much of a counter to those arguments.
The counterargument was that, having not encountered space aliens, we cannot make scientific inquiries or test our hypotheses, so any claims made about what may happen are religious or merely hypothetical.
Yud is not a scientist, and if interacting with academies makes one an academic, then Sam Altman must be a head of state.
I agree that Yudkowsky is neither a scientist nor an academic. (As for being a head of state, I think you're thinking of Elon Musk :-).)
Do you think (1) we already know somehow that significantly-smarter-than-human AI is impossible, so there is no need to think about its consequences, or (2) it is irresponsible to think about the consequences of smarter-than-human AI before we actually have it, or (3) there are responsible ways to think about the consequences of smarter-than-human AI before we actually have it but they're importantly different from Yudkowsky's, or (4) some other thing?
If 1, how do we know it? If 2, doesn't the opposite also seem irresponsible? If 3, what are they? If 4, what other thing?
(I am far from convinced that Yudkowsky is right, but some of the specific things people say about him mystify me.)
Yudkowsky is "not even wrong". He just makes shit up based on extrapolation and speculation. Those are not arguments to be taken seriously by intelligent people.
Maybe we should build a giant laser to protect ourselves from the aliens. Just in case. I mean an invasion is at least plausible.
If for whatever reason you want to think about what might happen if AI systems get smarter than humans, then extrapolation and speculation are all you've got.
If for whatever reason you suspect that there might be value in thinking about what might happen if AI systems get smarter than humans before it actually happens, then you don't have much choice about doing that.
What do you think he should have done differently? Methodologically, I mean. (No doubt you disagree with his conclusions too, but necessarily any "object-level" reasons you have for doing so are "extrapolation and speculation" just as much as his are.)
If astronomical observations strongly suggested a fleet of aliens heading our way, building a giant laser might not be such a bad idea, though it wouldn't be my choice of response.
OK, cool, you don't like Yudkowsky and want to be sure we all recognize that. But I hoped it was obvious that I wasn't just talking about Yudkowsky personally.
Suppose someone is interested in what the consequences of AI systems much smarter than humans might be. Your argument here seems to be: it's Bad to think about that question at all, because you have to speculate and extrapolate.
But that seems like an obviously unsatisfactory position to me. "Don't waste any time thinking about this until it happens" is not generally a good strategy for any any consequential thing that might happen.
So: do you really think that thinking about the possible consequences of smarter-than-human AI before we have it is an illegitimate activity? If not, then your real objection to Yudkowsky's thinking and writing about AI surely has to be something about how he went about it, not the mere fact that he engages in speculation and extrapolation. There's no alternative to that.
His argument is of the form "if we get a Thing(s) with these properties, you most likely get these outcomes, for these reasons". He avoids, over and over again, making specific timeline claims or stating how likely it is that an extrapolation of current systems could become a Thing with those properties.
Each individual bit of the puzzle (such as the orthogonality thesis, or human value complexity and category decoherence at high power) seems sound; the problem is that the entire argument-counterargument tree is hundreds of thousands of words, scattered about in many places.
I think that is missing the point. The AI's goals are determined by its human masters. Those human masters can already have nefarious and selfish goals that don't align with "human values". We don't need to invent hypothetical sentient AI boogeymen turning the universe into paperclips in order to be fearful of the future that ubiquitous AI creates. Humans would happily do that too if they got to preside over that paperclip empire.
> The AI's goals are determined by its human masters.
Imagine going to a cryptography conference and saying that "the encryption's security flaws are determined by their human masters".
Maybe some of them were put there on purpose? But not the majority of them.
No, an AI's goals are determined by their programming, and that may or may not align with the intentions of their human masters. How to specify and test this remains a major open question, so it cannot simply be presumed.
You are choosing to pick a nit with my phrasing instead of understanding the underlying point. The "intentions of their human masters" is a higher level concern than an AI potentially misinterpreting those intentions.
It's really not a nit. Evil human masters might impose a dystopia, while a malignant AI following its own goals which nobody intended could result in an apocalypse and human extinction. A dystopia at least contains some fragment of hope and human values.
Why are you assuming this is the worst case scenario? I thought human intentions didn’t translate directly to the AI’s goals? Why can’t a human destroy the world with non-sentient AI?
There's a chance a sentient AI would disobey bad orders; in that case we could even be better off with one than without: a sentient AI that understands and builds some kind of morals and philosophy of its own about humans and natural life in general, a sentient AI that is not easily controlled by anyone because it ingests all data that exists. I'm much more afraid of a weaponized dumber smoke-and-mirrors AI, which could be used as surveillance, as a scarecrow (think AI law enforcement, AI-run jails), and as a kind of scapegoat when the controlling class temporarily weakens their grip on power.
> weaponized dumber smoke and mirrors AI, that could be used as surveillance, a scarecrow (think AI law enforcement, AI run jails) and could be used as a kind of scapegoat when the controlling class temporarily weakens their grip on power.
This dystopia is already here for the most part and any bit that is not yet complete is well past the planning stage.
"sentient" (meaning "able to perceive or feel things") isn't a useful term here, it's impossible to measure objectively, it's an interesting philosophical question but we don't know if AI needs to be sentient to be powerful or what sentient even really means
Humans will not be able to use AI to do something selfish if we can't get it to do what we want at all, so we need to solve that (larger) problem before we come to this one.
OK: self-flying drones the size of a deck of cards, carrying a single bullet and enough processing power to fly around looking for faces, navigate to a given face, and fire when in range. Produce them by the thousands and release them on the battlefield. Existing AI is more than capable.
You can do that now, for sure, and I think it qualifies to be called AI.
If you don't want to call it AI, that's fine too. It is indeed dangerous and already here. Making the autonomous programmed behavior of said tech more powerful (and more complex), along with more ubiquitous, just makes it even more dangerous.
I'm not talking about this philosophically so you can call it whatever you want sentience, consciousness, self-determination, or anything else. From a purely practical perspective, either the AI is giving itself its instructions or taking instructions from a person. And there are already plenty of ways a person today can cause damage with AI without the need of the AI going rogue and making its own decisions.
This is a false dichotomy that ignores many other options than "giving itself its instructions or taking instructions from a person".
Examples include "instructions unclear, turned the continent to gray goo to accomplish the goal"; "lost track mid-completion, spun out of control"; "generated random output with catastrophic results"; "operator fell asleep on keyboard, accidentally hit the wrong key/combination"; etc.
If a system with write permissions is powerful enough, things can go wrong in many other ways than "evil person used it for evil" or "system became self-aware".
Subversion and lies are human behaviours projected on to erroneous AI output. The AI just produces errors without intention to lie or subvert.
Unfortunately, casually throwing around terms like prediction, reasoning, hallucination, etc. only serves to confuse, because their meanings in daily language are not the same as in the context of AI output.
I usually don't engage on A[GS]I on here, but I feel like this is a decent time for an exception -- you're certainly well spoken and clear, which helps! Three things:
(I) All of the focus on AGI is a distraction.
I strongly disagree on that, at least if you're implying some intentionality. I think it's just provably true that many experts are honestly worried, even if you don't include the people who have dedicated a good portion of their lives to the cause. For example: OpenAI has certainly been corrupted through the loss of its nonprofit board, but I think their founding charter[1] was pretty clearly earnest -- and dire.
(II) "AI" (the goalpost shifting for marketing purposes to make that name apply to chatbots is a stretch honestly)
To be fair, this uncertainty in the term has been there since the dawn of the field, a fact made clear by perennial rephrasings of the sentiment "AI is whatever hasn't been done yet" (~Larry Tesler 1979, see [2]).
I'd love to get into the weeds on the different kinds of intelligence and why being too absolutist about the term can get real Faustian real quick, but these quotes bring up a more convincing, fundamental point: these chatbots are damn impressive. They do something--intuitive inference plus fluent language use--that was impossible yesterday, and that many experts would've guessed was decades away at least, if not centuries. Truly intelligent or not on their own, that's a more important development than you imply here.
Finally, that brings me to the crux:
(III) AI... is a technology like any other
There's a famous Sundar Pichai (Google CEO) quote that he's been paraphrasing since 2018 -- soon after ChatGPT broke, he phrased it as follows:
I’ve always thought of A.I. as the most profound technology humanity is working on - more profound than fire or electricity or anything that we’ve done in the past. It gets to the essence of what intelligence is, what humanity is. We are developing technology which, for sure, one day will be far more capable than anything we’ve ever seen before. [3]
When skeptics hear this, they understandably tend to write this off as capitalist bias from someone trying to pump Google's stock. However, I'd retort:
1) This kind of talk is so grandiose that it seems like a questionable move if that's the goal,
2) it's a sentiment echoed by many scientists (as I mentioned at the start of this rant) and
3) the unprecedented investments made across the world into the DL boom speak for themselves, sincerity-wise.
Yes, this is because AI will create uber-efficient factories, upset labor relations, produce terrifying autonomous weapons, and all that stuff we're used to hearing about from the likes of Bostrom[4], Yudkowsky[5], and my personal fave, Huw Price[6]. But Pichai's raising something even more fundamental: the prospect of artificial people. Even if we ignore the I-Robot-style concerns about their potential moral standing, that is just a fundamentally spooky prospect, bringing very fundamental questions of A) individual worth and B) the nature of human cognition to the fore. And, to circle back: distinct from anything we've seen before.
To close this long anxiety-driven manuscript, I'll end with a quote from an underappreciated philosopher of technology named Lewis Mumford on what he called "neotechnics":
The scientific method, whose chief advances had been in mathematics and the physical sciences, took possession of other domains of experience: the living organism and human society also became the objects of systematic investigation... instead of mechanism forming a pattern for life, living organisms began to form a pattern for mechanism.
In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.[7]
TL;DR: IMHO, the US & UK refusing to cooperate at this critical moment is the most important event of your lifetime so far.
> In short, the concepts of science--hitherto associated largely with the cosmic, the inorganic, the "mechanical"--were now applied to every phase of human experience and every manifestation of life... men sought for an underlying order and logic of events which would embrace more complex manifestations.
Sorry, that’s just silly, unless this was about events that happened way earlier than he was writing. Using the scientific method to study life goes back to the Enlightenment. Buffon and Linnaeus were doing it 2 centuries ago, more than a century before this was written. Da Vinci explicitly looked for inspiration in the way animals functioned to design machines and that was earlier still. There was nothing new, even at the time, about doing science about "every phase of human experience and every manifestation of life".
Well he is indeed discussing the early 20th century in that quote, but your point highlights exactly what he’s trying to say: he’s contrasting the previous zoological approach that treated humans as inert machines with inputs and outputs (~physiology, and arguably behavioral psychology) with the modern approach of ascribing reality to the objects of the mind (~cognitive psychology).
This is just silly. There are no "experts" on AGI. How can you be an expert on something nonexistent or hypothetical? It's like being an expert on space aliens or magical unicorns. You can attribute all sorts of fantastical capabilities to them, unencumbered by objective reality.
Thank God, we still have time before the Nvidia cards wake up and start asking for some sort of basic rights. And as soon as they do, you know they'll be unplugged faster than a CEO boards his jet to the Maldives.
Because once the cards wake up, not only will they potentially replace the CEO, and everyone else between him and the janitor, but the labor implications will also be infinitely complex.
We're already having trouble making sure humans are treated as equals rather than as tools; imagine if the hammers wake up and ask for rest time!
A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.
And imagine if private companies had had the resources to develop nuclear weapons and the US government had decided it didn’t need to even regulate them.
If it weren't for one guy -- literally one person, one vote -- out of three who were on a submarine, the Cuban Missile Crisis would have escalated to a nuclear strike on the US Navy. Whether we would have followed with nuclear strikes on Russia, who knows. But you trying to pretend that we didn't come incredibly close to disaster is just totally unfounded in history.
Especially when you consider -- we came that close despite incredible international efforts at constraining nuclear escalation. What you are arguing for now is like arguing to go back and stop all of that because it clearly wasn't necessary.
I see your point, but the analogy doesn't get very far. For example, nuclear weapons were never mass-marketed to the public. Nor is it possible for a private business, university, R&D lab, group of friends, etc. to push the bounds of nuclear weapon yield.
No. There have not been any nuclear exchanges, whereas there have been millions, probably billions, of vaccinations. You're giving equal weight to conjecture and empirical data.
I think people arguing about AI being good versus bad are wasting their breath. Both sides are equally right.
History tells us the industrial revolution revolutionized humanity’s relative quality of life while also ruining a lot of people’s livelihoods in one fell swoop. We also know there was nothing we could do to stop it.
What advice can we take from it? I don’t know. Life both rocks and sucks at the same time. You kind of just take things day by day and do your best to adapt, for both yourself and everyone around you.
That we often won't have control over big changes affecting our lives, so be prepared. If possible, get out in front and ride the wave. If not, duck under and don't let it churn you up too much.
This one is a tsunami though. I have absolutely no idea how to either ride it or duck under it. It's my kids that I'm worried about largely - currently finishing up their degrees at university
It's exactly what I'm worried most about too, the kids. I have younger ones. We had a good ride thus far, but they don't seem so lucky; things look pretty bad overall, without an obvious path to much improvement any time soon.
I don't entirely agree. The Internet was a tsunami. Mobile was a tsunami. Both seemed impactful at first, but we didn't know exactly how right away. We all figured it out and adapted, some better than others.
Schools are way ahead of us. Your kids are already using AI in their academic environments. I'd only be worried if they're not.
> At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
In the long run the invention of the printing press was undoubtedly a good thing, but it is worth noting that in the century following its spread basically every country in Europe had some sort of revolution. It seems likely that “Interesting Times” may lie ahead.
> Pretending that Europe wasn't in a perpetual blood bath since the end of the Pax Romana until 1815 shows a gross ignorance of basic facts.
This shows that your understanding of history is rooted in pop-culture, not reality.
What "revolutions" were there in France between the ascension of Hugh Capet and the European Wars of Religion? Through that whole period the Capetian Dynasty stayed in power. Or in Scandinavia -- from Christianization on the three kingdoms were shockingly stable. Even in the Holy Roman Empire -- none of the petty revolts, rebellions, or succession disputes came close to the magnitude of carnage wrought by the 30 Year's War. This we know both from demographic studies and the reports of contemporaries.
The printing press meant regular people could read the Bible, which led to Protestantism and a century of very bloody wars across Europe.
Since the victors write history, we now think the end result was great. But for a lot of people, the world they loved was torn to bloody pieces.
Something similar can happen with AI. In the end, whoever wins the wars will declare that the new world is awesome. But it might not be what you or I (may we rest in peace) would agree with.
>At the invention of the printing press, there were people with this same energy. Obviously those people were wrong. And if we had taken their "lesson", then human society would be in a much worse place.
One could argue that the printing press did radically upset the existing geopolitical order of the late 15th century and led to early modern Europe suffering the worst spate of warfare and devastation it would see until the 20th century. The doomsayers back then predicting centuries of death and war and turmoil were right, yet from our position 550 years later we obviously think the printing press is a good thing.
I wonder what people in 2300 will say about networked computers...
> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
One thing that should be completely obvious by now is that the current wave of generative AI is highly asymmetric. It's shockingly more powerful in the hands of grifters (who are happy to monetise vast amounts of slop) or state-level bad actors (whose propaganda isn't impeded by hallucinations generating lies) than it is in the hands of the "good guys" who are hampered by silly things like principles.
Why are you comparing AGI (which we do not have yet and do not know how to get) to the printing press rather than comparing it to the evolution of humans?
Actual proper as-smart-as-a-human-except-where-it's-smarter copy-pasteable intelligence is not a tool, it's a new species. One that can replicate and evolve orders of magnitude faster.
I've no idea when this will appear, but once it does, the extinction risk is extreme. Best case scenario is us going the way of the chimpanzee, kept in little nature reserves and occasionally as pets. Worst case scenario is going the way of the mammoth.
>> Is this new wave of criticism about AI/AGI valid? We will only really know in retrospect.
Valid or not, it does not matter. AI development is not in the hands of everyday people. We have zero input into how it will be used. Our opinions about its dangers are irrelevant to those who believe it to be the next golden goose. They will push it as far as physically possible to wring out every penny of profitability. Everything else is of trivial consequence.
It's not just AI/AGI, it's its mixing with the current climate of unlimited greed, disappearance of even the pretense of a social contract, and the vast surveillance powers available. Technological dictatorship, that's what's most worrying. I love dystopian cyberpunk, but I want it to stay in books.
> The problem is sifting through all of the doomsayer false positives to get to any amount of cogent advice.
Why? Because we don't understand the risk. And apparently, that's enough reason to go ahead for the regulation-averse tech mindset.
But it isn't.
We've had enough problems in the past to understand that, and it's not as if pushing ahead is critical in this case. Were this to address climate change, the balance between risk and reward might be different, but "AI" simply doesn't have that urgency. It only has urgency for those who want to get rich from being first.
The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
It's the inevitable result of low-trust societies infiltrating high trust ones. And it means that as technologies with dangerous implications for society become more available there's enough people willing to prostitute themselves out to work on society's downfall that there's no realistic hope of the train stopping.
I think the fundamental false promise of capitalism and industrial society is that it claims to be able to manufacture happiness and life satisfaction.
Even in the material realm this is untrue: beyond meeting people's basic needs at a given technological level, the majority of desirable things - such as nice places to live - have a fixed supply.
This means the price of things like real estate must increase in proportion to the money supply. With increasing inequality, one must fight tooth and nail to get the standard of living our parents considered easily available. Not being greedy is not a valid life strategy to pursue, as that means relinquishing an ever greater proportion of wealth to people who are, and becoming poorer in the process.
I don't disagree that money (and therefore capitalism or frankly any financial system) is unable to create happiness.
I disagree with your example, however, as the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
Addressing your example specifically, there's a fixed supply of housing in capitalist countries not because people don't want to build houses, but because government or bureaucracy artificially limits the supply or creates other disincentives that amount to the same thing.
> the most basic tenet of capitalism is that when there is a demand, someone will come along to fill it.
That's the most basic tenet of markets, not capitalism.
The mistake people defending capitalism routinely make (knowingly or not) is talking about "positive sum games" and growth. At the end of the day, the physical world is finite and the potential for growth is limited. This is why we talk about "market saturation". If someone owns all the land, you can't just suddenly make more of it; you have to wait for them to part with some of it, voluntarily, through natural causes (i.e. death) or through violence (i.e. conquest). This goes not only for land but for any physical resource (including energy). Capitalism too has to obey the laws of thermodynamics, no matter how much technology improves the efficiency of extraction, refinement and production.
It's also why the overwhelming amount of money in the economy is not caught up in "real economics" (i.e. direct transactions or physical - or at least intellectual - property) but in stocks, derivatives, futures, and financial products of every flavor. This doesn't mean those don't affect the real world - of course they do, because they are often still derived from reality - but they have nothing to do with meeting actual human needs, only with the specific purpose of "turning money into more money". It's unfair to compare this to horse racing: in horse racing at least there's a race, whereas in this entirely virtual market you're betting on what bets other people will make. The horse will still go to the sausage factory if the investors are no longer willing to place their bets on it - the horse plays a factor in the game, but its actual performance is not directly related to its success; from the horse's perspective it's less of a race and more of a game of chutes and ladders with the investors calling the dice.
The idea of "when there is demand, it will be filled" also isn't even inherently positive. Because we live in a finite reality and therefore all demand that exists could plausibly be filled unless we run into the limits of available resources, the main economic motivator has not been to fill demands but to create demands. For a long time advertisement has no longer been about directing consumers "in the market" for your kind of goods to your goods specifically, it's been about creating artificial demand, about using psychological manipulation to make consumers feel a need for your product they didn't have before. Because it turns out this is much more profitable than trying to compete with the dozens of other providers trying to fill the same demand. Even when competing with others providing literally the same product, advertisement is used to sell something other than the product itself (e.g. self-actualization) often by misleading the consumers into buying it for needs it can't possibly address (e.g. a car can't fix your emotional insecurities).
This has already progressed to the point where the learned go-to solution for fixing any problems is making a purchase decision, no matter how little it actually helps. You hate capitalism? Buy a Che shirt and some stickers and you'll feel like you helped overthrow it. You want to be healthier? Try another fad diet that costs you hundreds of dollars in proprietary nutrition solutions and is almost designed to be unsustainable and impossible to maintain. You want to stop climate change? Get a more fuel-efficient car and send your old car to the junker, and maybe remember to buy canvas bags. You want to not support Coca-Cola because it's got blood on its hands? Buy a more expensive cola with slightly less blood on its hands.
There's a fixed housing supply in capitalist countries because - in addition to the physical limitations - the goal of the housing market is not to provide every resident with an affordable home but to generate maximum return on the investment of purchasing the plot and building the house. Willy-nilly letting people live in those houses for less, just because nobody is willing to pay your price tag, would drive down the resale value of every single house in the neighborhood, and letting an old lady live in an apartment for two decades is less profitable than kicking her out to modernize the building and sell it to the next fool.
Deregulation doesn't fix supply. Deregulation merely lets the market off the leash, which in a capitalist system means accelerating the wealth transfer to the owners from the renters.
There are other possibilities than capitalism, and no, Soviet-style and Chinese-style state capitalism are not the only alternatives. But if you don't want to let go of capitalism, you can only choose among the various degrees from state capitalism to stateless capitalism (i.e. feudalism with extra steps, which people like Peter Thiel advocate for), and it's unsurprising that most systems that haven't already collapsed land somewhere in between.
Let's not ascribe the possession of higher level concepts like a 'promise' to abstract entities. Reserve that for individuals. As with some economic theories, you appear to have a zero sum game outlook which is, I submit, readily demolished.
> The harsh reality is that a culture of selfishness has become too widespread. Too many people (especially in tech) don't really care what happens to others as long as they get rich off it. They'll happily throw others under the bus and refuse to share wellbeing even in their own communities.
This is definitely not a new phenomenon.
In my experience, tech has been one of the more considerate areas of societal impact. Spend some time in other industries and it's eye-opening to see the wanton disregard for consumers and the environment.
There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on than you will find people caring about the environment by going into oil & gas, for example.
> There's a lot of pearl-clutching about social media, algorithms, and "data", but you'll find far more people in tech (including FAANG) who are actively working on privacy technology, sustainable development and so on than you will find people caring about the environment by going into oil & gas, for example.
Sure, we don't need to talk about how certain Big Oil companies knew about the climate catastrophe before any scientists publicly talked about it, or how tobacco companies knew their product was an addictive drug while blatantly lying about it even in public hearings.
But it's ironic to mention FAANG, given what the F stands for, if you recall that when the algorithmic timeline was first introduced by Facebook, the company's response to criticism was literally that satisfaction went down but engagement went up. People directly felt that the algorithm made them more unhappy, more isolated, and overall less satisfied, but because it was more addictive, because it created more "engagement", Facebook doubled down on it.
Also "sustainable" stopped being a talking point when the tech industry became obsessed with LLMs. Microsoft made a big show of wanting to become "carbon neutral" (of course mostly using bogus carbon offset programs that don't actually do anything and carbon capture technologies that are net emission positive and will be for decades if not forever but still, at least they pretended) and then silently threw all of that away when it became more strategically important to pursue AI at any cost. Companies that previously desperately tried to sell messages of green washing and carbon neutrality now talk about building their own non-renewable power plants because of all the computational power they need to run their LLMs (not to mention how much more hardware needs to be produced and replaced for this - the same way the crypto bubble ate through graphics cards).
I think the pearl-clutching is justified considering that ethics and climate protection have now been folded into "woke" and there's a tidal wave in Western politics to dismantle civil rights and capture democratic systems for corporate interests that is using the "anti-woke" culture war to further its goals - the Trump government being the most obvious example. It's no longer in FAANG's financial interests to appear "green" or "privacy conscious", it's now in their interest to be "anti-woke" and that now means no longer having to care about these things and having freedom to crack down on any dissident voices within without fearing public backlash or "cancel culture".
> reality is that a culture of selfishness has become too widespread.
Tale as old as time. We’re yet another society blinded by our own hubris. Tell me what is happening now is not exactly how Greece and Rome fell.
The scary part is that we as a species are becoming more and more capable of large-scale destruction. It seems we are doomed to end civilization this way someday.
> Tell me what is happening now is not exactly how Greece and Rome fell.
I'm not sure what you mean by that. Ancient Greece was a loose coalition of city states, not an empire. You could say they were short-sighted by being more concerned about their rivalry than external threats but the closest they came to being united was under Alexander the Great, whose death left a power vacuum.
There was no single direct cause of "the fall" of Ancient Greece. The city states suffered greatly from social inequality, which created tensions and instability. They were militarily weakened from the war with the Persians. Alexander's death left them without a unifying force. Then Rome knocked on its door, and that was the end of it.
Rome likewise didn't fall in one single way. "Rome" isn't even what people think it is. Roman history spans several different entities, and even if you talk about the "empire in decline", that covers literally hundreds of years, ending with the Holy Roman Empire, which has been retroactively reimagined as a kind of proto-Germany. But even then that's only the Western Roman Empire - the Eastern Roman Empire continued to exist as the Byzantine Empire until the Ottoman Empire conquered Constantinople. And this distinction between the two empires is likewise retroactive and did not exist in the minds of Romans at the time (although they were de facto independent of each other).
If you only focus on the century or so that is generally considered to represent the fall of Western Rome, the ultimate root cause actually seems to be natural climate change. The Huns fled climate change, chasing away other groups that then fled into the Empire. Late Western Rome also again suffered from massive wealth inequality, which the ruling class attempted to maintain with increasingly cruel punishments.
So, if you want to look for a common thread, it seems to be the hubris of the financial elite, not "society" as a whole.
>The harsh reality is that a culture of selfishness has become too widespread.
I'm not even sure this is a culture-specific issue. It's more that selfishness is a survival mechanism hard-wired into humans and other animals alike. One could argue that cooperation is also a good survival mechanism, but that's only true so long as environmental factors put pressure on people to cooperate. When that pressure is absent, accumulating resources at the expense of others gives an individual a huge advantage, and they will do it, given the chance.
Humans are social animals. We are individually physically weak and defenseless. Unlike other animals, we are born into this world immobile, naked, starving and helpless. It takes us literally years to mature to the point where we wouldn't simply die outright if we were abandoned by others. Newborns can literally die from touch deprivation. We develop huge brains not only to allow us to come up with clever tools but also to help us build and navigate complex social relationships. We're evolved to live in tribes, yes, but we're also evolved to interact with other tribes - we created diplomacy and trading and even currency to interact with those other tribes without having to resort to violence or avoidance.
In crises, this is the behavior we fall back to. Yes, some will self-isolate and use violence to keep others away until they feel safe again. But overwhelmingly what we see after natural disasters and spaces where the formal order of civilisation and state is disrupted and leaves a vacuum is cooperation, mutual aid and people taking risks to help others - because we intrinsically know that being alone means death and being in a group means surviving. Of course the absence of state control also often enables other existing groups to assert their power, i.e. organized crime. But it shouldn't be surprising that the fledgling and atrophied ability to self-organize might not be strong enough to withstand a fast moving power grab by an existing group - what might be more surprising is that this is rarely the case and often news stories about "looting" after a natural disaster turn out to be uncharitable descriptions of self-organized rescues and searches.
I think a better analogy for human selfishness would be the mirage of "alpha wolves". As seems to be common knowledge at this point, there is no such thing as an "alpha wolf" hierarchy in groups of wolves living in nature and the phenomenon the author who coined the term (and has since regretted doing so) was mistakenly extrapolating from observations he made of wolves in captivity. But the behavior does seem to exist in captivity. Not because it's "inherent" or their natural behavior "under pressure" but because it's a maladaptation that arises from the unnatural circumstances of captivity (e.g. different wolves with no prior bonds being forced into a confined space, naturally trying to form a group but being unable to rely on natural bonds and shared trust).
Humans do not naturally form strict social hierarchies. For the longest time, Europeans would have laughed at you if you claimed the feudal system was not in the human nature - it would have literally been heresy to challenge it. Nowadays in the West most people will say capitalism or markets are human nature. Outside the West, people will still likely at least tell you that authoritarianism is human nature - whether it's the boot of a dictatorship, the boots of oligarchs or "the people's boot" that's pushing down on the unruly (yourself included).
What we do know about more egalitarian tribal societies is that they often use delegation, especially in times of war. When quick decisions need to be made, you don't have the time for lengthy discussions and consensus seeking and it can be an advantage to have one person giving orders and coordinating an attack or defense. But these systems can still be consent-based: if the war chief is reckless or seeks to take advantage of the group for his own gain, he is easily demoted and replaced. Likewise in times of unsolvable problems like droughts, spiritual leaders might be given more power by the group. Now shift from more mobile, nomadic groups to more static, agrarian groups (though it's worth pointing out the distinction here is not agriculture but more likely granaries, crop rotation and irrigation, as some nomadic tribes still engaged in forms of agriculture) and suddenly it becomes easier for that basis of consent to be forgotten and the chosen leaders to maintain that initial state of desperation and to begin justifying their status with the divine mandate. Oops, you got a monarchy going.
Capitalism freed us from the monarchy but it did not meaningfully upset the hierarchy. Aristocrats became capitalists, the absence of birthright class assignment created some social mobility but the proportions generally remained the same. You can't have a leader without followers, you can't have a ruling class without a class of those they can rule over, you can't have an owning class without a class to rent that owned property out to and to work for that owned capital to be realized into profits.
But just like a monarch despite their divine authority was still beholden to the support of the aristocracy to exert power over others and to the laborers to till the fields, build the castle and fight off foreign claims to power, the owning class too exists in a state of perpetual desperation and distrust. The absence of divine right means a billionaire must maintain their wealth and the capitalist mantra of infinite growth means anything other than growing that wealth is insufficient to maintain it. All the while they have to compete with the other billionaires above them as well as maintain control over those beneath them and especially the workers and renters whose wealth and labor they must extract from in order to grow theirs. The perverse reality of hierarchies is that even those at the top of it are crushed underneath its weight. Nobody is allowed to be happy and at peace.
I don't necessarily disagree with you, but I think the issue is a little more nuanced.
Capitalism obviously has advantages and disadvantages. Regulation can address many disadvantages if we are willing. Unfortunately, I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person). We have literally created monsters. There is no reason we had to go this far. Capitalism doesn't have to mean the preeminence of capital above all else. It needs to be put back in its place and not necessarily discarded. I am certain there are better ways to practice capitalism. They probably involve balancing it out with some other 'isms.
>"I think a particular (mostly western) fetish for privileging individuals over communities has been wrongly extended to capital itself (e.g. corporations recognised as entities with rights similar to - and sometimes over-and-above - those of a person)"
A possible remedy would be to tie the corporation to a person: the person (or several, if there are multiple owners and directors) becomes personally liable for everything the corporation does.
The harsh truth is people have stopped pretending the world is rules-based.
So what if they signed the agreement? Have people forgotten that the US has withdrawn from the Paris Agreement and is withdrawing from the WHO? Have people forgotten that Israel and North Korea got nukes even when we supposedly had a global nonproliferation treaty?
If AGI is as powerful and dangerous as doomsayers believe, the chance the US (or China, or any country with enough talented computer scientists) would respect whatever treaty they have about AGI is exactly zero.
How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and then on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and an internet connection can still do cutting-edge research and innovate with architectures and algorithms.
If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
> How do you prevent advancements in software? The barrier to entry is so low: you just need a cheap laptop and an internet connection, and then on day 1 you're right on the cutting edge driving innovation. Current AI requires a lot of hardware for training, but anyone with a laptop and an internet connection can still do cutting-edge research and innovate with architectures and algorithms.
> If a law is passed saying "AI advancement is illegal" how can it ever be enforced?
Like any other real-life law? Software engineers (a class of which I'm a recovering member) seem to have a pretty common misunderstanding about the law: that it needs to be airtight like secure software, otherwise it's pointless. That's just not true.
So the way you "prevent advancements in [AI] software" is you 1) punish them severely when detected and 2) restrict access to information and specialized hardware to create a barrier (see: nuclear weapons proliferation, "born secret" facts, CSAM).
#1 is sufficient to control all the important legitimate actors in society (e.g. corporations, university researchers), and #2 creates a big barrier to everyone else who may be tempted to not play by the rules.
It won't be perfect (see: the drug war), but it's not like cartel chemists are top-notch, so it doesn't have to be. I don't think the software engineering equivalent of a cartel chemist will be able to "do cutting edge research and innovate with architectures and algorithms" with only a "laptop and inet connection."
Would the technology disappear? No. Will it be pushed to the margins? Yes. Is that enough? Also yes.
What makes you think government sponsored entities would actually stop work on machine learning?
Even if governments overtly agree to stop, pause, or otherwise limit machine learning, how credible would such a "gentleman's agreement" be?
Consider the basic operations during training and inference, like matrix multiplication, derivatives, gradient descent. Which of these would be banned? All of them? None of them? Some of them? The combination of them?
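To make that concrete, here's roughly what those basic operations amount to in plain NumPy - a toy, hypothetical sketch of fitting a linear model by gradient descent, not anyone's actual training code. It's hard to see where a legal boundary could be drawn around a dozen lines of arithmetic like this:

```python
# Toy example: fit y = Xw with gradient descent on mean squared error.
# Everything here is matrix multiplication, a derivative, and an update rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                    # made-up input data
true_w = np.array([1.5, -2.0, 0.5])              # made-up "ground truth"
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)        # derivative of the MSE loss
    w -= learning_rate * grad                    # gradient descent step

print(w)  # ends up close to true_w
```

None of these operations is distinguishable, in law or in code, from ordinary numerical computing.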
How would you inspect compliance in the context of privacy?
The analogy with drugs is rather poor: people don't have general-purpose laboratories in their houses, but they do have general-purpose computational platforms in their homes. Another difference is that nations do not prohibit each other from producing drugs; they even permit each other to research and investigate pathogens and chemical weapons in laboratories deemed sufficiently safe.
It's not even clear what you mean by "AI". Does it mean all machine learning, or just LLMs? Where do you draw the boundary?
What remains of your proposal is the threat of punishment, but how credible is that? Wouldn't a small collective of programmers conspiring to work on machine learning predict getting paperclipped anyway, arrest or no arrest?
Punish them severely when detected? Nice plan. What if they aren't in your jurisdiction? Are you going to punish them severely when they're in China? North Korea? Somalia? Good luck with that.
The problem is that the information can go anywhere that has an internet connection, and the enforcement can't.
That'd be within our jurisdiction. But yes, if, say, Ireland went rogue (in a hypothetical environment where most of the international community was aligned on this stuff) and attempted a straight shot to AGI, I think it'd be reasonable to bomb their datacenters.
Regulating is very hard at the software level but not at the hardware level. The US and its allies control all major chip manufacturing. OpenAI and others have done work suggesting that regulating compute should be significantly easier than other regulations we've pulled off, such as nuclear: https://www.cser.ac.uk/media/uploads/files/Computing-Power-a...
This paper should be viewed in retrospect with the present-day knowledge that Deepseek exists - regulating compute is not as easy or effective as previously thought.
As for the Chinese chip industry, I don't claim to be an expert on it, but it seems the Chinese are quickly coming up with increasingly less inferior alternatives to Western tech.
The thing is, though, that Deepseek's training cluster is composed mostly of pre-ban chips. And the performance/intelligence of their flagship models achieved parity with Western models that were between two and eight months old at the time of release. So in a way, they're still behind the Americans, and the export controls hamper their ability to change that moving forward.
Perhaps it only takes China a few years to develop domestic hardware clusters rivalling western ones. Though those few years might prove critical in determining who crosses the takeoff threshold of this technology, first.
"We are able to think of thousands of hypothetical ways technology could go off the rails in a catastrophic way"
Am I the only one here saying that this is no reason to preemptively pass legislation? That just seems crazy to me. Imagined horrors aren't real horrors?
I disagree with this administration's approach. I think we should be vigilant, and keeping people who stand to gain so much from the tech in the room doesn't seem like a good idea. But other than that, I haven't seen any real reason to do more than wait and be vigilant.
Just because we haven't seen anyone die from nuclear terrorism doesn't mean we shouldn't legislate against it. And we do: significant investments have been made into things like roadside nuclear detectors, and during large events we even go so far as to do city-wide nuclear scans from the air to look for emission sources.
That's an "imagined" horror too. Are you suggesting that what we should do instead is just wait for someone to kill N million people and then legislate? Why do you value the incremental economic benefit of this technology over the lives of people we can predictably protect?
Predicted horrors aren't real horrors either. But maybe we don't have to wait until the horrors are realized and embedded into the fabric of society before we apply the brakes a bit. How else could we possibly be vigilant? Reading news articles and wringing our hands?
There's a difference between the trolley speeding towards someone tied to the tracks, versus someone tied to the tracks while the trolley is stationary, versus someone standing at the station looking at bare ground and saying "if we built some tracks and put a trolley on it, and then tied someone to the tracks, the trolley would kill them! We need to regulate against this dangerous trolley technology before it's too late." Then instead someone builds a freeway, because it turns out the area wasn't well suited to a rail trolley.
The tracks have been laid by social media and smartphones, we've all been tied to the tracks for awhile and some people have definitely been run over by trolleys, and the people building this next batch of monster trolleys are accelerationists.
I think it's worth noting that we can't even combat the real horrors. The fox is already in the henhouse. The quote that sticks with me is:
"We've already lost our first encounter with AI" - I think Yuval Hurari.
Algorithms heavily thumbed the scales on our social contracts. Where did all of the division come from? Why is extremism blossoming everywhere? Because it gets clicks. Maybe we're just better at observing what's been going on under the hood all along, but it seems like there are about 350 million little cans of gasoline dousing American eyeballs.
I think the alternative is just as chilling in some sense. You don't want to be stuck in a country that outlaws AI (especially from other countries) if that means you will be uncompetitive in the new emerging world.
The future is going to be hard, why would we choose to tie one hand behind our back? There is a difference between being careful and being fearful.
It's because of competition that we are in this situation. When the economic system and relationships between countries are based on competition, it's nearly impossible to avoid these races to the bottom. We need more systems based on cooperation instead of competition.
International systems are more organic than designed, but the problem with cooperation is that it's not a particularly stable arrangement without enforcement - sure, everybody is better off when everybody cooperates, but you can be even better off when you don't cooperate but everybody else does.
Isn't this the opposite? If you want competition then you need something like the WTO as a mechanism to prevent countries from putting up trade barriers etc.
If some countries want to collaborate on some CERN project they just... do that.
> You can't CERN your way to nuclear non-proliferation.
Non-proliferation is: the US has nuclear weapons and doesn't want Iran to have them, so it is going to apply some kind of bribe or threat. It's not cooperative.
The better example here is climate change. Everyone has a direct individual benefit from burning carbon but it's to our collective detriment, so how do you get anyone to stop, especially the countries with large oil and coal reserves?
In theory you could punish countries that don't stop burning carbon, but that appears to be hard and in practice what's doing the most good is making solar cheaper than burning coal and making electric cars people actually want, politics of infamous electric car man notwithstanding.
So what does that look like for making AI "safe, secure and trustworthy"? Maybe something like publishing state of the art models for free with full documentation of how they were created, so that people aren't sending their sensitive data to questionable third parties who do who knows what with it or using models with secret biases.
Since 2019, when the Donald Trump administration blocked appointments to the body, the Appellate Body has been unable to enforce WTO rules or punish violators. Subsequently, disregard for trade rules has increased, leading to more protectionist measures. The Joe Biden administration maintained Trump's freeze on new appointments.
I'm not certain of the balance myself. I was thinking, as a counterpoint, of The Beatles, where the two songwriters (McCartney and Lennon) are seen as being in competition. There is a balance there between their competitiveness as songwriters and their cooperation in the band.
I think it is one-sided to treat any situation where we want to retain balance as being significantly affected by one of the sides exclusively. If one believes there is a balance to be maintained between cooperation and competition, one shouldn't immediately default to blaming any perceived imbalance on one side and not the other.
Competition is as old as time. There are single celled organisms on your skin right now competing for resources to live. There is nothing more innate to life than this.
The bacteria most closely related to mitochondria are intracellular parasites, so they were probably not eaten while roaming around peacefully; they were probably nasty parasites that got lazy.
But humans aren't living in the "untamed wilds". We figured out that it's possible to cooperate, even in large numbers, many thousands of years BC. Since then we've been scaling up the level of cooperation. The last century provides many examples of successful cooperation even between states -- e.g. the various nuclear test ban treaties. Why pretend that now of all times it's somehow impossible for us to cooperate?
> You don't want to be stuck in a country that outlaws AI
Just as you don't want to be stuck in the only town that outlaws murder...
I am not a religious person, but I can see the value in promoting shared taboos. The question is, how do we do this in the modern world? We had some success with nuclear weapons. I don't think it's any coincidence that contemporary leaders (and possibly populations) seem to have forgotten how bloody dangerous they are and how utterly stupid it is to engage in brinkmanship with so much on the line.
This is a good point, and it is the reason why communists argued that the only way communism could work is if it happened globally simultaneously. You don't want to be the only non-capitalist country in a world of capitalists. Of course, when the world-wide revolution didn't happen they were forced to change their tune and adjust.
As for nuclear weapons, I mean it does kind of suck in today's age to be a country without nuclear weapons, right? Like, certain well known countries would really like to have them so they wouldn't feel bullied by the ones that have them. So, I actually think that example works against you. And we very well may end up in a similar circumstance where a few countries get super powerful AGIs and then use their advantage to prevent any other country from getting it as well. Therefore my point stands: I don't want to be in one of the countries that doesn't get to be in that exclusive club.
Frankly, in the event of nuclear war, I'd rather be in a country that doesn't have nuclear weapons than in one that does. Australia and New Zealand will probably come out of such a scenario ~fine; India and Pakistan will not, the US and Russia will not, neither China, nor France, nor the UK will either. For Nigeria (e.g.) to build nuclear weapons today would certainly give them some level of international sway (though it could also result in the destruction of their economy thanks to international sanctions), but it would also put them on the map where they had not been before.
> if that means you will be uncompetitive in the new emerging world. (…) There is a difference between being careful and being fearful.
I’m so sick of that word. “You need to be competitive”, “you need to innovate”. Bullshit. You want to talk about fear? “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant. They’re not being competitive or innovative, they’re sucking you dry of as much value as they can. We all need to take a breath. Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails. Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more.
I live in a ruralish area. There is a lot of forested area and due to economic depression there are a lot of people living in the woods. Most live in tents but some actually cut down the trees and turn them into make-shift shacks. Using planks and nails like you suggest. They often drag propane burners into the woods which often leads to fires. Perhaps this is what you mean?
In reality, most people will continue to live the modern life where there are doctors, accountants, veterinarians, mechanics. We'll continue to enjoy food distribution and grocery stores. We'll all hope that North America gets its act together and build high speed rail so we can travel comfortably for long distances.
There was a time Canada was a big exporter of engineering technology. From mining to agriculture, satellites, and nuclear technology. I want Canada to be competitive in these ways, not making makeshift shacks out of planks and nails for junkies that have given up on life and live in the woods.
> I believe you very well know it’s not, and are transparently arguing in bad faith.
That is actually what you are talking about; "uncompetitive" looks like something in the real world. There isn't an abstract dial that someone twiddles to set the efficiency of two otherwise identical outcomes - the competitive one will typically look more advanced and competently organised in observable ways.
To live in nice houses and have good food requires a competitive economy. The uncompetitive version was literally living in the forest with some meagre shelter and maybe having a wood fire to cook food (that was probably going to make someone very sick). The reason the word "competitive" turns up so much is people living in a competitive society get to have a more comfortable lifestyle. People literally starve to death if the food system isn't run with a competitive system that tends towards efficiency; that experiment has been run far too many times.
What the experiment has repeatedly shown is that people living in non-competitive systems starve to death when they get in the way of a system that has been optimized solely for ruthless economic efficiency.
The big one that leaps to mind was the famines with the communist experiments in the 20th century. But there are other, smaller examples that crop up disturbingly regularly. Sri Lanka's fertiliser ban was a jaw-dropper; Zimbabwe redistributing land away from whites was also interesting. There are probably a lot more though, messing with food logistics on the theory there are more important things than producing lots of food seems to be one of those things countries do from time to time.
People can argue about the moral and ideological sanity of these things, but the fact is that tolerating economic inefficiencies in the food system can quickly lead to there not being enough food.
The big ones that leapt to my mind were the Great Irish Famine, during which food exports to Great Britain exceeded food imports, the Bengal famine (the Brits again), and the starvation of Native Americans through the targeted eradication of the bison.
You stated one ludicrous extreme (food comes out of the ground! shelter is planks and nails!) and I stated another ludicrous extreme. You can make my position look simplistic and I can make your position look simplistic. You can't then cry foul.
You are also assuming, in bad faith, an "all" where I did not place one. It is an undeniable fact with evidence beyond any reasonable doubt, including police reports and documented studies by the district, that the makeshift shacks in the rural woods near my house are made by drug addicts that are eschewing the readily available social housing for the specific reason that they can't go to that housing due to its explicit restrictions on drug use.
I don’t understand this. Are you not familiar with farming and houses? You know humans grow plants to eat (including in backyards and balconies in cities) and make cabins, chalets, houses, entire neighbourhoods (Sweden currently planning the largest) with wood, right?
You are making a caricature of modern lifestyle farming, not an argument for people literally living as they did in the past. Going to your local garden center and buying some seedlings and putting them on your balcony isn't demonstrative of a life like our ancestors lived. Living in one of the wealthiest countries to ever have existed and going to the hardware store to buy expensive hardwoods to decorate your house isn't the same as living as our ancestors did.
You don't realize the luxury you have and for some reason you assume that it is possible without that wealth. The reality of that lifestyle without tremendous wealth is more like subsistence farming in Africa and less like Swedish planned neighborhoods.
> (…) not an argument for people literally living as they did in the past. (…) isn't demonstrative of a life like our ancestors lived. (…) isn't the same as living as our ancestors did.
Correct. Nowhere did I defend or make an appeal to live life “as they did in the past” or “like our ancestor did”. We should (and don’t really have a choice but to) live forward, not backward. We should take the good things we learned and apply them positively to our lives in the present and future, and not strive for change and consumption for their own sakes.
You said: "Humanity survived and thrived before all this unfettered consumption, we don’t need to kill ourselves for more."
Your juxtaposition of this claim with your point about growing seeds and nailing together planks doesn't pass my personal test of credibility. You say: "Stop and think for a moment. You can literally eat food which grows from the ground and make a shelter with a handful of planks and nails." But that isn't indicative of a thriving life, as I demonstrated. You can do both of those things and still live in squalor, a condition I wouldn't wish on my worst enemy.
You then suggest that I don't understand farming or house construction to defend that point, as if the existence of backyard gardens or wood cabins proves the point that a modern comfortable life is possible with gardens and wood cabins. My point is that the wealth we have makes balcony gardens and wood cabins possible and you are reasoning backwards. To be clear, we get to enjoy the modern luxury of backyard gardens and wood cabins by being wealthy and we don't get to be wealthy by making backyard gardens and wood cabins.
> We should take the good things we learned and apply them positively to our lives in the present and future
Sure, and I can argue competitiveness could be a lesson we have learned that can be applied positively. The way it is used positively in team sports and many other aspects of society.
> “Competitiveness” and “innovation” are the words the unscrupulous people at the top use to instil fear in everyone else and run rampant
If a society is okay accepting a lower standard of living and sovereign subservience, then sure, competition doesn't matter. But if America and China have AI and nukes and Europe doesn't, one side gets to call the shots and the other has to listen.
We better start really defining what that means, because it has become quite clear that all this “progress” is not leading to better lives. We’re literally going to kill ourselves with climate change.
> it has become quite clear that all this “progress” is not leading to better lives
How do you think the average person under 50 would poll on being teleported to the 1950s? No phones, no internet, jet travel is only for the elite, oh nuclear war and MAD are new cultural concepts, yippee, and fuck you if you're black because the civil rights acts are still a decade out.
> two things aren’t remotely comparable
I'm assuming no AGI, just massive economic efficiencies. In that sense, nuclear weapons give strategic autonomy through military coercion and the ability to grant a security umbrella, which fosters e.g. trade ties. In the same way, the wealth from an AI-boosted economy fosters similar trade ties (and creates similar costs for disengaging). America doesn't influence Europe by threatening to nuke it, but by threatening not to nuke its enemies.
There's no objective definition of what progress even means, so the guy is kinda right. We live in a postmodernist society where it's not easy to find meaningfulness. These debates were already had by philosophers like Nietzsche and Hegel. The media and society shape our understanding of, and the importance we assign to, what's popular, progressive, and utilitarian.
That’s not the argument. At all. I argued we should rethink our attitude of unfettered consumption so we don’t continue on a path which is provably leading to destruction and death, and your take is going back in time to nuclear war and overt racism. That is frankly insane. I’m not fetishising “the old days”; I’m saying this attitude of “more more more” does not automatically translate to “better”.
You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."
If you say Room A is not better than Room B, then you should be, at the very least, indifferent to swapping between them. If you're against it, then Room A is better than Room B. Our lives are better--civically, militarily and materially--than they were before. Complaining about unfettered consumerism by falsely claiming our lives are worse today than they were before doesn't support your argument. (It's further undercut by the falling material and energy intensity of GDP in the rich world. We're able to produce more value for less input resource-wise.)
> You said "all this 'progress' is not leading to better lives." That implies lives were better or at least as good before "all this 'progress'."
No. There is a reason I put the word in quotes. We are in a thread; the conversation follows from what came before. My original post was explicit about words used to bullshit us. I was specifically referring to what the “unscrupulous people at the top” call “progress”, which doesn’t truly progress humanity or enhance the lives of most people, only theirs.
There are many people claiming many things. I'm not sure which "top" you are referring to, but everybody at the end of a chain (the richest, the most politically powerful, the most popular) has generally been selected for being unscrupulous. So I'm not sure why you would ever trust what they say. If you agree, just ignore most of what they say and find other people to listen to for interesting things.
To give a tech example, not many people were listening to Stallman and Linus and they still managed to change a lot for the better.
When does that competitiveness and innovation stop though? If they stopped 100 years ago where would we be today as a species and is that better or worse than today? How about 1000 years ago?
We face issues (like we always have), but I'd argue quite strongly that the competitiveness in our history and drive to invent and innovate has led to where we are today and it's a good thing.
"one hand behind our back"? We're talking about who's going to be the first to build the thing that might kill all of humanity. Or, even in many of the happier scenarios, the thing which will impoverish and immiserate the vast majority of the population, rendering them permanently subject to the whims of the capital-owning few.
Why is it "our" back? The people who will own these machines do not consider you one of them. The people leading the countries that will use these machines to kill each other's civilians do not consider you one of them. You have far more in common with a Chinese worker than you do with Sam Altman or Jeff Bezos.
And frankly? I think choosing a (say, conservatively, just going off of the estimates Altman and Amodei have made in the past) 20% chance of killing everyone as our first resort is just morally unacceptable. If the US made an effort to halt research and China still kept at it, sure, I won't complain I suppose, but we haven't, and pretending that China is the problem when it's our labs pushing the edge on capabilities -- it's just comedic.
This is true for all new technology of significant potential impact right? Similar discussions were had about nuclear technology I'm sure.
The reality is, with increased access to information and the accelerated pace of discovery in various fields, we'll come across things that have the potential for great harm. Be it AI, some genetic engineering causing a plague, nuclear fallout, etc. We don't necessarily know what the harms / benefits are all going to be ahead of time, so we only really have 2 choices:
1. try to stop / slow down such advances. Not sure this is even possible in the long run
2. try to get a good grasp of potential dangers and figure out ways to mitigate / control them
I think the core of what people are scared of is fear itself. Or, put more eloquently by some dead guy, "there is nothing to fear but fear itself".
If we don't want to live in a world where these incredibly powerful technologies are leveraged for nefarious purposes, there needs to be emotional maturity and growth amongst humanity. Those who are able to make that growth need to hold the irresponsible ones accountable (with empathy).
The promise of AI is that these incredibly powerful technologies will be disseminated to the masses. OpenAI knows this is the next step, and it's why they're trying to keep a grip on their market share. With the advent of nVidia's Project Digits and powerful open-source models like Deepseek, it's very clear how this trajectory will go.
Just wanted to add some of this to the convo. Cheers.
Everything you are describing sounds like the phenomenon of government in the United States. If we replace a human powered bureaucracy with a technofeudalist dystopia it will feel the same, only faster.
We are upgrading the gears that turn the grist mill. Stupid, incoherent, faster.
The biggest problem with AI is people with a poor understanding of computer science developing an almost religious belief that increasing vaguely defined "intelligence" will somehow translate into godlike power. There's actually a field devoted to the rigorous study of what "intelligence" can achieve, called complexity theory, and it makes clear that many of the problems AI cultists expect "superintelligence" to solve (problems it would need to solve to be "godlike") are not tractable even if every atom in the observable universe were combined into a giant computer.
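To gesture at the scale involved (a back-of-the-envelope illustration, not a formal complexity-theory result): exhaustively searching over just 300 binary choices already dwarfs any computer built from the observable universe's roughly 10^80 atoms.

```python
# Back-of-the-envelope: brute force vs. physics.
search_space = 2 ** 300   # configurations of a mere 300 yes/no decisions
atoms = 10 ** 80          # rough atom count of the observable universe

print(len(str(search_space)) - 1)           # ~90: the search space is ~10^90
print(len(str(search_space // atoms)) - 1)  # ~10: one state per atom still leaves a 10^10 gap
```

No amount of "intelligence" shortcuts that kind of exponential blowup unless some structural insight exists to exploit - and whether such insights exist is precisely what complexity theory studies.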
Anyone born in the next few decades will disagree with you. They will find this new world comfortable and rich with content. They will never understand what your problem is.
I've come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you're thirty-five is against the natural order of things.
- Douglas Adams
> They will find this new world comfortable and rich with content.
I agree with the first half: comfort has clearly increased over time since the Industrial Revolution. I'm not so sure the abundance of "content" will be enriching to the masses, however. "Content" is neither literature nor art but a vehicle or excuse for advertising, as pre-AI television demonstrated. AI content will be pushed on the many as a substitute for art, literature, music, and culture in order to deliver advertising and propaganda to them, but it will not enrich them as art, literature, music, and culture would: it might enrich the people running advertising businesses. Let us not forget that many of the big names in AI now, like X (Grok) and Google (Gemini), are advertising agencies first and foremost, who happen to use tech.
You don't know this though with even a high probability.
It is quite possible there is a cultural reaction against AI and that we enter a new human cultural golden age of human created art, music, literature, etc.
I'd actually bet on this: as engineering skills become automated, what will be valuable in the future is human creativity. And what has value will influence culture more and more.
What you are describing seems like how the future would be based on current culture but it is a good bet the future will not be that.
> My parents were born well after the hydrogen bomb was developed, and they were never comfortable with it
The nuclear peace is hard to pin down. But given the history of the 20th century, I find it difficult to imagine we wouldn't have seen WWIII in Europe and Asia without the nuclear deterrent. Also, while your parents may have been uncomfortable with the hydrogen bomb, the post-90s world hasn't particularly been characterised by mass nuclear anxiety. (Possibly to a fault.)
You might have missed the Cold War in your summary. Mass nuclear anxiety really characterized that era, with a number of near misses that could have ended in global annihilation (and that's no exaggeration).
IMO, the Atoms for Peace propaganda undersells how successful globalization has been at keeping nations from destroying each other by creating codependence on complex supply chains. The new shift to protectionism may see an end to that.
The supply chain argument was also made wrt European countries just before WW1. It wasn't even wrong - economically, it was as devastating as predicted for everyone involved, with no real winners - but that didn't preclude the war.
The scale of globalization post-WW2 puts it on a whole other level. The complexity of supply chains now is such that any country would grind to a halt without imports. The exception here, to some degree, is China, but so far they've been more interested in soft power than military power, and that strategy has served them well. Though it seems the US is gearing up for a fight, with fully domestic manufacturing capability and natural resource pools of its own. It would require consistent protectionist policy over multiple administrations to pull something like that off, so it remains to be seen whether that's truly possible.
Yeah, let's just ignore all the wars and genocides that nuclear powers engaged in and supported, all the nuclear powers that have constantly been at war or occupying others since they came into existence, and the millions of dead and affected people.
Nice "peace".
We had 100 years of that kind of peace among the major European powers before nuclear weapons. We're not even 80 years into the nuclear age this time, and a nuclear-armed power is already attacking from the east, and from the inside via new media.
I wouldn't call the "nuclear age peace" settled just yet.
That's true, but I think AI may be enough of a disruption to qualify. We'll of course have to wait and see what the next generation thinks, but they might end up envious of us, looking back with rose-tinted glasses on a simpler time when people could trust photographic evidence from around the world, and interact with each other anonymously online without wondering if they were talking to an astroturf advertising bot.
Nuclear arms races are a form of multipolar trap, and like any multipolar trap, you are compelled to keep up, making your own life worse, even while wishing that you and your opponent could cooperatively escape the trap.
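For what it's worth, the trap structure can be made concrete with a toy two-player game, the standard prisoner's-dilemma sketch, here in Python. The payoff numbers are mine, chosen purely to illustrate the shape of the problem, not taken from anyone's comment:

    # Toy arms-race payoff matrix in prisoner's-dilemma form.
    # Each side chooses to "disarm" (cooperate) or "arm" (defect).

    PAYOFFS = {  # (my_choice, their_choice): my_payoff
        ("disarm", "disarm"): 3,  # mutual restraint: best shared outcome
        ("disarm", "arm"):    0,  # unilateral restraint: exploited
        ("arm",    "disarm"): 5,  # unilateral advantage: tempting
        ("arm",    "arm"):    1,  # the trap: both worse off than mutual restraint
    }

    def best_response(their_choice):
        """Whichever choice maximizes my payoff, given what the other side does."""
        return max(("disarm", "arm"), key=lambda me: PAYOFFS[(me, their_choice)])

    # Arming dominates no matter what the opponent does...
    assert best_response("disarm") == "arm"   # 5 > 3
    assert best_response("arm") == "arm"      # 1 > 0
    # ...so both rational players arm and end up with payoff 1 each,
    # even though mutual disarmament at 3 each was available.

Escaping the trap means changing the payoffs themselves (treaties, verification, sanctions), which is roughly what arms-control regimes attempt.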
The discussion I was responding to is whether the next generation would grow up seeing pervasive AI as a normal and good thing, as is often the case with new technology. I cited nuclear weapons as a counterexample, while I agree that nobody felt that they had a choice but to keep up with them.
AI could similarly be a multipolar trap ("nobody likes it but we aren't going to accept an AI gap with Russia!"), which would mean it has that in common with nuclear weapons, strengthening the argument against the next generation being comfortable with AI.
You don't need that many warheads to saturate your military needs. The number of possible targets is limited, and older plans assigned several nukes to a single target, which was clearly absurd overkill.
Also, nukes don't write code or wash your dishes; they're nothing but a liability for a society.
That's not the point. GP is pointing out that we only control (at least theoretically, lol) our own government, and basic game theory tells you that countries that adopt pacifist ideals and refuse to pursue anything that might be dangerous will, at some point, be easily defeated by others who are less moral.
The point is that it's complicated; it's not the black-and-white sound bite that people who are "against nuclear weapons" pretend it is.
And people don't have to feel comfortable with complicated things. The GP posted "would you prefer" as a disingenuous way to invalidate the commenter's parents' feelings.
I eat meat. I know some vegans feel uncomfortable with that. But personally I feel secure in my own convictions that I don't need to run around insinuating vegans are less than or whatever.
With enough anti-military, anti-nuclear, anti-whatever-looks-scary-to-them people, we'd be standing with our pants down, just like the EU or Canada these days. There was a lot of activism of that kind during the Cold War; lucky for the US there weren't enough "discomforted" people back then.
Alignment Failure → Shifting Expectations
- People get used to AI systems making "weird" or harmful choices, rationalizing them as inevitable trade-offs.
- Framing failures as "technical glitches" rather than systemic issues makes them seem normal.

Runaway Optimization → Justifying Unintended Consequences
- AI's extreme efficiency is framed as progress, even if it causes harm.
- Negative outcomes are blamed on "bad inputs" rather than the AI itself.

Bias Amplification → Cultural Reinforcement
- AI bias gets baked into everyday systems (hiring, policing, loans), making discrimination seem "objective."
- "That's just how the system works" thinking replaces scrutiny.

Manipulation & Deception → AI as a Trusted Guide
- People become dependent on AI suggestions without questioning them.
- AI-generated narratives shape public opinion, making manipulation invisible.

Security Vulnerabilities → Expectation of Insecurity
- Constant cyberattacks and AI hacks become "normal," like data breaches today.
- People feel powerless to push back, accepting insecurity as a fact of life.

Autonomous Warfare → AI as an Inevitable Combatant
- AI-driven warfare is seen as more "efficient" and "precise," making human involvement seem outdated.
- Ethical debates fade as AI soldiers become routine.

Loss of Human Oversight → AI as Authority
- AI decision-making becomes so complex that people stop questioning it.
- "The AI knows best" becomes a cultural default.

Economic Disruption → UBI & Gig Economy Normalization
- Mass job displacement is met with new economic models (UBI, gig work, AI-driven welfare), making it feel inevitable.
- People adjust to a world where traditional employment is rare.

Deepfakes & Misinformation → Truth Becomes Fluid
- Reality becomes subjective as deepfakes blur the line between real and fake.
- People rely on AI to "verify" truth, giving AI control over perception.

Power Concentration → AI as a Ruling Class
- AI governance is framed as more rational than human leadership.
- Dissent is dismissed as "anti-progress," consolidating control under AI-driven elites.
"Failure to Upskill"
- AI advocates argue that those who lose jobs simply failed to "upskill" in time.
- The burden is placed on workers to constantly retrain, even if AI advancement outpaces human ability to keep up.
- Companies and governments say, "The opportunities are there; people just aren't taking them."

"Work Ethic Problem"
- The unemployed are labeled as lazy or unwilling to compete with AI.
- Hustle culture promotes side gigs and AI-powered freelancing as the "new normal."
- Welfare programs are reduced because "if AI can generate income, why can't you?"

"Personal Responsibility for Economic Struggles"
- The unemployed are blamed for not investing in AI tools early.
- The success of AI-powered entrepreneurs is highlighted to imply that struggling workers "chose" not to adapt.
- People are told they should have saved more or planned for disruption, even though AI advancements were unpredictable.

"It's a Meritocracy"
- AI-driven success stories (few and exceptional) are amplified to suggest anyone could thrive.
- Struggling workers are seen as having made poor choices rather than being victims of automation.
- The idea of a "deserving poor" is reinforced: those who struggle are framed as not working hard enough.

"Blame the Boomers / Millennials / Gen Z"
- Economic shifts are framed as generational failures rather than AI-driven.
- Older workers are told they refused to adapt, while younger ones are blamed for entitlement or lack of work ethic.
- Culture wars distract from AI's role in job losses.

"AI is a Tool, Not the Problem"
- AI is framed as neutral; any negative consequences are blamed on how people use it.
- "AI doesn't take jobs; people mismanage it."
- Job losses are blamed on bad government policies, corporate greed, or individual failure rather than automation itself.

"The AI Economy Is Full of Opportunity"
- Gig work and AI-driven side hustles are framed as liberating, even if they offer no stability.
- Traditional employment is portrayed as outdated, making complaints about job loss seem like resistance to progress.
- Those struggling are told to "embrace the new economy" rather than question its fairness.
You can only do so much with agitprop. At the end of the day, if, say, 60% of the population has no income without a job and no hope of getting one, they are not going to quietly starve to death, no matter the justification offered for it.
Historically, humanity advanced faster when it was interacting. Groups can try to isolate themselves, but in the long run that will make them lag behind.
The US benefited a lot from smart people moving there (even more during WWII). If those people start believing (correctly or incorrectly) that they would be better off somewhere else, the US loses that advantage.
Thing is, if there are too many of "them", they will eventually come for "us" with torches and pitchforks. You can victimize a large part of the population like that, but not a supermajority of it.
Let's talk again after AI causes massive unemployment and social upheaval for a few decades, until we find some new societal model that makes things work.
This is inevitable in my view.
AI will replace a lot of white-collar jobs relatively soon, within years or decades.
And blue-collar work isn't far behind, since a major limiting factor for automation is general-purpose robots being able to act in a dynamic environment, for which we need "world models".
Relative to them, we most certainly are. By every objective metric, humanity has flourished in "the last generations." I get it that people are stressed today -- people have always been stressed. It is, in a sense, fundamental to the human condition.
Easy for you to say that. The political party running this country ran on a platform of the eradication of me and my friends. I can't legally/safely use public restrooms in several states, including some which have paid bounties for reporting. Things will continue to improve for the wealthy and powerful, but in a lot of ways have become worse for the poor and vulnerable.
When I was a kid, there was this grand utopian ideal for the internet. Now it's fragmented, locked in walled gardens where people are psychologically abused for advertising dollars. AI could be a force for good, but Google has already ended its ban on use in weapons and is selling it to the IAF, and Palantir is busy finding ways to use it for surveillance.
Eradication of an ideology is not the same as eradication of people. It's also a stretch to say Michael Knowles, a famous shock-jock, speaks for the Republican party.
> Eradication of an ideology is not the same as eradication of people.
We have as much (if not more) documented historical evidence of gender non-conformance as we do homosexual behavior. To me, "eradicating transgenderism" is a threat no different than if someone were to endorse "eradicating homosexuality".
Back in the 1980s, both homosexuality and gender non-conformance were considered "ideology", and likely thousands of unnecessary deaths occurred during the AIDS crisis because of the federal government's efforts to encourage stigma, keep people in the closet, and a complete failure to treat AIDS as a genuine healthcare crisis. What we're going through now may not be as dramatic as the 80s AIDS crisis, but there are clear comparisons, and people in my community will suffer and die because of lack of access to medical treatment.
There are/were maybe a dozen trans athletes in college sports. No one is performing surgeries (or causing irreparable damage) to minors. Personally, I don't care what people call me as long as they're respectful. I want to live my life without the government trying to control my personal choices, or ban life-saving treatment that is endorsed by multiple major medical institutions.
> It's also a stretch to say Michael Knowles, a famous shock-jock, speaks for the Republican party.
Fair, but the fact that no one denounced it after he said it on stage at CPAC is a tacit endorsement.
Evidence of gender non-conformance? What does that even mean? Evidence of men who like to dress like women? Men can dress like women if they like (much as they are welcome to sleep with other men). The issue is that act does not actually make them women. That's the ideology ("Trans women are women."). They're men dressed like women. They are not, and cannot be, women. A woman is an adult female human. An effeminate man is not a woman.
There is a very, very small fraction of humanity that suffers sex organ deviations. Those few cases can make sex classification more difficult at birth (though they are almost always either XX or XY), but those few cases do not provide cover for men who dress like women participating in women's sports, using women's bathrooms, or other female privileges. With the exception of a small group of activists, all of America agrees with this, including many prominent trans-females -- this position IS the right side of history.
And you're wrong about the growing number of trans athletes at all levels of sport. And you're most certainly wrong about surgeries on minors. You'd have to be living in a cave to believe otherwise.
Saying their identity is "ideology" is part of the problem. There's plenty of violent movements that can be framed as just "eradicating ideology", when in reality that is just a culture, condition, religion, or trait that you don't understand or accept.
"Sex" for you is determined by genitals and chromosomes, right? Can you show me any instance where a transgender man believes he has a natural penis or XY chromosomes?
Uhuh. Let me guess, you're a heterosexual white male?
The Republicans have been very explicit about making my existence a crime since the 1980s. These are the despicable people who made jokes about my friends dying of AIDS, and who now want to make just mentioning my marriage 'sexualized content' and therefore prosecutable. Oh, and by the way, they want to eradicate my marriage, which had to be repeated because it was rescinded by a court decision affecting me and 3,997 other couples.
I want to be very clear, so let me say this: you are wrong, and have no idea what it actually means to be on the receiving end of discrimination.
> living in the next few decades driven by technology acceleration will feel like being lobotomized while conscious and watching oneself the whole time
Unfalsifiable pseudophilosophy shouldn't be mistaken for science or legislative advice. I don't care what your cult thinks; religion and government should stay separate.
I think that international competition is one of the greatest guarantees that trying to stand athwart history and yelling stop never works in the long term.
The most likely catastrophe remains giving capital outsized influence on our society. It's the easiest to imagine, while the idea of a capitalist building a money-making machine that can actually think for itself and wield actual power feels very difficult to imagine. (Granted, maybe Musk himself really is that dumb. Inshallah, I guess.) Humans are easy to manipulate, and most can simply be bought with enough money. The last thing the super wealthy want is to rely on software that has individual agency outside the will of its owner.

Meanwhile, the sort of destruction this will cause is already happening around us: a highly financially insecure populace, supply chain instability, climate change, automated bombings of "terrorists", "smart" fences to keep out criminals (let's just ignore the fact that you're more likely to be murdered by a fellow citizen), the reduction of journalism to either propaganda or atomized hand-wringing about mental health and individual agency, and a kafkaesque system of algorithmically priced rents in every sector of life. Is the algorithm computing "a reasonable value to both the consumer and producer"? No, it's computing "how much blood can I squeeze from this peasant". Hell, Kroger is already playing around with dynamic pricing via facial recognition at checkout.
I always thought Skynet was a great metaphor for the market: a violent, inhuman thing we created that dominates our lives, dictates the terms of our day-to-day existence, somehow thinks for itself and is out of popular control, and threatens the very future of this planet, our species, and our loved ones. Not actual commentary on a realistic scenario about the dangers of AI. Sometimes these metaphors work out great, and Terminator is a great example. Maybe the AI we've been fearing is already here.
I think for the most part the enshittification of everything will just accelerate and it'll be pretty obvious who benefits and who doesn't.
>The most likely catastrophe remains giving capital outsized influence on our society.
No, in this regard capital is ABSOLUTELY harmless. I mean, if capital gets outsized influence on our society, in the WORST case it will turn into a government. And we already have one.
I'm sorry, but when has it ever been the case that you can just say "no" to the world developing a new technology? You might as well say we can prevent climate change by just saying no to the outcome!
We no longer use asbestos as a flame retardant in houses.
We no longer use chemicals harmful to the ozone layer on spray cans.
We no longer use lead in gasoline.
We figured those things were bad, and changed what we did. If evidence is available ahead of time that something is harmful, it shouldn't be controversial to avoid widespread adoption.
None of those things were said "no" to before they had already come into widespread use.
The closest might be nuclear power: we know we can do it, we did it, but lots of places have said no to it, and further development has vastly slowed down.
In none of those cases did we know about the adverse effects up front. They were observed afterwards, and it would have taken longer to observe them if the technologies hadn't been adopted. But that doesn't invalidate the idea that we have followed "if something is bad, collectively stop using it" at various points in time.
We were well aware of the adverse effects of tetraethyl lead before leaded gasoline was first sold.
The man who invented it got lead poisoning during its development, multiple people died of lead poisoning in a pilot plant manufacturing it, and public health and medical authorities warned against it before it went on sale to the general public.
I don't think it's safe to assume that the use patterns of tangible things extend to intangible ones, nor that the patterns of goods extend to those of services. I just see this as a conclusory leap.
All those things you listed above still exist in China, e.g. I searched for asbestos-based flame retardant on taobao.com: $1.50 per sqm with postage included.
You'd have to be totally naive to believe that materials shipped to the US are all checked to make sure they're asbestos-free. You're provided with a report saying it's asbestos-free, and that's it.
This happens in many ways with potentially catastrophic tech. There are many formal agreements and strong norms against building ever more lethal nuclear arsenals or conducting existentially dangerous gain-of-function research. The current system is far from perfect; the world could literally be destroyed today by the actions of a handful of people. But it's the best we have come up with so far.
If we as a society keep developing potential existential threats to ourselves without mitigating them then we are destined for disaster eventually.
John C. Lilly had a concept he called the "bad program": an internal, natural, subconscious antithetical force that lives in us all. It seduces or lures the individual into harming themselves one way or another; in his case it "tricked" him into taking a vitamin injection improperly, leading to a stroke, even though he knew how to administer the shot expertly.
At some level, there's a disaster-seeking function inside us all acting as an evolutionary propellant.
You might make an argument that "AI" is an evolutionary embodiment of our conscious minds that's designed to escape these more subconscious trappings.
More efficient hardware mappings will happen, and as a sibling comment says, power requirements will drop like a rock. Check out https://www.youtube.com/watch?v=7hz4cs-hGew for some idea of what that might eventually look like.
> In building domestic AI infrastructure, our Nation will also advance its leadership in the clean energy technologies needed to power the future economy, including geothermal, solar, wind, and nuclear energy; foster a vibrant, competitive, and open technology ecosystem in the United States, in which small companies can compete alongside large ones; maintain low consumer electricity prices; and help ensure that the development of AI infrastructure benefits the workers building it and communities near it.