Well, the Great Financial Crisis did not result in mass starvation, so in some sense it "wasn't all that dramatic". But it was a big deal compared to a normal downturn. So, it depends on what you mean.
Medium term, I think it would release a lot of resources (skilled workers, productive capacity, energy) to use on something more productive. But then, I kind of hoped for that after the GFC, also...
In time, every organic system has to develop an immune system. Immune systems do, sometimes, misfire (allergies, auto-immune disorders). But, eventually, it becomes impossible to do without them.
Many years ago, as a grad student in Electrical Engineering, I got asked to help judge at a high school science fair. It was fairly disillusioning. The best presentations were pretty obviously done with a lot of parental "help", or otherwise were presenting an experiment designed by adults (this was clear from questioning). It was more like competitive science homework than a bunch of science experiments.
I'm a government statistician, and a private researcher I'd worked with asked me to give a talk at a STEM charter school about to start their science fair. She asked me to focus on the reports and data tools the state publishes, so I used them as the "middle step" in some hypothetical science projects (e.g., which has a bigger effect on the rate of heart disease deaths, race or wealth?). I explained that these data couldn't replace a controlled experiment, but they were invaluable for the most important part of the scientific process: genuine attempts to disprove your own idea.
I felt good about the presentation, and then the Q&A started, with the researcher (who was smiling the whole time) joining in more and more. I quickly understood the kids didn't plan, and weren't being encouraged, to do anything like the scientific process. They wanted to pull some of the data from our tools, draw a few charts, add a little commentary, and smack it on poster board. I even attended the science fair event, and saw too many exhibits with screenshots of our website and what amounted to status reports. Reports that could be automated.
Yeah, it was super obvious who got help from parents and who did everything on their own. I did FIRST Robotics in high school, and that was another place where it was obvious which teams got a lot of help from their parents and which were entirely run by the students. We got some help from sponsors for stuff like welding aluminum frames, but it was entirely our design. I remember the robot from Cocoa Beach High School looked like a NASA-designed rover (which it probably was).
There should be an adult category. Let's get some genuine citizen science going. Let the kids see that science doesn't need to be confined to corporate or university labs.
1) Agreed: by the look of everyone’s comments, we need to rephrase some things in the onboarding. Apple review made us change it to be as explicit as possible.
2) yes this is a great idea!! ‘You’re the first in the world to discover this’!! Thank you for that!!
"This is what we have to look forward to? Journals filling up with this bilge, this useless wordy debris? Gosh, that's going to work out great with all those machine-learning algorithms for collating human knowledge - just wait until we pour these buckets of sludge into them."
He could have pointed out that LLMs are also the only readers for these articles.
Well, it would depend greatly on which real estate you mean. There are empty cities in China which testify that real estate can, in fact, be very risky, but most real estate is not like that.
Lots of other places where it is certainly not safe, but in the end you can at least use it as a place to live (unlike the Chinese ghost cities), whereas Bitcoin does not have any intrinsic use (even T-bills can be used to pay debts to the government, such as taxes).
As others have pointed out, it also runs up less in percentage terms during the resurgence. If, as you posit, Bitcoin is becoming less volatile, then the question will be what the demand looks like when it is no longer a way of getting exposure to potential large upside. In other words, a lot of people have bought it in the hope that it will double or more. How many will buy it in the hope that it doesn't go down? Perhaps it will happen, but it's not obvious.
In the limit, that would mean that Bitcoin's volatility reflects only the savings rate of people.
That would mean that Bitcoin is pure monetary value. In that case, it would suck out all the monetary premium from other assets like real estate, equities and gold. The monetary premium in those is probably a few hundred trillion. So by that time, Bitcoin's price should be 2 orders of magnitude higher than today.
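To put rough numbers on that claim (the "few hundred trillion" is from the comment; Bitcoin's current market cap of roughly $2tn is my assumption, not the comment's):

```python
# Back-of-envelope check of the "2 orders of magnitude" claim.
# Assumptions: monetary premium of other assets ~= $300tn
# ("a few hundred trillion"); Bitcoin market cap ~= $2tn (my guess).
monetary_premium = 300e12
btc_market_cap = 2e12

multiple = monetary_premium / btc_market_cap
print(multiple)  # 150.0, i.e. roughly 2 orders of magnitude
```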
"JPMorgan calculated last fall that the tech industry must collect an extra $650 billion in revenue every year — three times the annual revenue of AI chip giant Nvidia — to earn a reasonable investment return. That marker is probably even higher now because AI spending has increased."
That pretty much tells you how this will end, right there.
Nvidia invests $100bn in OpenAI, who buy $100bn of Nvidia chips, who invest the $100bn revenue in OpenAI, who buy $100bn in Nvidia chips, and round it goes. That's an easy $600bn increase in tech industry revenue right there.
am i wrong to think that this sketch isn't at all applicable (besides the surface level joke of money being passed around) to the ai bubble or ""modern banking"" as mentioned in the youtube comments? i keep seeing it referenced like it is an explanation of some crazy conspiracy thing, so i don't know if i just don't get it.
GDP measures the exchange of money with the assumption that this is a proxy for actual economic output... except in cases like this, where the monetary exchanges accomplish nothing.
i assume the joke is that the three stooges are too stupid to realize that because they all owe each other $20 there is no debt, and they show this by accident by passing a $10 dollar bill around until it ends up back with larry. it says nothing about GDP.
Yes, but the payments were under the 1099 reporting limits, so while they both owe taxes on them, neither was required to report them to the IRS... assuming this is the one and only time they paid each other for services rendered.
The replies explain all there is to explain in that example. If each economist thinks that eating shit is worth $100 then, well, that's what it's worth.
It is fascinating that someone can tell an obvious joke with an obvious point, where the characters themselves spell out what’s wrong, and yet we can be certain someone will genuinely believe and defend that “no no, actually eating a random pile of shit you found on the floor makes sense and is worth it”.
Has it occurred to you, especially since one of the economists in the joke admits they feel they ate shit for nothing, that they actually do not feel the exercise was worth it? Have you never spent money on something, thinking it would be worth it, then afterwards realised it was a waste of money? Have you also never taken a job and then realised “I didn’t charge enough for the trouble”?
I’m reminded of a bit of news I heard a while back, where one teenager challenged a friend to eat rat shit they found on the street. The eater died shortly after, because the poop contained rat poison. I doubt any of them found it worth it.
It's not an obvious joke. It seems closer to a puzzle in that the reader must discover that the $100 was for entertainment. This is a common class of puzzle where money changes hands between two people and results in a surprising conclusion.
A better gist may be that the value of entertainment is temporary.
Suppose instead it was just one person, and they went to a movie theater. If you ignore the entertainment value, it may look like the person just threw away the admission cost.
It's a political joke that uses a rhetorical sledgehammer to make it impossible to defend a particular principle. Is it so surprising that someone will still defend the principle?
More like not everyone agrees with the point of the joke. They didn't eat shit for nothing, the watching economist paid for the entertainment, why else would they have offered to pay to watch them if not for the entertainment? It's really no different to getting offered money to do a dare. The fact that they felt bad about it later is irrelevant, when the money was initially offered they both felt that they did get value from the act.
It is equivalent under scrutiny, but casually looking at the books, Nvidia making a sale to Nvidia sticks out like a sore thumb a lot more than Nvidia making a sale to OpenAI. The latter is much more likely to pass as revenue.
Total US GDP is ~31 trillion, so that's only like 5%. I think it's conceivable that AI could result in ~5% of GDP in additional revenue. Not saying it's guaranteed, but it's hardly an implausible figure. And of course it's even less considering global GDP.
Yup. If you follow the links to the original JP Morgan quote, it's not crazy:
> Big picture, to drive a 10% return on our modeled AI investments through 2030 would require ~$650 billion of annual revenue into perpetuity, which is an astonishingly large number. But for context, that equates to 58bp of global GDP, or $34.72/month from every current iPhone user...
But think about it this way: something simple like Slack charges $9/month/person, and companies already pay that on behalf of many employees. How hard would it be to imagine all those same companies (and lots more) paying $30/month/employee for something something AI? Generating an extra $360 per year in value, per employee, isn't that much extra.
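For what it's worth, the per-user figure in the JPMorgan quote is easy to reproduce. The ~1.56bn iPhone-user count appears elsewhere in the thread; the ~$112tn global GDP is back-solved from the 58bp figure, so treat both as assumptions:

```python
# Reproducing the JPMorgan arithmetic quoted above.
annual_revenue = 650e9      # required $650bn/year
iphone_users = 1.56e9       # approximate active iPhone users (assumption)
global_gdp = 112e12         # rough global GDP in dollars (assumption)

per_user_per_month = annual_revenue / iphone_users / 12
share_in_bp = annual_revenue / global_gdp * 10_000  # basis points

print(round(per_user_per_month, 2))  # 34.72
print(round(share_in_bp))            # 58
```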
$35/head is possible but it has to provide tangible value to the user (beyond coding) which many pro-AI people will fail to recognize. People pay a lot for other stuff (ie: like their phone plan). Being digital or physical is not the issue here but the value perceived by the user.
The world IS responsible for handling the people. That's the whole fucking reason we made society: to take care of children. Nothing is inevitable; claiming it is serves the interests of the few.
I think they meant “society.” Society does, in fact, owe the people something, especially if we, the people, are expected to live by the rules, social norms, and expectations imposed by society.
Parent was talking about children (npi) — they don’t get out of society what they put into it. Society owes them care for bringing them into it, and if society defaults on this debt then society ends.
What you're describing is a low-trust society. If you disregard the social contract like that, then people won't owe "the world" anything either. Collaboration and civics go out the window. If you want to see what kind of a shithole that libertarian nonsense leads to, try taking a stroll in SF at night.
This is an important framing: we talk so much of "rights", but if you have a right to something, that means someone, or some group of someones, has a duty to provide it.
No, no it does not. If we say everyone has a right to clean air and water, no one else has a duty to provide it. Those are given to us for free by the planet. The issue is that rich assholes (and poor assholes who only think of getting rich) take that away from everyone else by polluting what is common to everyone.
> I don't consider that to be saying that society "owes" me something. I regard it mutually beneficial, not some kind of debt/debtor relationship.
You know, in phrases like "you owe it to your spouse/sibling/friend/self to...", people aren't talking about formal debt. Please try to keep that kind of meaning in mind when people say that society owes its people.
Humans collectively are responsible for the end results of innovations and achievements; otherwise, who are you doing all this for? Wars are an extreme form of disagreement amongst a large body of opposing opinions or perspectives, IMHO. Earth (the world!) simply exists, with or without you. As a by-product of this planet, you have an obligation to do good deeds for it. Have you not watched Star Wars?
> * it’s a safe bet that labor will have lower value in 2031 than it has today
If AI makes workers more productive, labor will have higher value than it has today. Which specific workers are winning in that scenario may vary tremendously, of course, but I don't think anyone is seriously claiming AI will make everyone less productive.
The value of labor, i.e. wages, depends on labor demand (the marginal product of labor) and bargaining power, not on output per worker. If AI is a substitute for many tasks, the marginal value of an additional worker, and what a company is willing to pay for their work, can fall even if each remaining worker is more productive.
What you're forecasting is a scenario where total output has substantially increased but no one's hiring or able to start their own business. Instant massive recession is by no means a "sure bet" with technological improvements, especially those that make more kinds of work possible than before.
I'm not forecasting that, and it's a virtual strawman in the face of my much narrower claim: that wages depend on marginal labor demand and bargaining power, not average output per worker. If AI substitutes for labor, the marginal value of adding another worker in many roles can fall. That can mean fewer hires or lower wages in some categories, not 'no hiring' or an instant massive recession. I have no idea what the addressable market or demand for our more productive economy is, but for the record I do hope it's high to support new businesses and a bigger pie in general!
> My statement reflects that increased productivity means that fewer people are required to generate the same amount of economic output.
People have been singing that since the industrial revolution started.
What makes you think it's different this time? In past episodes, increased productivity meant fewer people doing what a machine could suddenly do, but never fewer people employed overall or a smaller economy.
You can argue that our populations are older than ever before. There aren't enough kids, and consumers are saturated with consumption opportunities.
That combination has maybe never happened before during the industrial era. But it's orthogonal to AI.
Most people in the economy do not use Slack. That tool may be most beneficial to exactly the people who stand to lose jobs to AI displacement. Maybe after everyone is pink-slipped in favor of an LLM or AI chatbot, the total cost to the employer is reduced enough that they are willing to spend part of the money saved by eliminating warm bodies on AI tools, and willing to pay a higher per-employee price.
I think with a smaller employee pool though it is unlikely that it all evens out without the AI providers holding the users hostage for quarterly profits' sake.
That AI will have to be significantly preferable to the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.
> the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.
It’s not a challenge at all.
To win, all you need is to starve your competitors of RAM.
RAM is the lifeblood of AI, without RAM, AI doesn’t work.
HBF is NAND and integrated in-package like HBM. 3D XPoint or Optane would be extremely valuable today as part of the overall system architecture, but they were power-intensive enough that this particular use probably wouldn't be feasible.
(Though maybe it ends up being better if you're doing lots of random tiny 4k reads. It's hard to tell because the technology is discontinued as GP said, whereas NAND has kept progressing.)
They will pay it but lay off the number of employees needed to balance it out, and just expect the remaining ones to make up for it with their new AI subscriptions.
This is true, though I think even if the employer provides all this on a per-employee basis, the number of eligible employees, once everyone who stands to lose a job to AI tools is gone, will be low enough that each remaining employee will need to add a lot of value for this to be worth it to an employer. So the stated number is probably way too low. Ordinary people may just migrate from Apple products to something more affordable or, in the extreme case, walk away from the whole surveillance economy. Those people would not buy into any of this.
This is true but unfortunately for Apple I don't buy anything from the app store except for a minimal iCloud subscription for temporary photo storage. I am in the process of unwinding that subscription in favor of local storage and periodic sync. I haven't been diligent about syncing things in the past so I did buy a subscription for photo storage to avoid losing photos. I know that lots of people buy apps for all kinds of things. I'm not one of those people though.
That's far larger than the population of the USA (unclear to me if that 650bb number is global or USA only) but by sheer scale this is assuming that these companies can collect that fee from a global customer base - including users in developing economies, EU, China, etc. and after the middleman fees are accounted for.
The comments in this thread seem to be thinking within the context of 'the poorest in their nation'. This calculation assumes collecting this fee from among 'the poorest in the world'.
Sure, 1.56bb users could also be interpreted as 'the wealthiest 20% of the world'. But the tail is especially long on this curve given how wealth is concentrated in a small percentage of the global population (1% of users have 50% of wealth).
Microsoft, Google, Apple, Amazon, Nvidia, etc have been able to collect large amounts of revenue from a global customer base so I don't think the assumption was that unreasonable.
Obviously, China will protect its homegrown AI industry. Current geopolitics trending towards US decoupling in Europe might slow it. But under the old status quo, US AI would have been rapidly adopted in the EU (and it still might. It depends greatly on how much of the Trump Doctrine outlasts the current administration).
Developing countries eventually adopt new technologies. First they adopted personal computers and became customers of Microsoft, then they adopted the Internet and became customers of Google, then they adopted smartphones and became customers of Apple. Eventually they will adopt AI and become customers of someone. The question is whether it will be US tech or Chinese tech.
Personally I would be astonished if LLMs percolating through the global economy don't give a 50bp bump from here on out.
Even if scaling hit a wall, commoditizing what we have now would do it. We have so much scaffolding and organizational overhang with the current models, it’s crazy.
Agreed. Applying the intelligence we already have more broadly will have a huge impact. That's been true for a while now, and it keeps getting more true as models keep getting better.
It's conceivable to us working in white collar knowledge jobs where our input and output is language. Will it also make 5% more homes built by a carpenter?
It might provide cover to lay off more than 5% of us (the LLM can create a work-like text product that, as far as upper management can tell, is indistinguishable from the real thing!), then we will have to go find jobs swinging hammers to build houses. Well, somebody’s got to do it.
The idea that companies need "cover" to perform layoffs (particularly in the US) doesn't make sense to me. Tech companies, all companies lay people off regularly. (To a first order approximation) if a worker is a net positive to a company then the company will want to keep them, and if they are not then the company will want to get rid of them. AI or no AI.
I’ve seen many essential people being laid off for stupid reasons, the gp reason above being part of the story for some. Finance runs the world not tech. Tech is only welcome when it helps finance else it is marginalized.
Seems like the cover might be for investors. If a company is shrinking but you don't want investors to know it's shrinking, you can say you're improving productivity with AI.
That seems pretty reasonable, yes. That is like asking if putting a low-cost Ops Research specialist in every company could make a 5% difference in operations - yes it could. Making resource-efficient decisions is not something that comes naturally to humans and having a system that consistently makes high quality game-theoretic recommendations would be huge.
A bunch of tiny companies would love to hire a mathematician to optimise what they are doing and get a 5-10% improvement. Unfortunately, a 5-10% improvement in a small business can't justify the cost of hiring another person, and good mathematicians with business sense and empathy are a rare commodity.
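As a toy illustration of the kind of win an OR hire brings: a hypothetical product-mix problem, small enough to brute-force in plain Python (all the numbers here are made up for the example):

```python
# Hypothetical: a tiny bakery with 12 labour-hours/day chooses between
# bread (profit 40, 3 hours/batch) and cake (profit 30, 2 hours/batch).
# At this scale, exhaustive search stands in for a real LP solver.
best = max(
    (40 * bread + 30 * cake, bread, cake)
    for bread in range(5)       # at most 4 batches of bread fit in 12h
    for cake in range(7)        # at most 6 batches of cake fit in 12h
    if 3 * bread + 2 * cake <= 12
)
profit, bread, cake = best
print(profit, bread, cake)  # 180 0 6: all cake beats the "obvious" bread mix
```

The point is only that the intuitive answer (make the higher-profit item) is beaten by the constraint-aware one, which is exactly the kind of decision humans get wrong.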
For lots of jobs (daycare, teaching, cleaning), material costs are near zero and your ability to increase productivity using technology is very low.
You can reduce quality of cleaning. But it's very hard to clean faster and better at the same time.
These industries are not going to be optimized by an AI. The only optimization is lower overhead or lower salaries.
Sure, we could have robots in daycare, but I don't think lack of AI is why my wife would have concerns :)
Of course there are jobs that don't get a productivity boost from AI. The question is whether, across the entire economy, there will be a 5% GDP boost.
Teachers, cleaners, and daycare workers may see 0% gains, but don't be surprised if that is made up for by 10% gains in the productivity of tech, law, marketing, advertising, manufacturing, government, etc. (okay, maybe not government).
How can advertising and marketing become more profitable from this? It's a genuine question, but I don't see how making advertising and marketing easier for everybody and hence flooding the already flooded market would result in increased productivity.
By significantly reducing the cost of creating the advertisements. Want to air a commercial? You no longer have to have actors, sets, designers, costumes, etc. just ask AI to make you a commercial and describe what you want it to look like.
Consider all the labor and capital spent across all the advertising real estate in the world. Commercial, online ads, billboards, labeling. The inputs to make all these things are now greatly reduced. To increase productivity, it doesn't matter that the market is flooded, just that it's much easier to make these things.
If that seems reasonable to you, then you don't know anything about residential construction. The problems that homebuilders face aren't amenable to mathematical solutions. They have to deal with permitting issues, corrupt / incompetent government officials, supplier delays, bad weather, flaky workers, etc. The notion of a 5% improvement from LLMs is ludicrously naive.
The first 2 are very LLM amenable, the last 3 are very mathematical-solution amenable (optimising around issues like that is basically what Ops Research does). I don't see what your argument is here.
The list of people claiming that maths won't work who then get bulldozed by mathematicians is long.
Because they make it much easier to audit what decisions are being made and how reasonable they were. Corruption relies on not being too well known - once people can start pointing to specific decisions rather than a general "we know there is corruption here somewhere" it is hard to sustain.
It's not like people don't know who they are, though. It's not some secret formula of who is corrupt: it's everyone that's been in a position for any length of time. If you don't yield to the corruption, you won't be in your job long. The degree of corruption is variable, and perhaps the LLM could find the most efficient wheel to grease and person to lean on, but then you just have the next company doing more of the same.
Given how much of the spending is hard goods and simply not AI-able (rent, most of housing new construction, most of other goods, most health care, much of other services), the replacement theory would require a massive displacement.
It cannot be sustained with just one-time growth. Capital always has to grow, or it will decrease. If this bubble actually manages to deliver interest, this will lead to the bubble growing even larger, driving even more interest.
The chart you listed is for the years before the CCP won the civil war in 1949. But agreed that many of the problems overcome were also problems that were created after the war.
Japan controlled much more of China than the communists did before 1945. And having half your country occupied is bad for GDP. You made a mistake and believed some propaganda here.
Chinese GDP was higher during WWII than over the next several years; the actual minimum, 1959 to 1961, was well into communist rule. Literally, CCP rule was worse than the anarchy of civil war; it's right up there with the insanity of Pol Pot.
This is such a historically stupid claim that it's not-even-wrong tier.
There was no GDP data under the KMT; it wasn't even formally calculated.
The CCP started GDP calculations, but using the Soviet MPS accounting system, which basically omitted services and lowballed production prices.
The only pseudo-normalized GDP data we have comes from estimates like the Maddison Project, and even they don't bother to reconstruct China/KMT data during WW2. The TL;DR: the prewar peak, 1939 (right before the JP invasion), was around $288B; the PRC took over in 1949; GDP was $245B in 1950 and grew to $306B by 1952. The GLF tanked GDP from $460B to $350B, i.e. the worst-case GLF floor was still ~40% larger than 1950.
Edit: note the wiki data links to Our World in Data, which pulls from Maddison; in table form, KMT/WW2 data is not available, so it just pulls from the closest data points (1938/1950) and naively extrapolates per capita, because KMT data doesn't exist.
GDP isn't just some arbitrary abstraction; it's the amount of goods and services produced by an economy.
At the low end of economic output, starvation or the lack thereof is a strong indication of GDP. You do need to adjust for exports and imports, but you don't need a particularly deep insight into the economy beyond that.
Of course GDP is an arbitrary abstraction; it's literally derived from arbitrary systems of measurement. That's why the Soviets had the MPS system and the West had the SNA, and each got to decide what to value and how much, arbitrarily. And even when they calculated it, a lot of it was guesstimate, because no one has perfect or even good data, especially 80 years ago in developing countries.
> starvation or the lack thereof is a strong indication of GDP
No, that's just an indicator that some cohort starved due to distribution failure. And to be blunt... that cohort was rural peasants doing mostly subsistence-agriculture-tier production that doesn't count much towards GDP. An urban worker in industry can generate 10x the GDP surplus of farmers in a commune.
Hence starvation (mostly rural) has disproportionately little GDP weight versus urban worker productivity. An economy losing millions of peasants while still modernizing/industrializing can easily maintain higher total GDP than a peaceful agrarian society. I.e., the CCP speedrunning the first five-year plan post-WW2 raised the GDP floor so much that they could lose tens of millions of peasants and still have higher GDP than before or during the war, which was incidentally also not a peaceful agrarian society but an even messier interregnum shitshow with significantly worse state capacity than the relatively unified postwar PRC. The Republican-era KMT (during anarchy/civil war) simply couldn't organize fragmented China to be as productive as the PRC under the CCP, which could lose millions of peasants, whose marginal productivity of labour was near zero, and still do massively better in GDP/economic terms.
Between 1954 and 1959, China supplied 160,000 tons of tungsten ore, 110,000 tons of copper, 30,000 tons of antimony, and 90,000 tons of rubber to the Soviet Union. That's how they repaid a loan: not through industrial production, because their economy wasn't producing significant high-value output from raw materials; they couldn't even smelt ore efficiently.
Re-education camps don't generate value. They didn't have a surplus of urban workers; instead, Mao just destroyed the economy. Killing off the educated, doctors, etc. isn't a free action; it has negative consequences.
China literally had net migration out of cities, so no, this wasn't over-investment in industry or a distribution issue; this was just abject failure and total economic collapse. Total anarchy would have been better for the economy than Mao.
Both the Soviet and Chinese first few five year plans accomplished the following:
1. Mass starvation at a few points due to central planning errors
2. Horrifying purges and paranoia that cannot be excused as "errors"
3. Achieving mass literacy and a partially industrial economy in a single generation, from a medieval starting point.
Most good Americans who paid attention in civics class learned 1 and 2 very well without truly appreciating 3.
You have to understand that they were coming from a peasant economy where almost nobody could even read. It's an accomplishment despite Mao's shortcomings and awful deeds. And look at the scoreboard today: highest GDP by purchasing power parity in the world, Xiaomi cars nicer than Teslas, the only major non-American tech industry, high-speed rail, etc. etc.
There's a long list of countries that industrialized more quickly without suffering such internal economic issues. The USSR and China suffered because of poor governance, not industrialization.
Second, mass literacy occurs via teaching kids. It has little to do with the wider economy, as shown by both modern and historic literacy rates.
It's been 65 years since the Chinese famine; what actually fixed the country was economic reforms. Mao's death helped, but the system simply didn't work, so they tried something else.
Not only would total anarchy have been worse for the economy than Mao, you would struggle to find another developmental model that did as well as Mao's, especially versus the only comparably sized peer, India, which objectively did worse on most developmental metrics.
Between 1954 and 1959 the PRC exchanged raw materials for capital goods and Soviet training to speedrun industrialization. I.e., they were turning surplus rocks they couldn't process into machines, converting exports into capital stock. You know, developing. This is econ/history 101.
Mao, even including the GLF, engineered one of the greatest and most condensed human-uplift efforts. The World Bank's summary of CCP progress from postwar to the 70s, i.e. under Mao, noted how the PRC was significantly more industrialized relative to developing peers: roughly 40% of the economy vs. a low-income average of 25%, with matching proxy indicators like 3x the energy consumption per capita of India, 2x the literacy, and 1/3 the infant mortality rate. I.e., Mao speedran the PRC to middle-income industrial levels; the GLF was one step back, five steps forward. State-provided services were also assessed as far more effective at meeting basic needs than in low-income peers: life expectancy of 65 years vs. 50 (India) for low income, "outstandingly high" in the World Bank's words. The WB concluded that CCP efforts by the late 70s, again Mao's doing, left "low-income groups far better off in terms of basic needs than their counterparts in most other poor countries", the "most remarkable achievement during the past three decades".
All the subsequent snowballing under Deng would not have been possible without Mao building a captive, mobile, disciplined rural workforce with substantial industrial experience, re-educating the masses to be fungible workers for a migrant economy.
In retrospect, GLF in fact, close to free action. Post WW2 PRC was so devoid of talent that Mao could depopulate cities and slap doctors around with trivial long term penalty option. Starting proper industrialization, mass mobilizing low end barefoot doctors alone out state capacities GLF/CR missteps and saved more lives than it bled. i.e. even in terms of mortality vs death averted, Mao comes out massively ahead. That +15 years above baseline life expectancy x 1000 billion new births is about ~200m lives worth. This not accounting averted deaths of countries who started similarly but did not poverty / malnutrition alleviate early enough, i.e. India generating GLF deaths every few years over decades. That averted another 200m deaths. Most of this attributed to Mao speedrunning nation building did actually solve famine after GLF via all the infra built. Something that historically every Chinese polity had to worry about.
Any leader who improved HDI for as many people in as short a time as Mao did would have been given a Nobel Economics Prize and a Nobel Peace Prize. Fixating on a spike of deaths at PRC scale is boring libtard innumeracy: it's ~4%, which plenty of leaders have matched or exceeded. Not nice, but completely valid, to treat human resources as a resource and trade them for long-term gains. Mao increased PRC industrial output by something like 30x. From a macroeconomic utilitarian view, the HDI trend line goes up, PRC growth goes brrr, and dead peasants and sad elites simply don't fucking matter; it's a minor shock to overall system capacity, which Mao built up so much and so fast that it raised aggregate Chinese HDI above most peers, even if it also broke a few million eggs.
> Between 1954 and 1959, the PRC exchanged raw materials for capital goods and Soviet training to speedrun industrialization.
This wasn't an exchange of goods; this was a subsidy: loan repayments at extremely generous 1% interest rates. The reliance on raw materials shows just how poor their efforts were despite the aid.
You can try to repaint this as a history of pulling themselves up, but the reality is they had a high literacy rate and, for the time period, a well-functioning economy before the communists took over. Afterwards, 50 million people starved to death. That's not progress; that's horrifying inefficiency writ large.
The CCP still has a hate boner for Taiwan because it shows they are objectively doing a bad job: that fragment of the same country still has a higher standard of living and better technology, despite the massive disadvantage of vastly smaller economies of scale.
Last reply to more ahistoric cope: the exchange of raw materials happened because the postwar PRC had nothing else to barter, you know, because the incumbent KMT fucked it up.
Chinese literacy rates were fucking abysmal pre-CCP; it was an agrarian nation that the CCP uplifted. If you want to cope by repainting history, go accuse the World Bank... in the 80s. By every metric except human lives, the CCP was horrifyingly efficient, precisely because it valued human lives less.
What tech stack does TW have that the PRC doesn't? TSMC is built on a foreign tech stack. Let's not forget the ROC is also the outcome of a US subsidy/finance program. The difference between the PRC and the ROC is that the PRC's sugar daddy was the poor USSR while TW's was the rich US, and the difference in population scale meant the US could inject more per head into a smaller population to bring up development. All while the US and co. were sanctioning the PRC, btw; hence the PRC succeeded where TW has not, and did so on hard mode.
Smaller economies of scale are precisely why TW/ROC is unimpressive: TW should be much richer for how small it is and how lavishly it was rewarded. There's a reason TW literally has to ban Taiwanese from working in PRC high-end industries: PRC tier-1 opportunities have vastly exceeded TW's.
Even in a society of one person, that person would prefer living in a mud hut to standing outside getting rained on. Ignoring imputed rent ignores that value and is therefore objectively wrong.
Did China really do it, though? We can clearly see that China has achieved huge economic growth since Deng Xiaoping took control. But the specific numbers can't be taken at face value: Communist Party officials at every level heavily manipulate the official economic data to meet their annual goals, and no independent auditing is allowed.
By pulling ten million people a year from farms into factories and ploughing 40% of GDP into infrastructure and education. Sounds like a sound analogy to me.
They're for those within the population who are willing to submit themselves to the whim of the state and whose prosperity in some way directly benefits the oligarchs who run the state.
Certainly, as just a few examples, they are not for the well-being of the Uyghur population, or pro-democracy activists, or journalists investigating human rights violations, or supporters of Tibetan independence.
It only seems like a plausible figure because of the sheer scale of human economic activity. That's a common flaw in economic thinking: small percentages of huge numbers sound realistic ("my business plan is modest and will capture a tiny 0.01% of the global market to become one of the biggest companies in the world; very plausible").
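For a sense of scale, here's that arithmetic spelled out (the ~$100 trillion figure for the global economy is my illustrative assumption, not a number from the thread):

```python
# A "tiny" slice of a huge base is still enormous in absolute terms.
GLOBAL_ECONOMY = 100e12  # assumed ~$100 trillion/year, order of magnitude only
share = 0.0001           # the "modest" 0.01% from the hypothetical pitch

revenue = GLOBAL_ECONOMY * share
print(f"0.01% of the global economy = ${revenue / 1e9:.0f} billion/year")
```

A $10B/year business would indeed rank among the world's largest, which is exactly why the "tiny percentage" framing misleads.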
Tech companies never last. Apple will miss a disruptive innovation or make a key strategic error causing them to lose their dominant spot. Look at the top tech companies 50 years ago: how are they doing today?
It's like the transition from monarchies to nation-states.
By the 19th century, the rise of nation-states accelerated due to the spread of nationalism, the decline of feudal structures, and the unification of countries like Germany (1871) and Italy (1861). Centralized governments, uniform laws, national education systems, and a sense of collective identity became defining features. The French Revolution (1789) played a pivotal role by promoting citizenship, legal equality, and national sovereignty over dynastic rule
Maybe in 2300 they'll say something similar about nationalism
I'm sorry, but 5% of GDP is an absurd figure. You're saying $1 out of every $20 that moves through our economy should be spent on AI? That seems insane to me.
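A quick magnitude check (US GDP of roughly $29 trillion is my assumption for illustration, not a figure from the thread):

```python
US_GDP = 29e12   # assumed ~$29 trillion/year, rough recent figure
ai_share = 0.05  # the 5% figure being debated

ai_spend = US_GDP * ai_share
print(f"5% of GDP = ${ai_spend / 1e12:.2f} trillion/year")
# On the order of $1.5 trillion per year, more than the entire US defense budget.
```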
The tech industry going through a boom and settling back down at a higher place than before isn't the end of the world. They all start merging together soon.
I'm more afraid of AI actually delivering what CEOs are touting. People who are now working will become unemployable and will have to pivot to something else, overcrowding those other sectors and driving wages down.
If that comes to pass, you will work the same or more for less money than now.
Basically a jump back to a true plutocracy, since only a few people will siphon off the wealth generated by AI, and that wealth will give them substantial temporal power.
I mean, I just don't see any evidence of that happening. TBF I'm a SWE, so I can only speak to that segment, but it's literally worse than useless for working with anything software-related that's non-trivial...
I see that sentiment here all the time and I don't understand what you must be doing; our projects are far from trivial, and the SWE teams get a lot of benefit from it. Our software infra has always (for almost 30 years) been built to work well with outsourcing teams, so maybe that's it, but I can't understand how you can get results quite that bad.
Butting in here, as I share monkaiju's sentiment: I'm working on a legacy (I can't emphasize this enough) Java 8 app that's doing all sorts of weird things with class loaders and dynamic entities which, among other things, are holding it on Java 8. It has over ten years of development cruft all over it, and code coverage of maybe 30-40%, depending on when you measure it in the 6+ years I've been working with it.
This shit was legacy when I was a wee new hire.
GitHub Copilot has been great at nudging that code coverage up marginally, but ass otherwise. I could write you a litany of my grievances with it, but the main one is how it keeps inventing methods when writing feature code. For example, in a given context, it might suggest `customer.getDeliveryAddress()` when it should be `customer.getOrderInfo().getDeliveryInfo().getDeliveryAddress()`. It's basically a dice roll whether it will remember this the next time I need a delivery address (but perhaps no surprises there). I noticed that if I needed a different address in the interim (like a billing address), it's more likely to get confused between the delivery address and the billing address. Sometimes it would even think the address is in the request arguments (so it would suggest something like `req.getParam('deliveryAddress')`), and this happens even when the request is properly typed!
I can't believe I'm saying this, but IntelliSense is loads better at completing my code, since I don't have to backtrack through what it generated to correct it. I can type `CustomerAddress deliveryAddress = customer`, let it hang there for a moment, and within a couple of seconds it suggests `.getOrderInfo()`, then `.getDeliveryInfo()`, until we get to `.getDeliveryAddress()`. And it gets the right suggestions if I name the variable `billingAddress`, too.
"Of course you have to provide it with the correct context/just use a larger context window" If I knew the exact context Copilot would need to generate working code, that eliminates more than half of what I need an AI copilot in this project for. Also if I have to add more than three or four class files as context for a given prompt, that's not really more convenient than figuring it out by myself.
Our AI guy recently suggested a tool that would take in the whole repository as context. Kind of like sourcebot (maybe it was sourcebot? the exact name escapes me atm). It failed: either there were still too many tokens to process or, more likely, the project was still too complex for it. The thing with this project is that although it's a monorepo, it still relies on a whole fleet of external services and libraries to do some things. Some of those services we have the source code for, but most we don't, so even in the best case, "hunting for files to add to the context window" just becomes "hunting for repos to add to the context window". Scaling!
As an aside, I tried to greenfield some apps with LLMs. I asked Codex to develop a minimal single-page app for a simple internal lookup tool. I emphasized minimalism and code clarity in my prompt. I told it not to use external libraries and rely on standard web APIs.
What it spewed forth is the most polished single-page internal tool I have ever seen. It is, frankly, impressive. But it only managed to do so because it basically spat out the most common Bootstrap classes and recreated the W3Schools AJAX tutorial and put it all in one HTML file. I have no words and I don't know if I must scream. It would be interesting to see how token costs evolve over time for a 100% vibe-coded project.
Copilot is notoriously bad. Have you tried (paid plans) codex, Claude or even Gemini on your legacy project? That's the bare minimum before debating the usefulness of AI tools.
"notoriously bad" is news to me. I find no indication from online sources that would warrant the label "notoriously bad".
https://arxiv.org/html/2409.19922v1#S6 from 2024 concludes it has the highest success rate in easy and medium coding problems (with no clear winner for hard) and that it produces "slightly better runtime performance overall".
> Have you tried (paid plans) codex, Claude or even Gemini on your legacy project?
This is usually the part of the pitch where you tell me why I should even bother, especially as some of these would require me to fork over cash upfront. Why will they succeed where Copilot has failed? I'm not asking anyone to do my homework for me on a legacy codebase that, in this conversation, only I can access; that would be outright unfair. I'm just asking for a heuristic, a sign, that the grass might indeed be greener on that side. How could they (probably) improve my life? And no, "so that you pass the bare minimum to debate the usefulness of AI tools" is not a reason because, frankly, the fewer of these discussions I have, the better.
I'm saying this to help you. Whether you give it a shot makes no difference to me. This topic is discussed endlessly every day on all major platforms, and for the past year or so the consensus has been strongly against using Copilot.
If you want to see if your project and your work can benefit from AI you must use codex, Claude code or Gemini (which wasn't a contender until recently).
> This topic is discussed endlessly every day on all major platforms, and for the past year or so the consensus has been strongly against using Copilot.
So it would be easy to link me to something that shows this consensus, right? It would help me see what the "consensus" has to say about the known limitations of Copilot too. It would help me see the "why" that you seem allergic to even hint at.
Look, I'm trying not to be closed-minded about LLMs, which is why I'm taking time out of my Sunday to see what I might be missing. Hence my comment that I don't want to invest time/money in yet another LLM just for the "privilege" of debating the merits of LLMs in software engineering. If I'm to invest time/money in another coding LLM, I need a signal, a reason why it might be better than Copilot at helping me do my job. Either tell me where Copilot is lacking or where your "contenders" have the upper hand. Why is it a "must" to use Codex/Claude/Gemini, other than trustmebro?
I couldn't tell you, because I've kept it at arm's length, but over the last year our most enthusiastic "AI guy" (as well as another AI user on the team) has churned through quite a few, usually saying something like "$NEW_MODEL is much better!" before littering garbage PRs all over the project.
I don't know how you can write down those numbers and conclude they sound reasonable at all. Corporations literally can't give this trash away for free without consumers being unhappy about it (e.g., the Copilot malware infesting every aspect of Windows). ChatGPT had 800M MAU at one report, but that's a chat interface, and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat interface, when all the potential applications for actually improving their lives are failing badly? I think you're out of touch with the finances of non-tech-industry workers if you think they will.
> ChatGPT had 800m MAU at one report, but that's a chat interface and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat
Even if these things worked great for everyone, the share of free users who convert to paid is in the low single-digit percentages. For OpenAI to have any chance of breaking even in the consumer space, they need to develop an ad business that makes around 20-25% of what Google's does. That's a tall order, given that Google doesn't make good dough from search anymore: SERP clicks are down 80%, with AI summaries being good enough for most.
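Rough numbers make the gap concrete (the 800M MAU figure is from the thread; the 3% conversion rate and $60/month price are assumptions for illustration):

```python
mau = 800e6        # reported ChatGPT monthly active users
conversion = 0.03  # assumed "low single digits" free-to-paid conversion
monthly_price = 60

annual_revenue = mau * conversion * monthly_price * 12
print(f"Subscription revenue: ${annual_revenue / 1e9:.0f}B/year")
# Roughly $17B/year, far short of the hundreds of billions being discussed,
# which is why an ad business keeps coming up.
```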
And let's not forget that for the bubble to sustain itself, people who currently use different LLMs would each need a separate account with every one of them. There's absolutely no way most people will pay for more than one LLM unless they have a lot of disposable income.
Just like all freemium, it's supported by power users.
I pay for gpt myself, and my work pays for Copilot, GPT, Claude, cursor, Glean, and other enterprise tools. And we make enterprise tools on top of AI that our customers pay extra for.
Averaging the revenue over headcount isn't the right model, any more than it would be for Riot Games or YouTube.
I don't know a single person in my (non-tech!) life that doesn't use AI, shy of toddlers and geriatric people.
The famous MIT study (95% of AI initiatives fail, remember that one?) actually found that pretty much every worker was using AI almost daily, but used their personal accounts (hence the corporate ones not being used).
If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash.
Right now they are getting us hooked, and like the dumbasses consumers are, they will become totally dependent and think it will stay this cheap.
I use AI frequently. I am frequently let down, occasionally satisfied, and very rarely impressed. My results seem typical of everyone else I know. It's a free and widely promoted tool that has the potential to be useful; of course people will use it. The features I find most useful aren't about giving me new knowledge: it's formalizing something I wrote, or summarizing some other text that I'm going to read anyway, or can at least reference as needed to confirm the output. This is also where the local models excel.
I also often see people post AI-generated advice and answers in Facebook groups that are simply incorrect, and get roasted, with hundreds of people chiming in on how you can't trust ChatGPT.
I just can't see regular people paying more than (Netflix + HBO + Prime + WM+) combined for an AI subscription. I think you'd see tons of competitors pop up if that were at all viable.
"If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash."
That has indeed been the strategy, but it's not like it always or even usually works out. We've seen plenty of companies that try to raise their prices and people aren't hooked. (Though I am almost certain in this case at least professionals if not the general public will indeed be hooked.)
> actually found that pretty much every worker was using AI almost daily
What they found is that people search the Internet for things and an AI bot is right there. What they didn't find is people using Vibe coded apps, learning from AI or buying AI services. They did find companies buying AI services, but as an experiment. Also, blaming AI is easy when someone messes up and costs a customer or sale. The more that happens, the sooner the company stops experimenting. If that happens in a widespread way, then this bubble collapses.
A good way to think about it is that ChatGPT is well on its way to becoming a verb like Google did. Doesn't roll off the tongue as easily but in terms of brand awareness it feels ubiquitous.
If you really don't know a single such person, you live in a very odd bubble. I know lots of people who used ChatGPT a lot when it first came out, found it funny and occasionally useful, then changed their mind to just finding it funny occasionally, and then eventually stopped because it wasn't that useful and was no longer funny.
None of them ever considered getting a paid account, nor would they have. I'm not saying nobody will, but if you actually don't know any such people then there is something unusual about the crowd you run with.
No, they won't be. Inference costs will continue to drop, and subscription prices will follow as AI is increasingly commoditized. There are 6 different providers in the top 10 models on openrouter. In a commoditized market, there will be no $60/month subscriptions.
If I understood you correctly, and my math is right, your suggestions only cover 6% of $650 billion, and the news suggests AI companies need more than 10x that. So either it's 5 billion people paying $60-80/month, or 500 million people paying $600-800/month, or something in between, plus a little extra.
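The back-of-envelope version, using the $650B target and the price points floated in this thread:

```python
def subscribers_needed(annual_revenue: float, monthly_price: float) -> float:
    """Paying subscribers required to hit a given annual revenue target."""
    return annual_revenue / (monthly_price * 12)

target = 650e9  # the $650B/year figure under discussion
for price in (60, 80, 600, 800):
    n = subscribers_needed(target, price)
    print(f"${price}/mo -> {n / 1e6:,.0f}M subscribers")
# At $60-80/mo, hitting the target alone takes roughly 0.7-0.9 billion paying
# users; a 10x larger target would take an implausible 7-9 billion.
```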
Consumer spending is strong and growing. Don't listen to dregs milking upvotes on the internet; people will easily come up with 4-5 hours of minimum-wage pay per month to cover the cost of a thing they use many times a day.
I don't use AI for anything in my private life, only at work. And I can't really imagine what it could do for me. In no scenario am I paying a monthly subscription for it.
> "must collect an extra $650 billion in revenue every year"
paired with the idea that automating knowledge work would cause only a short-term disruption to the economy doesn't seem logical to me.
I find it funny that Microsoft is scaling compute like crazy and their own products like Copilot are being dwarfed by the very models they wish to serve on that compute.
If one or two of the five big spenders start taking big losses, things will get interesting: their market caps will fall to a fraction of today's overinflated values.
Meanwhile Apple is only spending 1 billion a year to use Google's models.
That's not unreasonable, but if you can't do it without losing money, then there's going to be a problem.
The problem isn't that AI/LLMs can't be useful or generate revenue; the problem is still the cost. We're nowhere near production-ready AI: it can sort of do coding and some medical stuff, but we're not at a level of technology where the potential is fully realized. How much more are investors willing to pour into research?
We're looking at OpenAI contemplating ads and erotic chatbots. A successful business doesn't reach for those ideas to generate profit.
But who's going to be buying any of these products if everyone is out of a job? Other rich people?
Sure I assume there's a good market there, luxury yachts exist after all, but what is a company like Netflix going to do when people are too poor to even afford the streaming services that cost 10 bucks a month?
Not to get conspiratorial, but the only logical thing for me here is that They want as many of the plebs dead as possible so that the remainder of us are beholden to them and their money, once they own all the AI factories.
It's crazy to me how many red flags are being thrown in this investment spree. We're repeating the same mistakes as before (2000). The big companies will be hit hard when they can't show what they spent shareholders' money on, and the fallout will be large and impactful.
If you analyze what's happening right now in the tech industry, you can't help but think there's something deeper going on than what's being talked about in plain sight. There is clear panic among the large tech firms, and the root cause of that panic is still unclear; simply saying these companies want to be first in this new revolution isn't enough to draw a conclusion. Among the top tech firms still sit the original founders who, as we all know, changed the way we live today. Saying they misunderstand what's happening right now and are foolish is too simple. They of all people would know it's a bad idea to go all in, in this manner. The underlying competitive dynamics, whether to do with China or other markets, aren't being talked about, and neither is the obvious question: what exactly is the strategy here?
> They of all people in the world would know it's a bad idea to go all in, in this manner.
Or this kind of financial crash is exactly what they want. If they can drive the markets to failure, only the largest companies can hold on - and acquire more of the failing companies in the process.
Day by day, it's seeming this way. They seem to want to flush out the remaining competitors. Dictators are old news; Umbrella Corporation(tm) is the new form of totalitarianism/authoritarianism.
The issue is that every company in a position to do so is trying to stake a claim in a new market. Not every company will win. No-one has a surefire way of identifying "mistakes" ahead of time.
What alternative do you think would work better, short of central planning?
I read "Devil Take the Hindmost: A History of Financial Speculation" last year, and the current AI bubble is like getting a front row seat to the next edition being written.
The really stupid bubbles end up getting themselves metastasized into the public retirement system, I'm just waiting for that to start any day now.
The question is not "is it a bubble". Bubbles are a desirable feature of the American experiment. The question is "will this bubble lay the foundation for growth and destroy some value when it pops, or will it only destroy value"
I have been off work for over 6 months now. I have been doing so many projects, and exploring so many places, working out, eating healthy, learning, and spending very little money doing so. I actually even quit smoking pot after doing it daily for 10 years. It's been amazing, and I'd rather never go back to work. I don't get how people can get so bored. There's so much to do and see.
From my lived experience you are an outlier. Potentially an extreme one at that.
Where I grew up, the people who didn't work almost universally turned into consumers of everything and creators of basically nothing. The exceptions were retirees who had a lifetime of work experience prior to their idle years. For those folks, gardening and similar hobbies provided meaning, but not much output for society as a whole.
I think if you offered the entire population the ability to do no work other than what they felt like doing, exceedingly few people would be motivated to do the needful. A few more would be motivated to do things like create art and otherwise contribute back to other people but I am thinking along the lines of the 80/20 rule here.
I think our future, if we ever figure out automation and UBI, looks a lot like Wall-E rather than some sort of utopia. In fact, I believe that sort of setup is as close to a utopian society as I can imagine being realistic.
I did apartment maintenance at a place where about half the residents had rent, utilities, and bare necessities paid for by the government. It was easy to play the odds and know which apartment was which the moment you set foot in one. It's not a perfect proxy for what UBI would look like, for many reasons, but it's closer than the average upper-middle-class suburbanite imagines people would act if given the opportunity.
What projects? You are starting from a completely different baseline than the average hypothetical UBI recipient.
I think UBI advocates may have a point once you're 2-3 generations into some sort of UBI system. But bootstrapping that system isn't possible; most people will revert to doing nothing of value to society, no projects, nothing.
I generally agree, but I think for some of the most interesting problems in computer science you need resources that only companies can provide, and that's basically work.
After free UNIX and Linux became available on affordable home computers, I found it was no longer necessary to be at a company to do interesting projects. That was before 1995.
People can find other things to do than work for a wage. I don’t get what your original objection is about when you yourself work even though you don’t have to.
Some local volunteer organizations seem to only have people 60+ years of age.
I'm sure there are. Doesn't mean most people are like that. Consider retirees. Some find meaningful activities, many just rot away out of not having a purpose.
What percentage of people currently living off of welfare are doing meaningful work?
According to google: "Some reports indicate that 26.8% to 28.6% of households on welfare have earned income, which sometimes reflects a focus on households with no work-eligible adults (elderly, disabled)."
According to google, "approximately 57% to 67% of American adults are living paycheck to paycheck."
This doesn't mean they are poor. As their income goes up, so does their spending.
Also according to google, "Approximately 60% to 80% of professional athletes face severe financial distress or go broke within a few years of retirement, particularly in the NFL and NBA. Data suggests 78% of NFL players experience financial hardship within two years of retirement, while about 60% of NBA players are broke within five years."
and:
"though often debated, statistic suggests that up to 70% of lottery winners go broke or face financial distress within three to five years, more conservative estimates indicate about one-third (roughly 33%) declare bankruptcy."
Personally, I think that high schools should have a required course in finance and accounting.
Always such glowing recommendations of human kind from techies.
People devolve like that when they have no purpose or opportunities. Which I’m sure would happen with the real goal of UBI: barely subsistence support in order to grow a larger pool of reserve labor while the rich (who are not degenerate at all[1]) live large.
America offers a free education for all. People are free to move to anyplace in the country. Historically, people migrated to where the opportunities were. Americans are free to start a business any time.
There’s no way to test UBI without implementing it fully. Any experiment that gives people a no-strings-attached stipend isn’t accounting for the fact that the money has a negligible impact on the economy and produces no meaningful change in the workforce. Plus, all of these experiments are time-bound. Participants know the payments will stop.
I also get the feeling that such experiments just prove that giving people money makes them happier. But there’s nothing to account for the fact that prices in the market haven’t changed, the tax structure hasn’t changed, and no goods or services experienced any shortages.
> Every test of UBI so far shows that people continue to work.
I'm not aware of any realistic UBI tests. Could you point me to any?
The ones I'm aware of were either or both:
1. Time limited, so participants were aware that they needed to still have a job or at least be employable after the experiment has concluded.
2. They were funded externally, so participants only reaped the benefits of UBI and didn't incur the drawbacks (i.e., they didn't have to fund the program through much higher income taxes), which could have discouraged them from working.
It was basically a supplementary source of income - money for nothing for a limited time period, not an actual UBI program.
So you believe that the entire driving factor of the consumer goods market would mysteriously disappear if people had enough money to not worry about missing rent?
Rent is defined as unearned income attracted by a dominant market position. If we wanted people to afford rent it'd be more efficient to set rent to zero by fiat.
Hard-working billionaires, famous for successfully working, devolved into abuse island, real saltiness over anyone saying sexual harassment is wrong, and basically a conspiracy to end democracy.
The UBI guy playing games in his mom's basement comes across as harmless in comparison.
UBI doesn’t mean people don’t work. It means work is partially decoupled from basic needs.
People would work for two reasons. One is to make extra money and afford a lifestyle beyond what UBI provides. The second is to… do things that are meaningful. If people derive meaning from work then that’s why they’ll work.
Some people will just sit around on UBI. Those are the same people who sit around today on welfare or dead end bullshit jobs that don’t really produce much value.
I’m not totally sold on UBI but there’s a lot of shallow bad arguments against it that are pretty easy to dismiss.
Governments will collapse before we reach the moment where UBI is needed. Billionaires and companies hardly pay any tax, and if white-collar jobs die down, there's no guarantee the government will even have money to wipe its butt.
Nothing happened to them, they're still around; just consolidated into industrial operations.
The "twist" is they rot as e-waste every 18 months when newer models arrive, generating roughly 30,000 metric tonnes of eWaste annually[0] with no recycling programmes from manufacturers (like Bitmain)... which is comparable to the entire country of the Netherlands.
Turns out the decentralised currency for the people is also an environmental disaster built on planned obsolescence. Who knew.
If the power were spread over the whole year (and not just one hour):
(2600 MWh/year) / (24 * 365 h/year) ≈ 0.30 MW, i.e. about 297 kW of continuous draw. That's not one hair dryer; at ~1.5 kW each, it's roughly 200 hair dryers running all the time.
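Redone with consistent units (energy in MWh, average power in kW); the ~1.5 kW figure for a typical hair dryer is my assumption:

```python
annual_energy_mwh = 2600   # the 2600 MWh/year figure above
hours_per_year = 24 * 365  # 8760 h

# MWh -> kWh, then divide by hours to get average power in kW
avg_power_kw = annual_energy_mwh * 1000 / hours_per_year
print(f"Average draw: {avg_power_kw:.0f} kW")

hair_dryer_kw = 1.5  # assumed typical hair dryer draw
print(f"Equivalent to ~{avg_power_kw / hair_dryer_kw:.0f} hair dryers running continuously")
```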
Why are you even trying to argue energy consumption when the topic is eWaste due to bitcoin ASICs?
Even if we continue down this route, it's something like 15% of global stock transactions going through NYSE; per transaction it's extremely efficient compared to Ethereum. But that's not the argument anyway: it's that the hardware used for mining is barely useful outside of that use case, and its shelf life is very short to boot.
If there were a use case, we'd have found it by now. With 30,000 tonnes a year of it ending up in landfills, surely someone would dig it out or buy it if it had utility.
AI, obviously! A bubble doesn't mean demand vanishes overnight. There is - at current price points - much more demand than supply. That means the market can tolerate price hikes whilst keeping the accelerators busy. It seems likely that we're still just at the start of AI demand as most companies are still finding their feet with it, lots of devs still aren't using it at all, lots of business workflows that could be automated with it aren't and so on. So there is scope for raising prices a lot as the high value use cases float to the top, maybe even auctioning tokens.
Let's say tomorrow OpenAI and Anthropic have a huge down round, or whatever event people think would mark the end of the bubble. That doesn't mean suddenly nobody is using AI. It means they have to rapidly reduce burn e.g. not doing new model versions, laying off staff and reducing the comp of those that remain, hiking prices a lot, getting more serious about ads and other monetized features. They will still be selling plenty of inferencing.
In practice the action is mostly taking place outside public markets, so we won't necessarily know what's happening at the most exposed companies until it's in the rear-view mirror. Bubbles are usually a public-markets phenomenon. See how "ride sharing"/taxi apps played out: market dumping for long periods to buy market share, followed by a relatively easy transition to annual profitability without ever going public. Some investors probably got wiped out along the way, but we don't know who exactly or by how much.
Most likely outcome: the AI bubble will deflate steadily rather than suddenly burst. Resources get diverted from training to inferencing, new features slow down, new models are weaker and more expensive than they otherwise would have been, and the old models are turned off anyway. That sort of thing. People will call it enshittification, but it'll really just be the end of aggressive dumping.
There may not be that much demand at a price that yields profit. Demand at current heavily subsidized “the first dose is always free” prices is not a great indicator unless they find some way to make themselves indispensable for a lot of tasks for a lot of people. So far, they haven’t.
Yes if/when prices rise there'll be demand destruction but I think demand will keep rising for the foreseeable future anyway even incorporating that. Lower value use cases like vibe coding hobby apps might fall by the wayside because they become uneconomic but the tokens will be soaked up by bigger enterprises that have found ways to properly integrate it at scale into their businesses. I don't mean Copilot style Office plugins but more business-specific stuff that yields competitive advantage.
You’re just repeating their predictions. Investors are starting to get nervous that there’s no real proof of returns that could justify burning a Mt. Everest-sized pile of $100 bills.
Yes it's only a prediction based on what I'm seeing. And I'm not disagreeing with the investors that there's overinvestment right now. Prices need to rise, spending on R&D needs to fall for this stuff to make economic sense. I'm only arguing that there's plenty of demand, and assuming price rises happen smoothly over not too short of a period, any demand destruction at the lower levels will be quickly counter-balanced by demand creation at higher value-add levels.
It's also possible non-tech industries just have a collective imagination failure and can't find use cases for AI, but I doubt it.
I know there is demand — I even know a few people in high-level dev roles, one easily in the 99th percentile for pay, who were pulled from their regular, important dev tasks to build agents for paying clients.
I’m not worried about the technology flourishing. I’m worried about my fucking retirement, know what I mean? The question isn’t whether there is demand, it’s how much demand there is, because they’re betting on having all the demand. I don’t have a ton of money. People getting this wrong in a way my financial advisor can’t outmaneuver is an existential threat to my ability to not live in crumbling public housing in a few decades.
We’re talking about this needing to meaningfully move towards making whole digit percentages of the US GDP, soon. Not only are these initiatives largely unprofitable, they’re increasing their expenses based on hopes and vibes. I think a whole lot of people are so focused on short-term gains and being king of the hill that sustainability is a distant afterthought — just like it was in the .com era. I have zero faith in the current cohort of tech leaders to get this right.
Anyone who regularly tries to rent GPUs from VPS providers knows that they often sell out. This isn't a market with lots of capacity nobody needs. In the dot-com bubble there was lots of dark fiber nobody was using. In this bubble, almost every high-end GPU is being fully used by someone.
We can use the GPUs for research (64-bit scientific compute), 3D graphics, and a few other things. We programmers will reconfigure them into something useful.
At least, the GPUs that are currently plugged in. A lot of this bullshit bubble crap is because most of those GPUs (and RAM) are sitting unplugged in a warehouse, because we don't even have enough power to turn them all on.
So if your question is how to use a GPU... I got plenty of useful non-AI related ideas. But only if we can plug them in.
I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.
> I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.
That's my fear.
The problem is these GPUs are specifically made for datacenters, so it's not like your average consumer is going to grab one to put in their gaming PC.
I also worry about what the pop ends up doing to consumer electronics. We'll have manufacturers with a bunch of capacity they can no longer use to build products people want to buy, and a huge backlog of second-hand goods that liquidated AI companies will want to unload. That will put chip manufacturers in a place where they'll need to get their money primarily from consumers if they want to stay in business. That's not the business model they've operated on up until this point.
We are looking at a situation where we have a bunch of oil derricks ready to pump, but shut off because running the equipment costs more than the output is worth.
> As it turns out Nvidia's H100, a card that costs over $30,000 performs worse than integrated GPUs in such benchmarks as 3DMark and Red Dead Redemption 2
I predict there's going to be a niche opening up for companies to recycle the expensive parts of all this compute hardware that AI companies are currently buying, which will probably be obsolete/depreciated/replaced in the next 2-5 years. The easiest example is RAM chips. There will be people desoldering those ICs and putting them on DDR5 sticks to resell to the general consumer market.
A technological arms race just occurred in front of your eyes for the past 5 years and you think they're going to let the stockpile fall into civilian hands?
In 2 years the next generation of chips will be released and these chips will be obsolete.
That's truly e-waste. Now in practice, we programmers find uses for 10+ year old hardware as cheap web hosts, compiler/build boxes, Bamboo, unit tests, fuzzers and whatever. So as long as we can turn them on, we programmers can and will find a use.
But because we are power constrained, when the more efficient 1.8nm or 1.5nm chips get released (and when those chips use 30% or less power), no one will give a shit about the obsolete stockpile.
In what sense? Not competitive for chat bot providers to use? Is that a metric that matters?
> when the more efficient 1.8nm or 1.5nm chips get released
What if they don't get released? You don't have a broad and competitive set of players providing products in this realm. How hard would it be to stop this?
> no one will give a shit about the obsolete stockpile.
You have lived your life with ready access to cutting edge resources. You ever wonder how long that trend could _possibly_ last?
As in: the 1.5nm or 1.8nm GPUs will use less power and therefore can actually be plugged in.
We are power constrained. The GPUs of this generation can't even be plugged in yet because of these power constraints.
When power is a problem, getting lower power GPUs in is a priority. The 1.8nm and 1.5nm next generation is already in production, and will likely launch before these massive GPU stockpiles are used.
And then what? Why plug in last generation's crap when the next generation is shipping?
--------
Today's GPUs have to actually launch and be deployed while they are useful. Otherwise they could become fully obsolete and lose significant value.
I assume even really out-of-date cards and racks will readily find some use, when the present-day alternative costs ~$100k for a single card. You just have to run them at a low enough duty cycle that power use is not a significant portion of the overall cost of ownership.
It’ll be interesting to see what people come up with to get conventional scientific computing workloads to work on 16 bit or smaller data types. I think there’s some hope but it will require work.
> Wild speculation detached from reality which destroys personal fortunes are not "a desirable feature."
This is not the definition of a bubble, and is specifically contrary to what I said.
A good bubble, like the automobile industry in the example I linked, paves the way for a whole new economic modality - but value was still destroyed when that bubble popped and the market corrected.
You may think it's better to not have bubbles and limit the maximum economic rate of change (and you may be right), but the current system is not obviously wrong and has benefits.
The trouble is, you can only tell what was "detached from reality" after the fact. Real-world bubbles must be credible by definition, or else they would deflate smoothly rather than growing out of control and then popping suddenly when the original expectations are dashed by reality.
I ran the numbers on hyperscaler AI capex and the math is not going to work out.
With these assumptions:
– Big 4 keep spending at the current pace for 3 more years
– Returns only start showing after approx. 2 years
– Heavy competition, with around 20% operating margin on AI and cloud
– 9% cost of capital
This is the current reality:
– AWS: approx. $142B/yr
– Azure: approx. $132B/yr
– Google Cloud: around $71B/yr
Combined, that's about $330B to $340B in annual cloud revenue today. And let's say the global public cloud market is $700B total today.
To justify the current capex trajectory under those assumptions, by year 3 the big hyperscalers would need roughly $800B to $900B in new annual revenue just to earn a normal return on the capital being deployed.
That implies combined hyperscaler cloud and AI revenue going from $330B today to $1.2T within 3 years :-))
In other words... cloud would need to roughly 4x in a very short window, and the incremental revenue alone would exceed the entire current global cloud market.
So for the investment wave to make financial sense, at least one of these must be true:
1. Cloud/AI spending globally explodes far beyond all prior forecasts
2. AI massively increases revenue/profit in ads, software, and commerce, not just cloud
3. A winner-takes-all outcome where only 1 or 2 players earn real returns
4. A large share of this capex never earns an economic return and is defensive
People keep modeling this like normal cloud growth. But what we have is insanity.
Azure revenue is growing at 39% year over year. If Microsoft can sustain this growth, in four years Azure will be ~3.73x its current size. This is of course very difficult, but you really don’t need a deus ex machina to hit 4x growth under your assumptions.
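The compounding claim checks out, assuming the 39% rate actually holds for four straight years:

```python
# Compound growth: 39% YoY sustained for four years.
growth = 1.39
print(round(growth ** 4, 2))   # 3.73 — i.e. ~3.73x current size
```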
The issue in the late-90s was all the investment created a lot of real revenue for telecoms and other companies. Even though there were a lot of shenanigans with revenue, a lot of real money was spent on fiber and tech generally.
But the real money was investment that didn’t see a return for the investor. The investments needed to have higher final consumption (such as through better productivity or through displacing other costs) to pay back the investment.
The RAM shortage is extremely temporary. It’ll last as long as it takes for new capacity to come online. RAM shortages and price spikes have happened many times before.
Eventually China will catch up in EUV fabrication and flood the market with cheap silicon. When that happens, a terabyte of RAM will cost what 128 GB costs now.
Cloud gaming is crap and any actual gamer will tell you that. The niche of gamers casual enough not to care about network latency but serious enough to pay real money for cloud gaming is microscopic.
Yes, but that majority doesn't need cloud gaming precisely because those games run just fine on their phone - there's no benefit in putting them in the cloud, that was supposed to be for fancy stuff where you need a beefy GPU for the eye candy.
Speed of light doesn't adhere to Moore's law :) and it's made worse by the fact most everyone connects via WiFi these days and it alone adds a few ms more.
I'm not surprised; you need a lot more servers, and even so, there are a lot of places where low ping times are difficult. While there is a lot of room for latency to go down, 1 light-millisecond is ~300 km (~186 mi). This means that if a computer is 150 km away, 1 ms is the minimum round-trip ping allowed by physics, even if I am talking directly to it.
By that yardstick, we've actually done very well in a lot of cases. :)
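The physics floor above is easy to compute. A small sketch; the 2/3-of-c factor for signals in glass fiber is a typical approximation, not a measured value for any particular route:

```python
# Physics lower bound on ping to a server some distance away.
C_VACUUM_KM_PER_MS = 300.0   # light travels ~300 km per millisecond in vacuum
FIBER_FACTOR = 0.67          # assumed slowdown in glass fiber (~2/3 c)

def min_rtt_ms(distance_km: float, in_fiber: bool = True) -> float:
    """Lower bound on round-trip time: out and back at signal speed."""
    speed = C_VACUUM_KM_PER_MS * (FIBER_FACTOR if in_fiber else 1.0)
    return 2 * distance_km / speed

print(min_rtt_ms(150, in_fiber=False))   # 1.0 ms: the hard vacuum bound
print(round(min_rtt_ms(150), 2))         # ~1.49 ms over a straight fiber path
```

Real routes are longer than straight lines and add switching, WiFi, and queueing delay on top, which is why sub-5 ms to an average home is so hard.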
Even if gaming goes to the cloud, how are they going to run the massive existing library of video games on the dedicated AI inference hardware that everyone is buying right now? Seems like that pivot would require even more spending.
And how are they going to get sub-5ms round trip latency into the average consumer’s home to avoid people continuing to see cloud gaming as a janky gimmick that feels bad to use?
So, it seems like a "FREELANCE" or "OPEN TO FREELANCE" or etc. keyword, on the regular "who is hiring" and "who wants to be hired", would not be too difficult for people to use and understand.