A lot of viruses insert themselves into your DNA. They may disrupt its 3D structure, cause misrepair or duplications during DNA repair, or simply insert somewhere and break something important. All of these are ways that can contribute to kickstarting or accelerating cancerous growth.
I've really seen both, I suppose. A lot of devs don't take accountability or responsibility for their code, especially if they haven't done anything that actually got shipped and used, or in general haven't done much responsible adulting.
Look up the Jevons Paradox: when something becomes more efficient, consumption can go up, often due to price elasticity.
Think of it like this: imagine car prices go from $200,000 to $20,000. You wouldn't sell 10x the number of cars, you'd sell far more. In fact, I just looked up the numbers: worldwide, only 100K or so cars sell at $200K and higher, whereas roughly 80 million cars are in that affordable category.
So a price drop of 90% allowed sales to go from 0.1M to 80M! I think this means we need more engines, tires, roads, gas, and spare parts.
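The implied price elasticity of demand from these figures can be sketched in a few lines of Python. The numbers below are the comment's own ballpark estimates, not real market data, and the log (arc) elasticity formula is just one common way to compute it:

```python
import math

# Ballpark figures from the comment above (illustrative, not market data).
price_luxury = 200_000   # dollars per car at the high end
price_mass = 20_000      # dollars per car at the affordable end
units_luxury = 0.1e6     # ~100K cars sold at the high price
units_mass = 80e6        # ~80M cars sold at the low price

# Arc (log) price elasticity of demand: % change in quantity / % change in price.
elasticity = math.log(units_mass / units_luxury) / math.log(price_mass / price_luxury)
print(f"implied elasticity: {elasticity:.2f}")  # roughly -2.9: strongly elastic demand
```

Anything with elasticity more negative than -1 means a price cut grows total revenue, which is the Jevons-style point being made.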
Anthropologists gauge how civilized a tribe or society was by looking at whether it took care of the elderly and what its child survival rates were. The USA leads the developed world in child poverty and child homelessness, and has the highest rate of child death due to violence. Conservatives often bring up the statistics broken down by race. It turns out that bringing people over as slaves and then, after emancipation, refusing to provide land, education, fair access to voting rights, or housing (by redlining etc.) - all policies advocated by conservatives of times past - was not the smartest thing to do. Our failure as a civilized society began with, and is in large part a consequence of, the original sin of the USA.
[This comment is USA-centric.] I agree with the idea of making our society better and more equitable - reducing homelessness, hunger, and poverty, especially for our children. However, I think redirecting this at AI datacenter spending is a red herring, and here's why: as a society we give a significant portion of our surplus to government, and we then vote on what the government should spend it on. AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a year's worth of government spending. We need to change our politics to redirect taxation and spending to achieve a better society. Having a private healthcare system that spends twice as much for the poorest outcomes in the developed world is a policy choice. Spending more than the rest of the world combined on the military is a policy choice. Not increasing the minimum wage so that at least everyone with a full-time job can afford a home is a policy choice (google "working homelessness"). VC is a teeny tiny part of the economy. All of tech is only about 6% of the global economy.
You can increase min wage all you want, if there aren't enough homes in an area for everyone who works full time in that area to have one, you will still have folks who work full time who don't have one. In fact, increasing min wage too much will exacerbate the problem by making it more expensive to build more (and maintain those that exist). Though at some point, it will fix the problem too, because everyone will move and then there will be plenty of homes for anyone who wants one.
I agree with you 100%! When housing is restricted, any additional surplus will be extracted as rents. I am for passing laws that make it much easier for people to obtain permits to build housing where there is demand. Too much residential zoning is single-family only. Texas does a better job of not restricting housing than California, for example. Many towns vote blue and talk the talk, but do not walk the walk.
> AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a years worth of government spending.
I didn't check your math here, but if that's true, AI datacenter spending is a few orders of magnitude larger than I assumed. "massive" doesn't even begin to describe it
The US federal budget in 2024 had outlays of 6.8 trillion dollars [1].
Nvidia's market cap (which reflects nearly all AI investment) is currently 4.4 trillion dollars [2][3].
While that's hardly an exact or exhaustive accounting of AI spending, I believe it does demonstrate that AI investment is clearly in the same order of magnitude as government spending, and it wouldn't surprise me if it's actually surpassed government spending for a full year, let alone half of one.
Global datacenter spending across all categories (ML + everything else) is roughly 0.9 - 1.2 trillion dollars for the last three years combined. I was initially going to say "a quarter of the federal budget," but picked something I thought was more conservative to account for announced spending, 2025, etc. I picked 2022 onward for the LLM wave. In reality, solely ML-driven, actually realized-to-date spending is probably about 5% of the federal budget; the big announcements will spread out over the next several years of build-out. Nonetheless, it's large enough to drive GDP growth a meaningful amount - but not large enough that redirecting it elsewhere will solve our societal problems.
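The comparison can be checked with back-of-the-envelope arithmetic, using only the figures quoted in this thread (the $6.8T FY2024 outlays and the ~$0.9-1.2T three-year datacenter total); these are the thread's estimates, not audited numbers:

```python
# Back-of-the-envelope check using figures quoted in this thread
# (all values in trillions of dollars; rough estimates, not audited data).
federal_outlays_2024 = 6.8                # US federal outlays, FY2024
datacenter_3yr_total = (0.9 + 1.2) / 2    # midpoint of the 3-year combined range

share_of_one_year = datacenter_3yr_total / federal_outlays_2024
print(f"3-year datacenter total vs one year of outlays: {share_of_one_year:.0%}")
# ~15% of a single year's outlays, consistent with "doesn't cover half a year".
```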
>We need to change our politics to redirect taxation and spending to achieve a better society.
Unfortunately, I'm not sure there's much on the pie chart to redirect, percentage-wise. About 60% goes to non-discretionary programs like Social Security and Medicaid, and 13% is interest expense. While "non-discretionary" programs can potentially be cut, doing so is politically toxic and arguably counter to the goal of a better society.
Of the remaining discretionary portion, half is programs like veterans benefits, transportation, education, income security, and health (in order of size), and half is military.
FY2025 spending in total was 3% over FY2024, with interest expense, Social Security, and Medicare making up most of the increase ($249 billion) [1], and they likely will for the foreseeable future [2], in part due to how many baby boomers are entering their retirement years.
Assuming you cut military spending in half, you'd free up only about 6% of federal spending. Moving the needle more than this requires either cutting programs and benefits, improving the efficiency of existing spend (like for healthcare), or raising more revenue via taxes or inflation. All of this is potentially possible, but the path of least resistance is probably inflation.
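The pie-chart arithmetic above can be verified directly. The shares below are the approximate percentages quoted in the comments (60% mandatory, 13% interest, discretionary split half-and-half with the military), not official budget figures:

```python
# Sanity check of the pie-chart arithmetic, as fractions of total federal spending
# (shares are the thread's approximations, not official budget figures).
mandatory = 0.60                              # Social Security, Medicaid, etc.
interest = 0.13                               # interest expense
discretionary = 1.0 - mandatory - interest    # ~27% left over

military = discretionary / 2                  # "half military" -> ~13.5%
freed_by_half_cut = military / 2              # cut military spending in half

print(f"discretionary: {discretionary:.0%}, military: {military:.1%}, "
      f"freed by a 50% cut: {freed_by_half_cut:.1%}")
# freed_by_half_cut comes out near 6.8%, matching the "only about 6%" claim.
```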
I think the biggest lever is completely overhauling healthcare. The USA spends very inefficiently, and for subpar outcomes. In practice, the federal government already pays for the neediest patients - the elderly, at-risk children, the poor, and veterans - whereas insurance rakes in profits from the healthiest working-age people. Given aging, and the impossibility of growing faster than GDP forever, we'll have to deal with this sooner or later. Drug spending, often the boogeyman, is less than 7% of overall healthcare spending.
There is massive waste in our military spending due to the pork-barrel nature of many contracts. That'd be the second big bucket I'd reform.
I think you're also right that inflation will ultimately take care of the budget deficit. The trick is to avoid hyperinflation and punitive interest rates that usually come along for the ride.
I would also encourage migration of highly skilled workers to help pay for an aging population of boomers. Let's increase our taxpayer base!
I am for higher rates of taxation on capital gains over $1.5M or so; that'll also help avoid a stock market bubble to some extent. One can close various loopholes while at it.
I am mostly arguing for policy changes that redistribute more equitably. I would make the "charity" status of a college commensurate with the amount of financial aid given to students and the absolute cost of tuition, for example. I am against student loan forgiveness for various reasons - it's off topic for this thread, but happy to expand if interested.
If you want a chance at real creativity and flexibility, and you have a decent GPU, go local. Check out ComfyUI, download models, and play around. The mainstream services have zero knobs to play with; local is infinite.
The Chinese Room experiment applies equally well to our own brains - in which neuron does the "thinking" reside, exactly? Searle's argument has been countered in many different ways. At the end of the day, you're either a closet dualist like Searle, or, if you take the more scientific, physicalist view (i.e. brains are made of atoms etc., and brains are sufficient for consciousness/minds), you are in the same situation as the Chinese Room: things break down into tissues, neurons, molecules, atoms. Which atom knows Chinese?
The whole point of this experiment was to show that if we don't know whether something is a mind, we shouldn't assume it is and that our intuition in this regard is weak.
I know I am a mind inside a body, but I'm not sure about anyone else. The easiest explanation is that most people are like that as well, considering we're the same species and I'm not special. You'll have to take my word on it, as my only proof is that I refuse to be seen as anything else.
In any case LLMs most likely are not minds due to the simple fact that most of their internal state is static. What looks like thoughtful replies is just the statistically most likely combination of words looking like language based on a function with a huge number of parameters. There's no way for this construct to grow as well as to wither - something we know minds definitely do. All they know is a sequence of symbols they've received and how that maps to an output. It cannot develop itself in any way and is taught using a wholly separate process.
I am arguing against Searle's Chinese Room argument; I am not positing that LLMs are minds. I am specifically pointing out that your brain and the Chinese Room are both subject to the same reductionist argument Searle uses - if we accept, as you say, that you are a mind inside a body, which neuron or atom does this mind reside in? My point is: if you accept Searle's argument, you have to accept it for brains, including your brain, as well.
Now, separately, you are precisely the type of closet dualist I speak of. You say that you are a mind inside a body, but you have no way of knowing that others have minds -- take this to its full conclusion: you have no way of knowing that you have a "mind" either. You feel like you do, as a biological assembly (which is what you are). Either way, you believe in some sort of body-mind dualism without realizing it. Minds are not inside of bodies. What you call a mind is a potential emergent phenomenon of a brain (potential, because brains get injured etc.).
> In any case LLMs most likely are not minds due to the simple fact that most of their internal state is static.
This is not a compelling argument. Firstly, you can add external state to LLMs via RAG and vector databases, or various other types of external memory, and their internal state is no longer static and deterministic (and they become Turing complete!).
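The external-state idea can be sketched with a toy example: a frozen model whose weights never change, wrapped with a writable memory that is retrieved at prompt time. Everything here is a hypothetical stand-in - `frozen_model` fakes an LLM, and retrieval is naive keyword overlap rather than a real vector database:

```python
# Toy sketch of RAG-style external memory around a frozen model.
# Hypothetical stand-ins: `frozen_model` fakes an LLM; retrieval uses
# crude keyword overlap instead of a real vector database.
memory: list[str] = []

def remember(fact: str) -> None:
    """Write to external memory; the model's own weights never change."""
    memory.append(fact)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored facts by keyword overlap with the query."""
    qwords = set(query.lower().split())
    scored = sorted(memory, key=lambda f: -len(qwords & set(f.lower().split())))
    return scored[:k]

def frozen_model(prompt: str) -> str:
    """Stand-in for an LLM: echoes the context it was handed."""
    return f"answer based on: {prompt}"

def ask(query: str) -> str:
    # The system's state grows via `memory`, even though the model is static.
    context = " | ".join(retrieve(query))
    return frozen_model(f"{context}\nQ: {query}")

remember("the project deadline is Friday")
remember("hazelnuts grow on the Black Sea coast")
print(ask("when is the project deadline?"))
```

The point of the sketch: the model-plus-memory system accumulates state across interactions, even though the model component alone is static.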
Second, if you could rewind time, then your argument suggests that no other humans would have minds, because you could access the same state of mind at that point in time (it's static). Why would your travelling through time suddenly erase all other minds in reality?
The obvious answer is that it doesn't, those minds exist as time moves forward and then they reset when you travel backwards, and the same would apply to LLMs if they have minds, eg. they are active minds while they are processing a prompt.
> and their internal state is no longer static and deterministic (and they become Turing complete!).
But it's not the LLM that makes modifications in those databases - it just retrieves data which is already there.
> Why would your travelling through time suddenly erase all other minds in reality?
I'm not following you here.
> they are active minds while they are processing a prompt.
Problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself: it's curious, it gets bored, it's learning constantly. LLMs do none of that.
You couldn't get a real mind to answer the same question hundreds of times without it being changed by that experience.
> But it's not the LLM that makes modifications in those databases - it just retrieves data which is already there.
So what?
> I'm not following you here.
If you're time travelling, you're resetting the state of the world to some previous well-defined, static state. An LLM also starts from some well-defined static state. You claim this static configuration means there's no mind, so this entails that the ability to time travel means that every person who is not time travelling has no mind.
> Problem is that this process doesn't affect the LLM in the slightest. It just regurgitates what it's been taught. An active mind makes itself.
People who are incapable of forming new memories thus don't have minds?
Growing hazelnuts is labor-intensive, back-breaking work on Turkey's northern (Black Sea) coast. It's also a key part of the region's rural economy.