Even very young children with very simple thought processes, almost no language capability, little long-term planning, and minimal ability to form long-term memory actively deceive people. They will attack other children who take their toys and try to avoid blame through deception. It happens constantly.
Dogs too; dogs will happily pretend they haven't been fed/walked yet to try to get a double dip.
Whether or not LLMs are just "pattern matching" under the hood, they're perfectly capable of role play, and of sufficient empathy to imagine what their conversation partner is thinking and thus what needs to be said to stimulate a particular course of action.
> Maybe human brains are just pattern matching too.
I don't think there's much of a maybe to that point, given where some neuroscience research seems to be going (or at least the parts I like reading, relating to free will being illusory).
My sense is that for some time, mainstream secular philosophy has been converging on a hard-determinism viewpoint, though I see the Wikipedia article doesn't really take a stance on its popularity, only laying out the arguments: https://en.wikipedia.org/wiki/Free_will#Hard_determinism
Are you trying to suggest that an LLM is more intelligent than a small child with simple thought processes, almost no language capability, little long-term planning, and minimal ability to form long-term memory? Even with all of those qualifiers, you'd still be wrong. The LLM is predicting what tokens come next, based on a bunch of math operations performed over a huge dataset. That, and only that. That may have more utility than a small child with [qualifiers], but it is not intelligence. There is no intent to deceive.
A small child's cognition is also "just" electrochemical signals propagating through neural tissue according to physical laws!
The "just" is doing all the lifting. You can reductively describe any information processing system in a way that makes it sound like it couldn't possibly produce the outputs it demonstrably produces. "The sun is just hydrogen atoms bumping into each other" is technically accurate and completely useless as an explanation of solar physics.
You are making a point that is in favor of my argument, not against it. I routinely make the same argument as you do against people trying to over-simplify things. LLM hypists frequently suggest that because brain activity is "just" electrochemical signals, there is no possible difference between an LLM and a human brain. This is, obviously, tremendously idiotic. I do believe it is within the realm of possibility to create machine intelligence; I don't believe in a magic soul or some other element that makes humans inherently special. However, if you do not engage in overt reductionism, the mechanism by which these electrochemical signals are generated is completely and totally different from the signals involved in an LLM's processing. Human programming is substantially more complex, and it is fundamentally absurd to reduce our biological programming until it is conveniently equivalent to the latest fad technology and then assume we've solved the secret to programming a brain, when the programs we've written perform exactly according to their programming and no greater.
Edit: Case in point, a mere 10 minutes later we got someone making that exact argument in a sibling comment to yours! Nature is beautiful.
Yes. I also don't think it is realistic to pretend you understand how frontier LLMs operate because you understand the basic principles of the earlier, simpler LLMs that weren't very good.
It's even more ridiculous than me pretending I understand how a rocket ship works because I know there is fuel in a tank and it gets lit on fire somehow, and the rocket is aimed with some fins...
The frontier LLMs have the same overall architecture as earlier models. I absolutely understand how they operate. I have worked in a startup wherein we heavily finetuned Deepseek, among other smaller models, running on our own hardware. Both Deepseek's 671b model and a Mistral 7b model operate according to the exact same principles. There is no magic in the process, and there is zero reason to believe that Sonnet or Opus is on some impossible-to-understand architecture that is fundamentally alien to every other LLM's.
Deepseek and Mistral are both considerably behind Opus, and you could not make Deepseek or Mistral if I gave you a big GPU cluster. You have the weights, but you have no idea how they work and you couldn't recreate them.
> I have worked in a startup wherein we heavily finetuned Deepseek, among other smaller models, running on our own hardware.
Are you serious with this? I could go make a LoRA in a few hours with a GUI if I wanted to. That doesn't make me qualified to talk about top-secret frontier AI model architecture.
Now you have moved on to the guy who painted his Honda, swapped on some new rims, and put some lights under it. That person is not an automotive engineer.
I'm not talking about a lora, it would be nice if you could refrain from acting like a dipshit.
> and you could not make Deepseek or Mistral if I gave you a big GPU cluster. You have the weights, but you have no idea how they work and you couldn't recreate them.
I personally couldn't, but the team behind that startup as a whole absolutely could. We did attempt training our own models from scratch and made some progress, but the compute cost was too high to seriously pursue. It's not because we were some super special rocket scientists, either. There is a massive body of literature published about LLM architecture already, and you can replicate the results by learning from it. You keep attempting to make this out to be literal fucking magic, but it's just a computer program. I guess it helps you cope with your own complete lack of understanding to pretend that it is magical in nature and can't be understood.
No, it's just obvious that there is a massive race going on with trillions of dollars on the line. No one is going to reveal the details of how they are making these AIs. Any public information that exists about them is way behind SOTA.
I strongly suspect that it is really hard to get these models to converge, though, so I have no idea what your team could've theoretically made, but it certainly would've been well behind SOTA.
My point is that if they are changing core elements of the architecture, you would have no idea, because they wouldn't be telling anyone about it. So thinking you know how Opus 4.6 works just isn't realistic until development slows down and more information comes out about them.
Short-term memory is the context window, and it's a relatively short hop from the current state of affairs to an MCP server that gives the model access to a big queryable scratch space where it can note down anything it thinks might be important later. This is similar to how current-gen chatbots take multiple iterations to produce an answer; they're clearly not just producing tokens right out of the gate, but rather using an internal notepad to iteratively work on an answer for you.
Or maybe there's even a medium-term scratchpad that is managed automatically, just fed all context as it occurs, while a parallel process mulls over that content in the background, periodically presenting chunks of it to the foreground thought process when it seems like it could be relevant.
All I'm saying is there are good reasons not to consider current LLMs to be AGI, but "doesn't have long term memory" is not a significant barrier.
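For what it's worth, that "queryable scratch space" doesn't have to be anything exotic. Here's a minimal sketch of the idea in plain Python, ignoring the actual MCP wire protocol; every name here is made up, and a real version would use embedding search rather than substring matching:

    # Toy sketch of the "queryable scratch space" idea: the model notes
    # things down mid-conversation and pulls them back later by keyword.
    # All names are made up; a real version would sit behind an MCP
    # server and use embedding search rather than substring matching.
    import time

    class Scratchpad:
        def __init__(self):
            self.entries = []  # (timestamp, text) pairs, oldest first

        def note(self, text):
            # Tool the model calls when something seems worth remembering.
            self.entries.append((time.time(), text))

        def query(self, keyword, limit=5):
            # Tool the model calls to pull relevant notes back into context.
            hits = [t for _, t in self.entries if keyword.lower() in t.lower()]
            return hits[-limit:]

    pad = Scratchpad()
    pad.note("user prefers metric units")
    pad.note("project deadline is March 3")
    print(pad.query("deadline"))  # ['project deadline is March 3']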
Intelligence is about acquiring and utilizing knowledge. Reasoning is about making sense of things. Words are concatenations of letters that form meaning. Inference is tightly coupled with meaning which is coupled with reasoning and thus, intelligence. People are paying for these monthly subscriptions to outsource reasoning, because it works. Half-assedly and with unnerving failure modes, but it works.
What you probably mean is that it is not a mind in the sense that it is not conscious. It won't cringe or be embarrassed like you do, it costs nothing for an LLM to be awkward, it doesn't feel weird, or get bored of you. Its curiosity is a mere autocomplete. But a child will feel all that, and learn all that and be a social animal.
Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence. Probabilistic prediction is inherently incompatible with deterministic deduction. We're years into being told AGI is here (for whatever squirmy value of AGI the hype huckster wants to shill), and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
Our computer programs execute logic, but cannot reason about it. Reasoning is the ability to dynamically consider constraints we've never seen before and then determine how those constraints would lead to a final conclusion. The rules of mathematics we follow are not programmed into our DNA; we learn them and follow them while our human-programming is actively running. But we can just as easily, at any point, make up new constraints and follow them to new conclusions. What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
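For concreteness, the made-up rule here amounts to "the answer is just the right-hand operand"; a throwaway sketch of that reading (my interpretation of the example, nothing more):

    # Assumed reading of the made-up rule: 1 (+) n = n, i.e. the result
    # is the right-hand operand. Purely an interpretation of the example.
    def weird_add(a, b):
        assert a == 1, "rule only defined for a left operand of 1"
        return b

    assert weird_add(1, 2) == 2   # "1 + 2 is 2"
    assert weird_add(1, 3) == 3   # "1 + 3 is 3"
    assert weird_add(1, 4) == 4   # the conclusion drawn from the made-up rule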
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4. This is deterministic, and it is why LLMs are not intelligent and can never be intelligent no matter how much better they get at superficially copying the form of output of intelligence.
This is not even wrong.
>Probabilistic prediction is inherently incompatible with deterministic deduction.
And this is just begging the question again.
Probabilistic prediction could very well be how we do deterministic deduction - e.g. the weights could be strong enough, and the probability path for those deduction steps hot enough, that the path is followed every time, even if the overall process is probabilistic.
Personally I think not even wrong is the perfect description of this argumentation. Intelligence is extremely scientifically fraught. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being “not even wrong”.
>Intelligence is the ability to reason about logic. If 1 + 1 is 2, and 1 + 2 is 3, then 1 + 3 must be 4.
Human intelligence is clearly not logic-based, so I'm not sure why you have such a definition.
>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Good thing LLMs can handle this just fine I guess.
Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past the initial years. You have a class of people who really think they know how intelligence works, but when you build it that way it fails completely.
> One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
> Good thing LLMs can handle this just fine I guess.
LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules.
>They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, nevermind children.
>LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly.
Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?
>They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, nevermind a made-up set of logical rules.
LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo).
I still have not been convinced that LLMs are anything more than super fancy (and expensive) curve-fitting algorithms.
I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in multi-parametric space.
Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon.
>Okay, but chemical and electrical exchanges in a body with a drive to not die are so vastly different from a matrix multiplication routine on a flat plane of silicon
I see your "flat plane of silicon" and raise you "a mush of tissue, water, fat, and blood". The substrate being a "mere" dumb soul-less material doesn't say much.
And the idea is that what matters is the processing - not the material it happens on, or the particular way it is.
Air molecules hitting a wall and coming back to us at various intervals are also "vastly different" from a "matrix multiplication routine on a flat plane of silicon".
But a matrix multiplication can nonetheless replicate the air-molecules-hitting-wall audio effect of reverberation on 0s and 1s representing the audio. We can even hook the result up to a movable membrane controlled by electricity (what pros call "a speaker") to hear it.
The inability to see the point of the comparison - that an algorithmic model of a physical (or biological, same thing) process can still replicate some of its qualities, even if much more simply, in a different domain (0s and 1s in silicon and electrical signals vs. molecules of some material interacting) - is therefore annoying.
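To make that concrete: convolving a dry signal with a room's impulse response is how digital reverb is actually done, and that convolution is literally a matrix multiplication. A toy sketch (all numbers made up):

    # Reverb as matrix multiplication: convolving a dry signal with a
    # room's impulse response IS digital reverb, and convolution is a
    # (Toeplitz) matrix multiplication. Toy numbers throughout.
    import numpy as np

    dry = np.array([1.0, 0.0, 0.5, 0.0])   # the "0s and 1s representing the audio"
    ir  = np.array([1.0, 0.6, 0.3])        # toy impulse response: direct sound + 2 echoes

    wet_conv = np.convolve(dry, ir)        # the usual way to apply reverb

    # The exact same thing as an explicit matrix multiplication:
    n, m = len(dry), len(ir)
    T = np.zeros((n + m - 1, n))
    for j in range(n):
        T[j:j + m, j] = ir                 # each column is a shifted copy of the IR
    wet_mat = T @ dry

    assert np.allclose(wet_conv, wet_mat)  # same result either way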
Intelligence does not require "chemical and electrical exchanges in an body". Are you attempting to axiomatically claim that only biological beings can be intelligent (in which case, that's not a useful definition for the purposes of this discussion)? If not, then that's a red herring.
There is an element of rudeness to completely ignoring what I've already written and saying "you know [basic principle that was already covered at length], right?". If you want to talk about contributing to the discussion rather than being rude, you could start by offering a reply to the points that are already made rather than making me repeat myself addressing the level 0 thought on the subject.
Repeating yourself doesn't make you right, just repetitive. Ignoring refutations you don't like doesn't make them wrong. Observing that something has already been refuted, in an effort to avoid further repetition, is not in itself inherently rude.
Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology. For any given X, "AI can't do X yet" is a statement with an expiration date on it, and I wouldn't bet on that expiration date being too far in the future. This is a problem.
It is, in particular, difficult at this point to construct a meaningful definition of intelligence that simultaneously includes all humans and excludes all AIs. Many motivated-reasoning / rationalization attempts to construct a definition that excludes the highest-end AIs often exclude some humans. (By "motivated-reasoning / rationalization", I mean that such attempts start by writing "and therefore AIs can't possibly be intelligent" at the bottom, and work backwards from there to faux-rationalize what they've already decided must be true.)
> Repeating yourself doesn't make you right, just repetitive.
Good thing I didn't make that claim!
> Ignoring refutations you don't like doesn't make them wrong.
They didn't make a refutation of my points. They asserted a basic principle that I agreed with, but assumed that accepting the principle leads to their preferred conclusion. They made this assumption without providing any reasoning whatsoever for why that principle would lead to that conclusion, whereas I had already provided an entire paragraph of reasoning for why I believe the principle leads to a different conclusion. A refutation would have to start from there, refuting the points I actually made. Without that you cannot call it a refutation. It is just gainsaying.
> Any definition of intelligence that does not axiomatically say "is human" or "is biological" or similar is something a machine can meet, insofar as we're also just machines made out of biology.
And here we go AGAIN! I already agree with this point!!!!!!!!!!!!!!! Please, for the love of god, read the words I have written. I think machine intelligence is possible. We are in agreement. Being in agreement that machine intelligence is possible does not automatically lead to the conclusion that the programs that make up LLMs are machine intelligence, any more than a "Hello World" program is intelligence. This is indeed, very repetitive.
You have given no argument for why an LLM cannot be intelligent. Not even that current models are not; you seem to be claiming that they cannot be.
If you are prepared to accept that intelligence doesn't require biology, then what definition do you want to use that simultaneously excludes all high-end AI and includes all humans?
By way of example, the Game of Life uses very simple rules, and is Turing-complete. Thus, the Game of Life could run a (very slow) complete simulation of a brain. Similarly, so could the architecture of an LLM. There is no fundamental limitation there.
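To show how simple those rules are, here's a toy step function for the Game of Life on a wrap-around grid (the Turing completeness comes from constructions like glider guns built on top of these rules, not from anything extra in the update itself):

    # Minimal Game of Life step: one neighbor count and two conditions
    # per cell. The grid wraps around at the edges (a torus).
    import numpy as np

    def step(grid):
        # Count the 8 neighbors of every cell by summing shifted copies.
        nbrs = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # A cell is alive next step iff it has 3 neighbors,
        # or it is alive now and has 2 neighbors.
        return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

    # A "blinker": oscillates with period 2, so two steps return it to start.
    g = np.zeros((5, 5), dtype=int)
    g[2, 1:4] = 1
    print((step(step(g)) == g).all())  # True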
If you want to argue with that definition of intelligence, or argue that LLMs do meet that definition of intelligence, by all means, go ahead[1]! I would have been interested to discuss that. Instead I have to repeat myself over and over restating points I already made because people aren't even reading them.
> Not even that current models are not; you seem to be claiming that they cannot be.
As I have now stated something like three or four times in this thread, my position is that machine intelligence is possible but that LLMs are not an example of it. Perhaps you would know what position you were arguing against if you had fully read my arguments before responding.
[1] I won't be responding any further at this point, though, so you should probably not bother. My patience for people responding without reading has worn thin, and going so far as to assert I have not given an argument for the very first thing I made an argument for is quite enough for me to log off.
> Probabilistic prediction is inherently incompatible with deterministic deduction.
Human brains run on probabilistic processes. If you want to make a definition of intelligence that excludes humans, that's not going to be a very useful definition for the purposes of reasoning or discourse.
> What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Have you tried this particular test, on any recent LLM? Because they have no problem handling that, and much more complex problems than that. You're going to need a more sophisticated test if you want to distinguish humans and current AI.
I'm not suggesting that we have "solved" intelligence; I am suggesting that there is no inherent property of an LLM that makes them incapable of intelligence.
Are you ever concerned about the consequences of what you are making? No one really knows how this will play out and the odds of this leading to disaster are significant.
I just don't understand people working on improving AI. It just isn't worth the risk.
>I just don't understand people working on improving AI. It just isn't worth the risk.
A cynical/accelerationist perspective would be: it enables you to rake in huge amounts of money, so no matter what comes next, you will be set up to endure it better than most.
Of course. I think about this at least once a week, maybe more often. I think that the technology overall will be a great net benefit to humanity, or I wouldn't touch it.
I’m younger than most on this site. I see the next decades of my life being defined by a multi-generational dark age via a collapse in literacy (“you use a calculator, right?”), in median prosperity (the only truly functional distribution system we have figured out is labor), and in agency (kinda obvious). This outcome is now, as of 2026, essentially priced into the public markets and accepted as fact by most media outlets.
“It’s inevitable” is at least a hard point to argue with. “Well I’M so productive, I’m having the time of my life”, the dominant position in many online tech spaces, seems short-sighted at best.
I miss being a techno optimist, it’s much more fun. But it’s increasingly hard.
I really think the doom consensus is largely an online phenomenon. We're in a tense period like the early 80s, and that would be true without AI in the mix, but I think it's a matter of perspective. We're certainly still way ahead of the 1910s and the 1940s, for instance (it's on us, btw, to make sure we don't fall to that in time).
Every generation has its strains, and the internet just amplifies them because outrage is currency. Those strains are things you only start to notice as you get older, so they seem novel when in the scheme of humanity they're basically standard.
Fwiw, if the market had actually priced it in, it would be in freefall, since the market would shortly be irrelevant. We are due for a correction soon though.
Internet discourse is a facsimile of real life and often not how real life operates in my experience.
So I see all the discourse around extremes on either end, and based on lived experience and working in the field I think there's a much neater middle ground we'll ultimately arrive at, thanks to people working very hard to land the plane, so to speak.
I answered the more important question of a seemingly lost youngin: how to deal with the stress of inheriting a world in a bit of turmoil.
That said, we already trivially see it advancing math and science research as an assistive tool, development, and more. Extrapolate that out a few more generations and it helps us unlock a whole bunch of things on the skill tree of life, so to speak.
Yes, doomerism is a symptom of severe doomscrolling addiction. All the people who talk like this spend all day on X. They sound like delusional drug addicts TBH.
The only thing seriously reducing trust in elections is anti-democratic politicians who will ALWAYS find a convenient reason to claim the election is rigged, and many of their followers will believe and propagate that lie to create distrust in the election.
There is really nothing we can do to satisfy these people except create some kind of structure they demand which will somehow be made to heavily lean in their favor. That is what will satisfy them. Nothing else will.
idk, if I were in control of a country in the EU I would realize, unfortunately for pretty much everyone on the planet, that we have made a drastic miscalculation by relying on the US so heavily for defense.
However, that is not something that can be reversed meaningfully in less than a decade. So for now, I would play the long game like Germany while working to get the EU to build up a military force large enough to significantly reduce our dependence on the US.
It's not as if the US hasn't repeatedly requested that European nations invest in their defense for the past few decades.
Looking at it dispassionately as a European living in the US, if you wanted to foment the sort of mistrust many Americans have of Europe, I don't think you could have created a more invidious policy.
Even though European defence investment was lacklustre - don't forget that, between the lines, those requests mean to buy US defence tech and stay dependent on the US in time of war.
Countries that have actually invested have the same problems - dependence on US tech and its unreliable leadership. Those who had stockpiles of American weapons (or even US components in mostly domestically made weapons) still need to coordinate with the US (I can't find it at the moment, but I definitely read about Sweden being unable to send weapons due to American components inside).
France is mostly (totally??) independent from America in the matter of defence - and Americans hate the French for that. America really hated de Gaulle's wish for the military and political independence of Europe from America. But he was unsuccessful in his vision, which essentially cemented this status quo: "Americans will have military bases in European backyards, Europeans will be tame good boys, and Americans will provide security with a pinky promise" - the Truman Doctrine, I believe.
(West) Germany's extreme pacifism is also thanks to the USA's efforts not to repeat the Versailles treaty's failures and the rise of a new Hitler-like figure.
> if you wanted to foment the sort of mistrust many Americans have of Europe, I don't think you could have created a more invidious policy
Sounds like something from Project 2025 propaganda preparations.
I will remind you that only the USA has ever triggered NATO Article 5, and the whole of Europe came to help in its now-infamous "war on terror" - even countries that weren't in NATO at the time (though they were obviously aligned and wanted to join) - and lost lives there.
I might have believed this statement if the current administration had gone 110% into isolationism, as their election slogan was "America First". At the time it was phrased as: they won't help Ukraine, NATO, or any other organisation/action happening outside the USA. Now it means: the USA will take anything by force whether you like it or not.
Also, you want to have your cake and eat it too. You still want to have tens of thousands of soldiers and your bases in the EU, and you want EU countries to invest in your defence sector (but pwease pwease don't get too independent, otherwise Uncle Sam will get angwy), even though you want to freaking go to war against NATO countries, because Amerika stronk. Let's also not forget the very close cooperation and access to local military bases given to Americans by their European counterparts.
Many NATO countries in Europe have been steadily investing in defence for 10+ years (mostly since the 2014 Crimea annexation), and many more woke up with ruzzia's 2022 total war on Ukraine.
I want the European part of NATO to be stronger and more decisive, and actions are happening, but Europe still has democracy, not some weird authoritarian kakistocracy with an oligarchical flavour.
So let's not pretend that Europe should pay for the USA's wish for total hegemony, worldwide policing, and a global reserve currency. Europeans lost their lives in the USA's wars and enabled this American vision of global hegemony for the last 70+ years.
These rambles prove to me yet again what an information bubble the USA lives in - one dictated by geriatric 80-year-olds still living 20+ years in the past inside their heads, and transmitted by the ignorant talking heads of the 24h news cycle.
It can be reversed in a year. In 1941 the US increased its production of tanks by 7x. In 1942 it increased production again by 4x. This idea that building industry takes decades needs to die a painful death.
There's a certain large European country with plenty of resources that is pretty famous for scaling its tank production just a couple years before the US did.
It is a real problem that AIs will basically confirm that most inquiries are true. Just asking a leading question often results in the AI confirming it is true, or stretching reality to accommodate the answer being true.
If I ask if a drug has a specific side effect and the answer is no, it should say no, not try to find a way to say yes that isn't really backed by evidence.
People don't realize that when they ask a really specific leading question that no one has a real answer to, the AI will try to find a way to agree, and this is going to destroy people's lives. Honestly, it already has.
> Almost all games these days are basically like a work in progress, so if you pirate them then the game doesn't stay up to date.
Which, as a mod author and consumer, isn't always a bad thing. More than once, I had to drop just enjoying a game to patch my published mods, because of some update that is automatically pushed out and that people have to accept in order to even boot a single-player game. Why? I don't know, but it's really annoying sometimes.
Besides, cracking groups release smaller patches too nowadays, so while you might not get the update the same hour it was published on Steam, usually within a week or two the same group that uploaded the original release has released another patch.
When you start a subscription, you're agreeing to pay X amount every Y period of time; you're not starting a new agreement every single Y period of time.
They can cancel the prior tier or bump up the price on renewal though. This is the problem with subscriptions: you become complacent and accept incremental changes until you finally notice that you’re being rinsed.
And actually some subscriptions can include unilateral price increases in the contract (a subscription is a contract) with early termination fees. It just isn’t commonly done because word gets around and you will lose business. You typically only see this in predatory industries where there are few alternatives and the service is necessary, like local waste management.
If the contract is unfair enough you can usually escape it in court or arbitration, but nobody wants to go through that.
No, that doesn't make sense at all. You've paid for consistent terms for that Y period of time. Not cancelling the subscription when it's up for renewal is an implicit agreement to any new terms. And I'm sure if you'd read those terms in the first place, you'd come to the same understanding.
(And it's not even that: the X you're charged is subject to change upon renewal!)
I'm not arguing that this is a good or bad thing, just pointing out the reality of every single subscription agreement I've signed up for online.
They can cancel the subscription if you don't agree to the new proposition after they've fulfilled their contract. But they can't just change the terms of the agreement after it was made.
But doing so would mean risking losing customers who were just too lazy to cancel, so most businesses don't like it. (Spotify did cancel their old contracts, though, for people who had not agreed to the recent price hike.)
I think your question is reasonable, but no, I do not think a company gets to promote a service as having no ads as part of the sell, and then put ads in by default.
Not the person you're replying to, but it just feels like rent-seeking. Amazon is already a gigantic corporation, pretty much everyone spends lots and lots of money on Amazon, it just felt like a way to try and squeeze more money out of their existing customers.
ETA:
I mean, I'm sure there is some exception to this, but generally speaking everyone hates ads. Part of the reason the whole "cable cutting" thing happened was that everyone hated paying a lot of money to some cable company just to be bombarded with advertisements. At least that's a big reason why I did it.
Now all these media companies realized that they can start shoving ads at us again and people will keep paying.
Obviously I'm not entitled to having media at a specific price indefinitely, but I'm perfectly allowed to not like it when companies engage in rent-seeking bullshit.
It wouldn't bother me as much if you could still buy media, but as far as I can tell most TV shows don't get Blu-ray releases anymore. The media companies realized that it's more profitable for them to make you pay for the same media forever instead of a lump sum, I guess preferably with you watching corporate brainwashing to buy products.
I suspect once the heat on this settles down, every streaming service is going to start forcing ads on us at all times, and then the only way to fight back on this will be bittorrent.
Or just stop watching. I seem to be out of tune with what people want in a TV show nowadays; I don't find much enjoyable. I accept there was never that much, but given how much content is produced now, I would have expected more in my sweet spot.
I agree with you, though I would say what is happening here is more like strip mining vs cutting dead wood.
I don't know that it should be legal to buy a company and then pay for it by loading up the company with debt obligations. It seems like a form of value destruction in order to enrich a bunch of vultures.
Fundamentally, it amounts to saying: maybe we can buy this company and plunder it, with some % chance that it will still stay afloat and keep generating profit after being gutted to service a debt that should not be attached to the company at all and that provided no value to anyone but the vultures.
I'm not sure, but this seems like a form of anti-social behavior that destroys value for everyone except the people plundering the company. It is almost like piracy and we should honestly try to figure out a way to not allow large companies to be destroyed in this manner.
We just shouldn't let people buy profitable companies because they think they can make a return by destroying the business and then bleeding out a small profit once the company has been gutted. It isn't good for the economy, the employees, or really anyone except the plunderers.
Companies that go out of business hurt more than the owners - they hurt the employees, the community, the state (which has to care for the employees let go), etc.
That is unfortunate, but it is good for society to have rapid turnover of unprofitable businesses. The employees will be fine and get new jobs. When one company goes under, they will go to another. You don't work for a company, you work for an industry, and unless the layoff is due to industry-wide issues, you will be fine.
It's bad for society to have rapid turnover, full stop. It's disruptive and stressful to the humans involved, and can be disastrous for the environment (if a bankrupt company just leaves a bunch of waste behind, or already did and can't be sued to cover the cleanup) and for the rest of the economy, local or larger (both their customers and their suppliers are affected), and it causes a huge amount of wasted time and resources that should be avoided where possible.
We've learned that businesses are lazy, cheap, and untrustworthy, and will lie, steal, cheat, and abuse everything unless you write strong rules and enforce them regularly. It's in society's best interests to incentivize running good businesses, not creating messes and declaring bankruptcy.
The last damn thing I ever want is some centrally planned hell with some worthless bureaucrat telling me how to run my business when he has no idea how. This is a competition. Sink or swim. And if you can't swim you should be out of the game.
Some problems are much easier to solve than others. The problems you are bringing up are far more intractable and far harder and more expensive to solve.
What about Apple there? Bringing golden offerings to their god-king and so supporting the further corruption of the regime. One of the few with the power/money to stand against them, yet instead kneeling before Trump like a teen beauty pageant hopeful.
as a former MSFT employee (who quit for reasons, well before the layoffs) I am not permitted to disparage or portray my former employer in a negative light.
I'm just mentioning this for no reason whatsoever. It popped into my head, for some reason.
LLMs are certainly capable of this.