If AGI occurs, some form of communism will be necessary, no? How else will the costs of UBI be covered?
It's our work, the earth's resources, and the internet that it's been born from; it should benefit us all.
I'm assuming there won't be much meaningful work left for most of the population to do that AI can't do. Some people think the opposite. That seems to be the main point of contention.
Just don't click them. This and tons of other services wouldn't exist without revenue streams...
EDIT: Based on the tsunami of responses, perhaps a hybrid offering with a paid ad-free version? Even then they would only be building a single product, so directional conflict would still arise...
Most forms of advertisement should be considered criminal. Most modern ads are borderline psychological warfare against a population that doesn't even understand it's at war, and losing, because the effects aren't immediately noticeable and are very rarely directly physical.
Tear down someone mentally until you can get them to agree to part with their money. Call them ugly, call them fat, call them depressed. Show them how boring and miserable their life is before <product> is a part of their life. But only ever indirectly - if you're too direct the negative emotions they're feeling will be associated with your product instead of themselves. Tease them with beautiful people having fun and enjoying life. This could be you if you buy <product>. Happy and successful. Surrounded by friends laughing and smiling. Remember - ending on a happy emotion makes people associate those feelings with <product> which will increase sales of <product>. Cute polar bears. Drink coke.
It's a form of assault and I refuse to pretend otherwise.
There are very few forms of advertisement that I don't have a major problem with: public-space bulletin boards, word of mouth (non-sponsored), and dedicated infomercial spaces (no video-mercials with the over-the-top, comedy-style "failing at life" bits used to sell the product).
Price, product/service, why you need it, and why yours over any competitor's. Non-targeted ads by default unless the user opts in to targeted ads.
Mom & pop shops are totally capable of emotion-targeted advertising and it's a problem when they do it too. Corporations just use it more.
For example: how does one advertise perfume over television, a product that requires you to smell it? Emotional manipulation and the promise of fantasy. Nothing to do with perfume. A proper commercial would at least try to describe the smell, maybe mention the high- and low-note fragrances used. Nope. Beautiful models. Lavish party. Brand name.
Fixing advertising will never happen. Advertising runs the world because it already won the war.
So you believe that if your teacher or parent tells you not to overeat sugar, not to drop out of school, and to take care of your looks, because these things will cost you wealth, relationships, and comfort...
You believe this type of messaging shouldn't be shown because we are too mentally weak to handle it? You don't believe parents should parent their children either? You think anything that can possibly make a human form an opinion is inherently evil? Do you think a company is wrong for, say, showing how boring your life is in order to sell you a book?
Or that a workout machine shouldn't show what it can potentially offer your life, or even how it could extend it? That a school selling prestige and the highest level of education should never advertise, so you don't feel dumb?
I'm not saying this is the ideal utopia; this is reality. For businesses to work they need money, and for a country to prosper it needs successful businesses, government-run or otherwise. You want to teach kids to handle reality, not to play victim. Of course this is just my way of seeing things, but I believe that being able to use what's on offer to your advantage is what makes successful people. And I'll be damned if someone in the States believes they don't have all the opportunities in the world, with the most access to whatever they want, and with the government regulating the things you're so afraid of to at least a reasonable level. Identifying evil in everything and thereby shutting yourself off is counterproductive, in my opinion; honestly, it's even a blessing to be able to think like this. In many countries this can't even be a factor, because these companies can't exist to send you these evil messages in the first place; they don't survive in those small economies.
There are literally hundreds if not thousands of studies about precisely how to steer people in aggregate and take advantage of every little bit of human psychology to maximize profits. It's not about people being mentally weak; it's about corporations and marketers knowing how best to break past people's mental barriers.
You are not unique among the millions of people. Advertising works, and it also works on people who adamantly believe that it doesn't work on them, often because those people think of themselves as more intelligent than average.
Almost nobody claims to like advertising. People might prefer advertising over subscriptions as a form of payment, but not because they like ads; it's because ads take money from them indirectly rather than directly. Yet despite this near-universal hatred of advertisements, advertising is one of the world's largest businesses.
It would not be in the top 10 of the world's largest businesses if it didn't work on hundreds of millions of people. It bears repeating: you are not special. Neither am I. Despite my best attempts at avoiding advertising, I can nearly guarantee it affects my purchasing decisions, perhaps without my being aware of it at all. It sits there subconsciously, like a parasite. Because that's how advertising actually works.
Nobody sees an ad and goes "I want <ad product>". That's not how advertising actually works, but it's how people think it works. Three months down the line you're buying beer for a party and grab a pack of Heineken without thinking too much about it. And that is when they have won.
Seeing ads can still affect you psychologically even if you don't click them.
Also, lots of ads prey on people with worse impulse control, who bankroll the rest of us who don't click ads. Similar to how casinos are bankrolled by the addicts at the slot machines, or how many games are bankrolled by the addicts spending all their savings on in-game items.
Doesn't make me feel warm and fuzzy.
Plus there's something just aesthetically pleasing about an ad-free experience. I started paying for YouTube Premium to avoid ads, and I must say it's a much nicer experience.
> Also, lots of ads prey on people with worse impulse control, who bankroll the rest of us who don't click ads.
This reminds me of the Mark Twain adage of "Telling a man he can't have steak just because a baby can't chew it."
I don't want to pay money & subscriptions to every site I visit because some folks don't have impulse control. Similarly, the prevalence of alcoholism in society shouldn't prevent me from having a glass of wine with dinner.
> I don't want to pay money & subscriptions to every site I visit because some folks don't have impulse control.
You've got it exactly backwards. The reason you don't have to pay subscriptions is people with poor impulse control. If ads were less effective (e.g. if the low-impulse-control people didn't exist), more sites would require subscriptions, because the ad inventory wouldn't be able to cover costs.
> I don't want to pay money & subscriptions to every site I visit because some folks don't have impulse control.
I have bad news for you: the absolutely infinite capacity for greed, and the enshittification that follows from it, mean that you're going to pay a subscription fee and still have your brain pickled by ad-based propaganda, just like cable TV.
Before ads, the service has one clear goal - build the best product they can for their users.
After ads, the goal is less clear. They still need to please users, but they also have to please advertisers. The needs of users and advertisers aren't always going to be aligned, and so users are right to lose trust in Perplexity's results.
If I were an investor, this would make me nervous. Make something that is far better than your competitors and users will pay. If you make something that is only marginally better than your competitors, users are only going to pay, at most, a marginal fee. Perplexity is signaling that their product is mediocre.
I have a 1 year subscription to the pro plan that I got for free. Unless it gets way better, I won't pay for the next year.
I do pay for Claude and think it's easily worth the $20 / month.
> the service has one clear goal - build the best product they can for their users.
I don't believe this in the case of anything funded with big VC money. But let's say that Perplexity is trying to build the best product they can. They are scraping content and selling it (or giving it away) to their users, but at whose expense? Users get a convenient search engine, and content makers get their work scraped. But now Perplexity will let content makers pay them money so they can get traffic back to their sites. This is kind of just the internet services' 30-year timeline on a speedrun.
1. Fundamentally propaganda with little real regulatory oversight. Numerous arguments to this point, and about the negative impact of advertising, have been made over the last several decades.
2. Tech companies seem to eventually get into the business of selling data and/or manipulating the user experience to better suit advertisement. I can’t think of a single company that has adopted advertising and not scaled it over time.
3. Security and privacy concerns inherent with letting third parties manipulate page content.
The presence of ads always degrades whatever they're attached to, and ads are a visible indicator that you're being tracked.
But, even worse than that, when a company becomes dependent on ad revenue, then that company will always, sooner or later, start prioritizing the interests of ad companies over those of their users.
These are the reasons why I shy away from ad-supported products and services if at all possible. I prefer to use products and services that are optimized for users rather than advertisers.
I would allow ads if I could be absolutely assured that malware won't be served to me through them. I'm not talking about something that requires clicking on the ads themselves, because I never click on ads. I'm talking about malicious code being executed as soon as the ad is served. I know Google does everything it can to prevent this, but I don't trust that it's a solved problem.
Every time ads are allowed in, the quality degrades — products and services can be optimised for solving problems, or for ongoing revenue, but not both.
The latter comes at the expense of the former; it doesn't enable it.
Lots of websites are basically unusable without an adblocker, being mostly advertising and hardly any content; it's not quite that bad with YouTube yet, but getting there.
Simple response: never have I experienced a "thing" (website, app, entertainment media, etc.) that did not have ads and thought, "You know what, this would be better with ads." And never have I experienced a "thing" that has ads and thought, "These ads are really making the experience better."
Ads ruin everything they touch. They make every experience worse. Anything + ads is a worse experience for everyone than that thing without ads.
There's almost nothing unique about HN as a tech news site these days. Sure, a deep SME occasionally shows up on a deeply technical article, but the comment gravity on this website at this point is largely centered on tech-adjacent topics like this one (business practices, regulations, legal action, the social implications of tech).
At this point the only thing that makes HN different from another subreddit or X or Bluesky is that the userbase values privacy highly, hates advertising, has an affinity for open software, and some other largely cultural values. If you're still using HN as a generic "high discussion quality tech news site" I think it's time to change that expectation. If you want to ask a site whose culture has evolved to hate advertising why they hate advertising, it's sort of like going to a watermelon-haters club and asking them why they hate watermelons so much.
Successive new major leaps in technology (concurrent with a dying culture of consumer protection) lead to qualitatively worse advertising experiences.
We're going from skippable ads (cable/DVR) to unskippable ads with surveillance (streaming) to algorithmic output/content that can be influenced by advertising with no transparency or disclosure.
For ads to be effective, they need to be targeted. Any ad-supported model incentivizes identifying and tracking users across as many services as possible and data mining to build profiles.
People don't want to understand that someone has to pay for all that bandwidth and all that "free" compute.
I'm using Kagi for that reason, just like I'm using Fastmail. I give someone money, they give me a service and support if needed. Seems fair and simple.
When the internet started, it was weird to pay for something ephemeral, like certain bits being delivered to you, but totally normal to pay for magazines. I think that early mindset just continued and became the default. Paying for electronic media just feels weird to people.
> Paying for electronic media just feels weird to people.
Most people in the target audience of Perplexity probably pay for at least two streaming services (Spotify, Netflix, etc.), so I don't think it's about "electronic media". It's more that search seems, from the outside, like a simple thing that has always been free, and there's a very strong player offering a pretty good service for most people.
It's really hard to keep good employees working at these places for a long duration. Innovation is something your customers do. Your job is to make it as boring as fucking possible.
I could only stand it for 3 years before I had to quit (Samsung). I know others who still enjoy the experience though.
Three things that get me about current AI discourse:
- The public focus on AGI is almost a distraction. By the time we get to AGI highly-specialised models will have taken jobs from huge swaths of the population, SWE and CS are already in play.
- The assumption that AI must carry out every task a role does before it can replace the role. I see this a lot on HN. What if SWEs get 50% more efficient and they fire half? That's still a gigantic economic impact. Even at the current state of the art this is plausible.
- The notion that everyone laid off above will find new employment in the opportunities AI creates. Perhaps it's just a gap in my knowledge, but what opportunities are so large that they'll make up for the efficiencies we're starting to see? I understand the inverting population pyramid in the Western world also helps somewhat (more retirees, fewer workers).
> What if SWEs get 50% more efficient and they fire half?
Zero sum game or fixed lump of work fallacy. Think second order effects - now that we spend less time repeating known methods, we will take on more ambitious work. Competition between companies using human + AI will raise the bar. Software has been cannibalizing itself for 60 years, with each new language and framework, and yet employment is strong.
New products that push that bar will command a decent margin (and good staff salaries) as long as there's a business case and demand, while feature sets that currently command a decent margin will become available at dirt-cheap prices (managed by one- or two-person outfits).
Your comment really got me thinking, it's time to upskill haha. Aside from biotech and robotics do you see any areas particularly ripe for this push?
For example, if the core field of innovation is biotech, there will be unexpected needs in downstream and upstream fields like medical tooling, biosensors, carbon capture, and novel materials. The internet blossomed into a thousand businesses; I expect the same thing to happen again. We gain new capabilities, they open up demand for new products, and so we get new markets and industries. Desires always fill up the existing capability space like a gas fills a room.
It's probably true, but just not for SWEs.
Many roles will go the way of secretaries; the cost of building an administrative tool will decrease to the point where there is less need for a specialised role to handle it.
The question is going to be about the pace of disruption: is there something special about these new tools?
Just like the robo-taxis and self-driving cars that are supposed to be driving us around. Not to mention the non-fiat currency everyone can easily use to buy goods nowadays.
Waymo was providing 10,000 weekly autonomous rides in August 2023, 50,000 in June 2024, and 100,000 in August 2024.
Not everything has this trajectory, and it took 10 years more than expected. But it's coming.
Not saying AI will be the same, but underestimating the impact of having certain outputs 100x cheaper, even if many times crappier, seems like a losing bet, considering how the world has gone so far.
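Back-of-envelope on that trajectory, using only the ride counts above (a toy Python sketch of the implied rate, not a forecast):

    # implied growth rate from the reported weekly ride counts
    rides_2023 = 10_000   # weekly autonomous rides, August 2023
    rides_2024 = 100_000  # weekly autonomous rides, August 2024
    yearly_factor = rides_2024 / rides_2023     # 10x in roughly a year
    monthly_factor = yearly_factor ** (1 / 12)  # ~1.21, i.e. ~21% growth per month
    print(f"{monthly_factor:.2f}x per month")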
Waymo is a great example, actually. They serve Phoenix, SF, and LA. Those locations aren't chosen at random; they represent a small subset of all the weather and road conditions that humans handle easily.
So yes: handling 100,000 passengers is a milestone. The growth from 10,000 to 100,000 implies it’s going to keep growing exponentially. But eventually they’re going to encounter stuff like Midwest winters that can easily stop progress in its tracks.
About driverless cars: new tech adoption often starts slow, until the iceberg tips, and then change is very quick. Like mobile phones today.
I remember thinking, before smartphones had all-day batteries and good touchscreens: these people really think the population will use phones more than desktop computers? Here we are.
I wouldn't say so, because the cars are not at all autonomous in our understanding of autonomous.
The cars aren't making all their decisions in real time like a human driver. Waymo meticulously mapped, and continues to map, every inch of the traversable city. The cars don't know how to drive; they know how to drive THERE.
It would be like if I went to the DMV to take a driving test. I would fail immediately, because the parking lot is not one I've seen and analyzed before.
"true" self driving is not possible with our current implementation of automobiles. You cannot safely mix automobiles that self-drive with human drivers. And the best solution is to converge towards known routes. We don't even necessarily how to program the routes - we can instead encode them in the road itself.
It might occur to you that I'm speaking about rail. The reality is that it's trivial to automate rail systems, but the variables of free-form driving can't be automated.
In the first case there are inherent safety constraints preventing it, and thus it's not available for the public to freely use; it's highly regulated. GPT writing code, by contrast, is already generally available and in heavy use, and in the main there are no such life-and-death concerns.
In the second case there are inherent technical challenges to using non-fiat currency, and the FX volatility against fiat is wild. There are also barriers and inconveniences to conversion. With GPT writing code, the user can review for quality and still be many times more productive, and there are far fewer fees and less risk of loss.
It's risky to take two failed or slow innovations and assume that all innovations will be failed or slow.
On a small subsection of US roads, that is. British roads, for example, don't make any sense to these systems.
However, I generally think being a software developer might not be a career in 10 years, which is terrible to think about. Designer too. And all of this is built by passing off other people's work as their own.
These models are not repositories or archives of others' work that they simply stitch together to create output. It's more accurate to say that they view work and then build an algorithm that can output the essence of that work.
For image models, people are often pretty surprised to learn that they are only a few gigabytes in size, despite training on petabytes of images.
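To make that concrete, here's a toy calculation; the byte counts are hypothetical ballpark figures, not any specific model's:

    # toy arithmetic: why a model can't be an archive of its training set
    training_bytes = 2e15  # ~2 PB of training images (hypothetical ballpark)
    model_bytes = 4e9      # ~4 GB checkpoint (hypothetical ballpark)
    ratio = model_bytes / training_bytes
    print(f"{ratio:.0e} model bytes per training byte")  # 2e-06, ~2 bytes per MB seen

At a ratio like that there simply isn't room to store the images; what's left is closer to a distilled recipe for producing images like them.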
Non-general AI won't cause mass unemployment, for the same reason previous productivity-enhancing tech hasn't. So long as humans can create valuable output machines can't, the new, higher-output economy will figure out how to employ them. Some won't even have to switch jobs, because demand for what they provide will be higher as AI tools bring down production costs. This is plausible for SWEs. Other people will end up in jobs that come into existence as a result of new tech, or that presently seem too silly to pay many people for — this, too, is consistent with historical precedent. It can result in temporary dislocation if the transition is fast enough, but things sort themselves out.
It's really only AGI, by eclipsing human capabilities across all useful work, that breaks this dynamic and creates the prospect of permanent structural unemployment.
We do have employment problems arguably caused by tech: currently the bar of minimum viable productivity is higher than it used to be in a lot of countries. In Western welfare states there aren't jobs anymore for the people who were doing groundskeeper-ish things 50 years ago (apart from subsidized public-sector employment programs).
We need to come up with ways of providing meaningful roles for the large percentage of people whose peg shape doesn't fit the median job hole.
The irregularities of many real-world problems will keep even humans of low intelligence employable in non-AGI scenarios. Consider that even if you build a robot to perform 99% of the job of, say, a janitor, there's still that last 1%. The robot is going to encounter things that it can't figure out, but any human with an IQ north of 70 can.
Now, initially this still looks like it's going to reduce demand for janitors by 99%. So it's still going to cause mass unemployment, right? Except it's going to substantially reduce the cost of janitorial services, so more will be purchased. And not just janitorial services, of course. We'll deploy such robots to do many things at higher intensity than we do today, as well as many things that we don't do at all right now because they're not cost effective. So in equilibrium (again, the transition may be messy), with 99% automation we end up with an economy 100x the size and about the same number of humans employed.
I know this sounds crazy, but it's the historical norm. Today's industrialized economies already have hundreds of times the output of pre-industrial economies, and yet humans mostly remain employed. At no point did we find that we didn't want any more stuff, actually, and decide to start cashing out productivity increases as lower employment rather than more output.
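The toy arithmetic behind that claim, taking the 99% / 100x figures above at face value:

    # equilibrium sketch: 99% automation, 100x total output
    automation_share = 0.99   # fraction of the job robots handle
    output_multiplier = 100   # total output in the new, cheaper equilibrium
    human_share = 1 - automation_share             # the irreducible 1%
    human_labor = output_multiplier * human_share  # 100 * 0.01 = 1.0
    print(f"{human_labor:.2f}")  # 1.00 -> about the same human employment as before

Whether output really scales 100x is, of course, the load-bearing assumption.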
We're quickly approaching the limit of how smart the average human can get; that's the problem, and it's what sets this apart from the historical norm.
This worked before because commonly people couldn't even read or do basic math. We figured that out, and MUCH more, and now everyday people are taught higher-order thinking for many years. People today are extremely smart compared to any other point in human history.
But IMO we've kind of reached a ceiling. We can't push people further than we already have. In the last two decades this became very evident. Now almost everyone goes to college, but not all of them make it through.
The low end has been steadily rising, to the point where you now need a degree for 20 bucks an hour. That's with our technology NOW. We're already seeing the harmful effects of this, as average or below-average people struggle to make even low incomes.
It's true that humans will always find new stuff to do. The issue is that as time goes on, the bar for this new stuff rises higher and higher, and we can only push humans, as a whole, so far.
If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?
A lot of the predictions of job loss are predicated on an unspoken assumption that we're sitting at "task maximum", so any increase in productivity must result in job loss. That's only true if there is no more work to be done. But no one seems willing, able, or even aware that they need to make that point substantively: to prove that there is no more work to be done.
Historically, humans have been absolutely terrible at predicting the types and volumes of future work. But we’ve been absolutely incredible at inventing new things to do to keep busy.
> If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?
They'd be excited at getting 100x of that 100% output out of an AI for 20 dollars a month and laying you off as redundant. If you aren't scared of the potential of this technology, you are lying to yourself.
“Fixed lump of work fallacy” as noted by commenter above.
If a company can get 100% more output, they don't fire half their people just to stand still and gain no additional productivity.
You're relying on the work needed by employers being theoretically unlimited. You're also assuming all of this additional work can't be handled by an LLM.
First of all, the fixed lump of work is not automatically a fallacy. We do know there is a limit, since there are limits to the amount of work human brains can even comprehend. We don't know where exactly this limit is, but a limit DOES exist, and an LLM may possibly cover everything up to it.
Second, you have to assume that this "additional work" can't be handled by the LLM. How can you be sure? Did you think about what this work actually is? My first thought was "cleaning the toilets."
> What forum is this???
I assume it's a forum of people who don't base their lives on buzzword concepts. "Fixed lump of work fallacy" is a fancy phrase for a fancy concept; that doesn't mean it's an actual fallacy, or actually true. You literally just threw that quote up there as if the slightly clever wording itself proves your point.
What exactly is this additional work that will pop up once LLMs are around and so powerful that they can do all human intellectual work? Can you do a concrete, real-world analysis without jumping to vague hypotheticals dressed up in fancy conceptual quotations? The last guy used analogies as the baseline of his reasoning. It wasn't convincing to me.
This assumes that the bottleneck to profitability is the limit of software engineers they can afford to hire.
If they’re happy with current rate of progress (and in many companies that is the case), then a productivity increase of 100% means they need half the current number of engineers.
Is the number of developers usually the reason feature development goes slowly, though? Nowhere I've worked has that really been the case; it's usually fumbled strategic decision-making and pivots.
And the “current rate” is competitively defined. So if AI can make software developers twice as productive, then the acceptable minimum “current rate” will become 2x faster than it is today.
A computer already does in seconds what it used to take many people to do. In fact the word “computer” was a job title; now it describes the machine that replaced those jobs.
Yet people are still employed today. They are doing the many new jobs that the productivity boost of digital computing created.
I don't know why people think analogies from the past predict or prove anything about the future. It's as if a past situation is assumed to apply completely to the current one via analogy, EVEN though the two situations are DIFFERENT.
The computer created jobs because it takes human skills to talk to the computer.
It takes very little skill to talk to an LLM. Why would your manager ask you to prompt an LLM to do something for you when he can do it himself? You going to answer this question with another analogy?
Just think reasonably and logically. Why would I pay you a 300k annual salary when ChatGPT can do it for nothing? It's pretty straightforward. If you can't justify something with a straightforward answer, you're likely not being honest with yourself.
Why don't we use actual evidence-based logic to prove things, rather than justify them by leaping over some unreasonable gap with an analogy? Think about the current situation; don't base your hopes on a past situation and assume the current one will turn out the same because of an analogy.
My job is not to do a certain fixed set of tasks, my job is to do whatever my employer needs me to do. If an LLM can do part of the tasks I complete now, then I will leave those tasks to the LLM and move on to the rest of what my employer needs done.
Now you might say AI means that I will run out of things that my employer needs me to do. And I'll repeat what I said above: you've got to prove that. I'm not going to take it on faith that you have sussed out the complete future of business.
The future, and events that haven't happened yet, can't be proven out, because they're unknown.
What we can do is make a logical, theoretical extrapolation. If AI progresses to the point where it can do every single task you can do, in seconds, what task is left for you to do? And how hard is that task? If LLMs never evolve to the point where they can clean toilets, well, then you can do that; but why would the boss pay you 300k to clean the toilet?
These are all logical conjectures about a possible future. The problem is that if AI continues on the trendline it's traveling now, I can't come up with a logical chain of thought where YOU or I keep our 300k+ engineering jobs.
This is what I keep hearing, from not just you but a ton of people: the analogy about how technology only ever created more jobs before, with no illustration of a specific scenario for what happens here. If LLMs replace almost every aspect of human intellectual analysis, design, art, and engineering, what is there left to do?
Clean the toilet. I'm not even kidding. We still have things we can do, but the end comes when robotics catches up and robots become as versatile as the human form. The true end is when the boss has ChatGPT clean the toilet.
If they're high-growth, yes; if they're in the majority of businesses that are just trying to maximise profit with negligible or no growth, then likely not.
When electricity got cheap, we used MORE electricity.
Think how many places you see shitty software currently.
My wife was just trying to use an app to book a test with the doctor; it did not work at all. The staff said they know it doesn't work. They still give out the app.
We are surrounded by awful software. There's a lot of work to do, if only it could be done cheaper. Currently only rich companies can make great software.
Well, that probably happens to some extent, but I am quite confident that some smaller shops will just say, "Hey, make an app that works 50% of the time and that's good enough," then fire half the staff.
Oh, not just smaller shops. I have many issues with Android and other Google products, from bugs to things that just plain don't work, that have persisted for years with no action. Surely Google has the resources? Right? Riiight?
This is a human problem, not a technology problem.
> We are surrounded by awful software. There's a lot of work to do, if only it could be done cheaper. Currently only rich companies can make great software.
Lots of the awful software is made by awfully rich companies, and lots of good software is made by bootstrapped devs.
To mention some interesting examples, both Amazon and Google have gone from great to meh soon after they went from startups to entrenched market leaders.
I guess this is why I’m excited. AI will give smaller motivated teams a lot more firepower. One committed person can (or may soon be able to) take on the might of a big company.
These companies are making crap software because their scale makes them hard to compete with. They know there’s no other good options.
I think Sam Altman's right that there'll be a one-person unicorn company at some point.
On the third point, I think we've always seen this happen, even in massive shocks like the Industrial Revolution (and the Second Industrial Revolution with assembly lines, etc., and the Computer Age).
It might be hard for people to retrain to whatever the new opportunities are though. Although perhaps somewhat easier nowadays with the internet etc.
The myth that the Industrial Revolution was a wonderful time is just that, a myth. The actual reality of the AI revolution will likely be the same. Record number of billionaires and record number of people in deep poverty at the same time.
Do people really think the Industrial Revolution was "a wonderful time"? Basically the first thought that comes to mind for me is massive migration to urban centers, along with huge amounts of poverty, squalid living conditions, and dissociation from your own labor. I feel like that's basically what was taught to me in high school too, not some recently learned insight.
And I agree with you. Further, the economic prosperity wasn't equal for everyone. And increased worker efficiency isn't directly (or sometimes at all) linked to worker satisfaction or even increased wages.
I’ve heard some people say it. That economic disruption doesn’t matter because “all the pieces fall into place” eventually and the Industrial Revolution being an example.
Well, yeah, but right now we're reaping many benefits from the Industrial Revolution. Less malnutrition, for sure. Not saying it's the same as the AI boom though.
Not trying to put a value on life in general at all, just on the nature of the jobs. You might reply "distinction without a difference", and, well, the fact that you'd think so would be one of my points about the labor ;).
Personally, preindustrial life sounds pretty rough, but it's all just apples and oranges! The future will continue to happen; to critique the present and how we got here is not to extol the past (unless, you know, you are a particularly conservative person, I guess).
> What if SWEs get 50% more efficient and they fire half?
This is kinda ironic in a thread that's basically about the AI hype landscape, but you've just reduced the amount of SWE "power" your example organization has by 25% (half the staff at 1.5x efficiency is 0.75x the output).
Buy stocks and try to own the means of production. Things are going to begin to flatten out in terms of salary or even decrease as competition increases due to productivity gains.
> SWE and CS are already in play.
> What if SWEs get 50% more efficient and they fire half?
You know what happened last time we got 50% more efficient? It was when GitHub and npm arrived. LLMs are saving time and making us more efficient, but that's peanuts compared to the ability to just "download a lib that does X" instead of coding this shit on your own. And you know what happened after that? SWE positions skyrocketed.
I tend to default to DT for any open-ended/creative problem. Write it down in Apple Notes, let it simmer on the back burner for a while and add ideas as they come.
My medical data is affected, and I can't disagree more.
Besides, pharma is not interested in finding cures, for obvious reasons. I am getting all the downsides with none of the upsides.
I've heard the no-moat theory before, and I don't get it.
The open-source models are about a year or two behind the latest ChatGPT in terms of quality. That means companies will always be willing to pay a premium to use ChatGPT rather than rely on open source.
So even if or when Google and Apple (and perhaps Meta) catch up in AI quality, there's still plenty of money to be made for OpenAI.
One interesting byproduct of late-game capitalism like this: as more and more jobs get destroyed by AI, so will subscriptions. So it might be a mixed bag in the end for the tech giants if there's no real economy left to buy the products, but we're a long way from there.
It’s the “we will make this so easy for you that you never want to switch” moat. Definitely akin to Slack, which also has the integration glue to keep you on their platform. Even though there are many Slack alternatives now that are really great, most companies on Slack will opt to stay there rather than invest in migrating.