Maybe I'm just getting old, but I don't understand the appeal of the always-on AI assistant at all. Even leaving privacy/security issues aside, and even if it gets super smart and capable, it feels like it would have a distancing effect from my own life, and undermine my own agency in shaping it.
I'm not against AI in general, and some assistant-like functionality that works on demand to search my digital footprint and handle necessary but annoying administrative tasks seems useful. But at some point it becomes a solution looking for a problem: to squeeze out the last ounce of context-aware automation and efficiency, you would have to outsource parts of your core mental model and situational awareness of your life. Imagine being over-scheduled like an executive whose assistant manages their calendar, except it's not a human, it's a computer, and instead of maximizing the leverage of your attention as a captain of industry, it just maintains velocity on a personal rat race of your own making with no especially wide impact, even on your own psyche.
Totally agree. Sounds like some envision a kind of Downton Abbey without the humans as service personnel. A footman or maid in every room or corner to handle your requests at any given moment.
No matter how useful AI is and will become - I use AI daily, it is an amazing technology - so much of the discourse is indeed a solution looking for a problem. I have colleagues asking about practically everything, "can we put an MCP in it?", and they don't even know what the point of MCP is!
It's the rat race. I gotta get my cheese, and fuck you, because you getting cheese means I go hungry. The kindergarten lesson on sharing got replaced by a lesson on intellectual property. Copyright, trademark, patents, and you.
Or we could opt out, and help everyone get ahead, on the rising tide lifts all boats theory, but from what I've seen, the trickle of trickle down economics is urine.
Yeah I think being annoyed by software is far more prevalent than wishing for more software. That said, I think there is still a lot of room for software growth as long as it's solving real problems and doesn't get in people's way. What I'm not sure about is what will the net effect of AI be overall when the dust settles.
On one hand it is very empowering to individuals, and many of those individuals will be able to achieve grander visions with less compromise and design-by-committee. On the other hand, it also enables an unprecedented level of slop that will certainly dilute the quality of software overall. What will be the dominant effect?
I sort of agree the random pontification and bad analogies aren't super useful, but I'm not sure why you would believe the intent of the AI CEOs has more bearing on outcomes than, you know, actual utility over time. I mean those guys are so far out over their skis in terms of investor expectations, it's the last opinion I would take seriously in terms of best-effort predictions.
I wouldn't be surprised if someone had AI write a bot to post a complaint on every HN thread about how the article smells like AI slop. It's so tiresome. Either the article is interesting and useful or not; I don't really care if someone used AI to write it.
FWIW: When I accuse folks of it, take it more seriously - we literally wrote a paper (ICLR 2026) on deslopping language models: https://arxiv.org/abs/2510.15061
This is absolute nonsense. The app stores are already saturated with tons of free apps that no one uses. Sure, the numbers are up—10x of infinity is still infinity—and the reason Apple doesn't care is because this is just the natural end game of their strategy to commoditize their complement.
When it comes to software subscriptions, the bar is just that much higher. Not only do you have to pass the threshold for someone to even adopt another app/website/brand, but now you have to provide enough utility to pay for it. Claude spitting out code for a good-enough clone of an app doesn't come anywhere near the threshold. An agent that can write the code, buy a domain, provision and maintain the database, and submit the app to the app store gets closer, but now it's not looking so cheap anymore, more so in terms of your time commitment as de facto product manager than actual tokens and hosting costs.
The actual disruption of SaaS apps will come from agents that are capable of solving problems autonomously in a different way such that you don't even need the SaaS. I'm sure we'll get there in time, but not without a lot of data integrity and security issues, and rogue agent fuckups along the way.
I gotta say you really nailed a solid explanation for what I felt reading the OA but would not have been able to articulate it this clearly.
As someone who personally had a history of wanting to be right, sometimes at the expense of being effective, this is a lesson worth taking to heart.
What I’ve learned is that raw engineering chops and deep end-to-end thinking are highly valued if and only if you understand where leadership is trying to go and you bring people along in your vision. If you pitch your boss and they say no, you need to take it to heart and understand why; if you plow ahead vowing to show how right you were, you force them into an awkward position where you can only lose.
A lot of replies in the thread are siding with the original author, indignant on their own terms about how they’ve been wronged by “corrupt” leaders. But this betrays a misunderstanding of how large orgs work. The nature of success is that you have to subordinate yourself to the whims of the organization, and only stick your neck out to challenge the status quo when you have sufficient air cover from someone higher up who believes in you. Corporations are often dysfunctional and anyone working within them can clearly see the flaws, but you’ve got to be clear-eyed about what influence you have, and even then, pick your battles, or the organization will reject you like an immune response.
Is it really fair to saddle the conscientious objectors with this critique? What about the people that stay and continue to profit exponentially as the negative outcomes become more and more clear? Are the anti-AI and anti-tech doomers who would never in a million years take a tech job actually more impactful in mitigating harms?
To be clear, I agree with the problem from a systemic perspective, I just don't agree with how blame/frustration is being applied to an individual in this case.
Is that the right word for it? I feel that a "conscientious objector" is a powerless person whose only means of protesting an action is to refuse to do it. This researcher, on the other hand, helped build the technology he's cautioning about and has arguably profited from it.
If this researcher really thinks that AI is the problem, I'd argue that the other point raised in the article is better: stay in the organization and be a PITA for your cause. Otherwise, for an outside observer, there's no visible difference between "I object to this technology so I'm quitting" and "I made a fortune and now I'm off to enjoy it writing poetry".
Nuremberg/just following orders might fly if we were talking about a cashier at Dollar General.
This is a genius tech bro who ignored warnings coming out of institutions and general public frustration. It would be difficult to believe they didn't have some idea of the risks, of how their reach into others' lives manipulated agency.
Ground truth is apples to oranges, but the parallels to looting riches and then fleeing Germany are hard to unsee.
Um, I guess this might be useful to some number of readers, but I don't think it's universal and I don't think it's a secret—more like it's one of a few dozen pithy focus hacks that regularly make their way through the blogosphere and social media for those interested in "productivity".
To try my hand at reductive advice, I would say this: know your strengths and what work you do has the most value. The structure exists to serve the work and not the other way around. Habits and processes can serve the work, but can quickly become a form of procrastination for certain types of personalities. Reading about productivity on the internet will not generally make you more productive. Only through honest self-reflection can you actually improve your personal productivity and impact.
True enough, though I think Tailwind suffered something of a black swan event of having lifetime pricing plus AI coding assistants hitting an inflection point that immediately and thoroughly decimated the value prop of their core monetized product.
Putting on a PM hat is something I've been doing regularly in my engineering career over the last quarter century. Even as a junior (still in college!) at my first job I was thinking about product, in no small part because there were no PMs in sight. As I grew through multiple startups and eventually bigger brand name tech companies, I realized that understanding how the details work combined with some sense of what users actually want and how they behave is a super power. With AI this skillset has never been more relevant.
I agree with your assessment of the value of good PMs. The issue, in my experience, is that only about 20% (at most) are actually good. 60% are fine and can be successful with the right Design and Engineering partners. And 20% should just be replaced by AI now so we can put the proper guardrails around their opinions and not be misled by their charisma or whatever other human traits enabled them to get hired into a job they are utterly unqualified for.