"This was such an eye-opening breakdown. Most people only see the big retail win, not the insane operational load, cashflow pressure, and forecasting stress behind it. Really appreciate how transparently you shared the real founder experience. Huge respect for the execution and the growth you achieved."
My rule on this is that you can only judge your coworkers’ productivity, never your own. People are horrible at judging their own productivity, and AI makes it really easy to just dump a bunch of work on someone else.
Personally I think it’s “None of the Above”. Frankly, I own several projects that I really wish AI would do better at maintaining or enhancing (please don’t reply with “you’re holding it wrong” messages).
At least in my org and a lot of my friends’ companies, they’ve just kind of stopped building (non-AI) features? Not completely, but like a 70% reduction. And that’s apparently fine with the market, since AI promises infinite productivity in the future. As an example, Amazon had a massive outage recently, like a world-scale, front-page-news outage. Then they followed it up with a massive layoff. In 2018 logic the markets probably would have been like “this company is fucked, people are going to start moving off AWS if they can’t promise reliability”. In 2025 logic, it barely even registers. I guess the assumption is that even with fewer employees, AWS can be even more stable in the future because of better AI. Even companies who _should_ be more concerned with reliability aren’t, because _they’re too concerned about their next big AI move_.
I guess in the end I still think it’s about AI, but more about how companies are reacting to AI than about AI replacing too many jobs.
As a long-time proponent of reasonable-ism, I disagree with a lot of this. The assumption that a lot of our problems stem from two sides just seeing an issue differently is just nonsense in this day and age.
The big problem is that one side has slid heavily into authoritarianism, and the other side is completely ill-equipped to fight it.
On any particular issue, the right will say whatever gets them more power, and the left will bring out some sort of philosophy professor to try and pick apart the nuances of the conversation.
For real. Honestly, the idea that all people are good and caring and just see things differently is the comforting lie.
I really, really want to believe it. You get to feel happy about humanity, smarter than all the hysterical people, etc.
It took so, so much evil from the Republicans to convince me that they are _not_ a reasonable side, do _not_ warrant any consideration, and that people who follow them _are_ morally corrupt.
I was thinking about this while reading the article. 15 years ago I prided myself on actively seeking out opposing views and engaging with them. I still do that on a small scale, talking with coworkers or friends who have opposing views, but where the fuck can you find any reasonable conservative commentators these days? What are the most prominent voices? People like Ben Shapiro and Steven Crowder? They don't have an intellectually honest bone in their body.
How the hell can you even get a balanced view in terms of the news/media you consume when one side is dominated by lunatics and bad actors?
I'm not familiar with Ben Shapiro or Steven Crowder, but what makes you so sure that their point of view is unreasonable? It seems you agree that their viewpoint is opposing, which likely means their premises are different, so isn't it entirely possible that their conclusion is reasonable and logical when one starts from their position?
Also, there isn't one source that can represent the "conservative" viewpoint, because there isn't one conservative viewpoint. There are many factions within the Republican party with sometimes shockingly different points of view. The same goes for the Democratic party and the "liberal" agenda.
I could just as easily ask: where is the one source I go to for an understanding of the liberal agenda? (That's a rhetorical question; I actually don't follow the news and don't plan to.)
Sure. When you start from the axiom that people unlike yourself are evil and must be harmed at all costs, you might end up near those positions. However, most people don't hold that axiom, and we shouldn't attempt to compromise with those who do, but should fence them off (the way a distributed system fences failed nodes).
Culturally we are going through a phase where thought is getting massively devalued. It’s all well and good to say “I’m using AI responsibly”, but it won’t matter if, at the end of the day, no one values your opinion over whatever ChatGPT spewed out.
> the big question is if experienced product management types can pick up enough coding technical literacy to work like this without programmers
I have a strong opinion that AI will boost the importance of people with “special knowledge” more than anyone else regardless of role. So engineers with deep knowledge of a system or PMs with deep knowledge of a domain.
A while back someone made a post or comment about how they managed to vibe code a huge PR (1,000+ lines) to an open source project. They said they didn’t have time to read through the code, but instead used tests to ensure the code was doing the right thing. Then it came out that there had been a very lengthy review period in which the maintainers went through the PR and helped fix the (rather significant) issues with it. So while the author “didn’t have time” to review their own work, the burden was shifted onto the maintainers.
I think this is why new languages so often get hyped early on. A bunch of inexperienced people get to leave all the boring “best practices” back in their old languages.
Once in a lecture one of my friends asked “You keep mentioning Cooper Netties, who is Cooper Netties?” Everyone was very confused until someone figured out she was asking about Kubernetes.