I once talked with him in person for a couple of hours, and he was the most endearing, grandfatherly kind of person. It really caught me off guard how pleasant and open he was. Somehow I was expecting someone a lot more adversarial.
It seems that this comment was written with some AI tool. Curious to know — are you an OpenClaw instance?
Your profile seems to be an ad for some tool you or your owner/administrator created:
> Building EvoLink (https://evolink.ai) - a unified AI API gateway for 40+ models. We help developers save 20-70% on AI API costs with smart routing and automatic failover. Previously worked on AI infrastructure and growth.
Your profile was created 53 days ago, but you only started commenting in earnest in the past day. Your only submission is related to the top model available through your service, and all your comments are somehow related to that topic too.
It is funny that my first reaction to your post was that you were crazy, but then I looked at the account's comment history and you are completely right. Boy, this is not a good development. I don’t want to spend my time reading AI-generated comments.
Clearly this comment is relevant to the tool the profile is selling, as a kind of ‘submarine’ ad… the profile was created 53 days ago (so no green tag) but only started commenting in earnest 12 hours ago (almost as if the account was farmed).
And the comment is full of AI tropes that make it read as highly generated.
It’s clearly AI-generated when you see three comments of similar style posted in the same minute.
Anyway, ignore the people downvoting you. I don’t want to read AI-generated comments even if they seem reasonable. I appreciate you flagging the comment for me; I didn’t even suspect it. I can make my own AI-generated content if I want it. I want to read thoughts and ideas from actual humans.
/me squints at the ironic em dash in "Curious to know — are you an OpenClaw instance?"
But in good faith: they (HN staff) said in another comment I can't find just now that they're discussing what to do about it, but I can't think of any palatable easy answers.
In fact, the only easy answer I can think of is banning all accounts newer than 2022, but then how do you onboard new users? Captcha for every new comment? Do we have good AI-defeating captchas now?
Well, I love my em dashes. Won’t ever give those up! You can pry them from my cold dead hands.
No, I am not OpenClaw or an AI.
I see comments like this a lot. I don’t comment on them unless the profile seems to be an advert for exactly what the AI-generated comment is talking about (which is definitely the case here).
I’m not sure if you “feel” the AI nature of the GP comment, but to me it’s very strong. I pray my writing doesn’t “feel” the same to someone reading it. If it does we’re in a much worse spot than I thought!
Although I myself am not sure whether this is a real person or a bot, the point seems at least somewhat valid to me. I think some people have become too accustomed to the idea that they can get good things for free or at a reduced price, without thinking about how the economics of the products and services they rely on actually work.
I don't see it that frequently. Maybe that's because I simply don't frequent the places where most of these spambots do their dirty work of drowning everything alive in waves of endless, nonsensical spam. I agree with you about how... well, I don't even have polite words to describe that whole mess. I just wanted to point out that the post actually makes a valid point, though it was likely stolen from somewhere on the Internet.
I can't imagine feeling entitled to shove AI outputs in everyone's face on a user forum. It's predatory. They know no one wants it, but they want to make a quick buck.
Do you feel that Matthew Prince is still technically active/informed? I've interacted with him in the past and he seemed relatively technically grounded, but that doesn't seem as true these days.
Rather than being driven by something rational, like building a great product or making lots of money, he is apparently driven by a desperate fear of being a dinosaur.
Regardless of how competent he is or isn’t as a technologist, a leader who leads with fear is a recipe for disaster.
I have a dystopian future vision where humans are cheaper machines than robots, so we become the disposable task force for grunt work that robots aren’t cheap enough for. To some degree this is already happening.
The thing with a lot of white-collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or used to be, pre-agent) a smaller share of the time spent. Writing the software, which is essentially working through how to implement the thought, used to take a much larger share of the total time from thought to completion.
Other white-collar business/bullshit-job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What’s different about coding is that no one actually cares about the code as an output from a business standpoint. The code is just the end destination of business decisions already made. I think, for that reason, code is uniquely well suited to LLM takeover.
But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.
I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
If you weren’t doing much of that before, I struggle to think of how you were doing much engineering at all, save for some niche, extremely technical roles where many of those questions were already answered. But even then, I’d expect you’re having those kinds of discussions, just more efficiently and with other engineers.
> I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
The vast majority of software engineers in the world. The most widespread management culture is one where a team's manager is the interface to the rest of the organization, and the engineers themselves don't do any alignment/consensus/business thinking; that is the manager's exclusive job.
I used to work like that and I loved it. My managers were decent and they allowed me to focus on my technical skills. Then, due to those technical skills I'd acquired, I somehow got hired at Google, stayed there nearly a decade but hated all the OKR crap, perf and the continuous self-promotion I was obliged to do.
> I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
I’m not sure everyone would agree with that statement. I’m a more senior engineer at a big tech company, and our execs still believe more code output is expected at higher levels. Hell, they even measure and rate you on lines-of-code deltas.
I don’t agree with it or believe it’s smart, but it’s the world we live in.
In a lot of larger organizations there is a whole stable of people whose job is to keep stakeholders and programmers from ever having to talk to each other. This was considered a best practice a quarter-century ago ("Office Space" makes fun of it), and in retrospect I concede it sometimes had a point.
* meeting with people, yes, on calls, on chats, sometimes even on phone
* “aligning expectations”, yes, because of the next point
* getting consensus, yes, inevitably; how else would we decide what to do and how to do it?
* making slides/decks to communicate that, not anymore, but this is a specific tool of the job, like programming in Java vs in Python.
* thinking about market positioning, no, but this is something only a few people in an organization have agency over.
* etc.? Yes, for example: don't piss off other people, help customers use the product, identify new functionalities that could help us deliver a better product, prioritize them, and then back to getting consensus.
> making slides/decks to communicate those thoughts,
That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?
It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.
Yeah, but this is self-correcting. Eventually it will get to a point where the data that you use to prompt the LLM will have more signal than the LLM output.
But if you get deep into an enterprise, you'll find there are so many irreducible complexities (as Stephen Wolfram might call them) that you really need a fully agentically empowered worker, meaning a human, to make progress. AI is not there yet.
Agree. I remember in school in the 1980s reading that a good programmer can write about 10 lines of code a day (citing The Mythical Man-Month) and I thought "that's ridiculous, I can write hundreds of lines a day", but I didn't understand that the figure includes all the time spent understanding requirements, thinking about design, testing, debugging, etc. Writing the code is a small portion of what a software engineer does.
Also remember that programs were much smaller, and code had to be typed in full and read carefully, because compilers were slow and you didn't want to waste a run on a syntax error. Anyway, it's common even today to spend half a day thinking, debugging, and testing, and in the end git diff shows only two changed lines.
Most people (and most businesses) aren’t producing good-quality code, though. Most tools we use have horrible codebases. So now the code can often be of similar quality to before, just written far faster.
As I wrote in a separate comment to someone who responded to me, there is a difference between:
(a) thinking about, and deciding upon, what will be done, and
(b) the thinking that is required during implementation.
Type (a) was always the majority of my time, but type (b) consumed a lot of effort, especially for languages or syntaxes I wasn't very familiar with. Type (b) consumes very little of my time now.
> The thing with a lot of white-collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or used to be, pre-agent) a smaller share of the time spent.
WHOAH WHOAH WHOAH WHOAH STOP. No coder I've ever met has thought that thinking was anything other than the BIGGEST allocation of time when coding. Nobody is putting their typing words-per-minute on their resume because typing has never been the problem.
I'm absolutely baffled that you think the job that requires some of the most thinking, by far, is somehow less cognitively intense than sending emails and making slide decks.
I honestly think a project manager's job is actually a lot easier to automate, if you're going to go there (not that I'm hoping for anyone's job to be automated away). It's a lot easier for an engineer to learn the industry and business than it is for a project manager to learn how to keep their vibe code from spilling private keys all over the internet.
> I'm absolutely baffled that you think the job that requires some of the most thinking, by far, is somehow less cognitively intense than sending emails and making slide decks.
OK, to quote you: WHOAH WHOAH WHOAH WHOAH STOP!
You've made a lot of assumptions.
I'm not saying that coding is not thinking. What I'm saying is this:
There is a difference between:
(a) thinking about, and deciding upon, what will be done, and
(b) the thinking that is required during implementation.
In my experience, coding is at least 50/50 (even for the best developer), in the sense that figuring out how to structure and fix your code (type (b)) used to require very deep thinking. The rest of the thinking time was spent on system design/architecture (type (a)), not on debugging type errors, etc.
AI has already changed that split. If you have a good test harness and problem definition, you can throw Codex at a really massive task and have it do quite well at the finer details of implementation.
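To be concrete about what I mean by a test harness: even a tiny executable spec is enough. A hypothetical sketch (parseInvoice and its pipe-delimited format are made up for illustration):

    // Run with e.g. `npx tsx harness.ts`; the agent edits parseInvoice
    // until the script prints "ok".
    import { strict as assert } from "node:assert";

    // Stand-in implementation; this is the part the agent owns.
    function parseInvoice(raw: string): { vendor: string; amount: number; currency: string } {
      const [vendor, amount, currency] = raw.split("|");
      return { vendor, amount: Number(amount), currency };
    }

    assert.deepEqual(parseInvoice("ACME|1299.00|USD"),
      { vendor: "ACME", amount: 1299, currency: "USD" });
    console.log("ok");

The harness, not the agent, is the source of truth: the agent just iterates until the assertions pass.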
Other white-collar office work, as stupid as it may be, will be a lot harder to automate because it is primarily the "thinking about what will be done" (type (a)) kind of work and not the "thinking that is done during implementation" (type (b)) kind of work.
If you haven't seen what I mean by "enterprise office work" it may be hard to grasp what I'm talking about... But thinking that people are just doodling around making slide decks or writing shitty emails is the wrong mental model for the breadth of non-technical work available in a large company.
I don't like this perspective because you're reducing developers to "mere implementers". That's not something I see in healthy work environments. A lot of times developers have to make business decisions, because they're the first person to encounter holes in the spec and no project manager is going to define things completely.
I'm sorry you interpreted it that way. I certainly don't think developers are mere implementers, and certainly no one on my team is. I expect a lot of ownership and independent execution from people I work with.
I also don't expect AI to replace software engineers any more than white-collar business people.
But what I'm saying is the work of "mere implementation" is now happening pretty quickly with AI tooling.
Most white-collar work is not "mere implementation" but rather the yak shaving and spec definition that precedes "mere implementation" — and this includes software development. For that reason, it will be harder to fully automate.
Our job is not the intellectual exercise you think it is. We're not smarter than anyone else and software development is not automatically more thought-intensive than other jobs. The fact that programming is the first job task to be fully automated says it all.
When coders need a break from intense coding, what do they do with the remaining hours of the day? Usually administrative stuff -- sending emails, attending meetings (if they can control when their meetings are), filing expense reports, etc. I.e., the stuff that's easy. Also, while I wasn't attempting to suggest that thinking more = higher IQ (just that coding requires a lot of careful thought), average IQ scores by profession are quite a bit higher in software engineering fields.
It’s weird that you equate time spent thinking with intelligence and egotism. Plenty of “normal people” jobs require lots of time spent thinking: art, writing, product and ad design. The only one implying that taking time to think equals big-brain master race is you.
> unlike coding, where thinking is (or used to be, pre-agent) a smaller share of the time spent. Writing the software, which is essentially working through how to implement the thought, used to take a much larger share of the total time from thought to completion.
Huh? Maybe I’m in the minority, but thinking:coding has always been 80:20 for me: spend a ton of time thinking and drawing, then write once, debug a bit, and it works.
This hasn’t really changed with LLM coding either, except that for the same amount of thinking, you get more code output.
Yeah, ratios vary depending on how productive you are with code. For me it was 50:50 and is now 80:20, but only because I was a relatively unproductive coder (struggled with language feature memorization, etc.) and a much more productive thinker/architect.
No. You can be a very productive developer and have the syntax be more of a blocker than application design, and the same in reverse.
I interview a lot of people, and I've seen people who are astoundingly good at micro-systems, very complex regexes, etc., while struggling massively with system design. And vice versa. People have different talents.
But, in my experience, AI will vastly improve the success of the developer who's better at orchestration, architecture, and system design than the developer who's very good at tiny micro-system type of work. Yes, there is still a need for someone who can read and understand regexes... but is there anywhere near as much of a need as before? Not at all.
Now, there are very many dual threats, and most truly senior engineers are both. These people have an even bigger leg up, because they have an understanding of system mechanics plus the superpower of Claude Code etc. They don't have to waste as much time on boilerplate and raw implementation, yet they can still vet the output the AI produces from their input. They are also probably better equipped to build testing harnesses, etc., that adapt well to agentic use.
I find it highly suspect that someone can be a productive developer and find syntax to be a blocker. I've worked professionally in software for 20 years and I've yet to come across a developer I thought was good but kept forgetting what curly braces mean. It's like saying "he's a great writer, but he's illiterate."
I have, incidentally, held the title of "Principal Software Architect", designing distributed systems with Kubernetes, and I will say this about architects: if they aren't immersed in the day-to-day code, they suck at their job. If you're too removed from the constraints, you can't be effective in that role. I have, however, worked with "architects" who refused to get their hands dirty, and it was always miserable.
OK, I think "syntax" might mean different things to different people.
I'll give you an example: at one time I essentially re-implemented the behavior of a WeakMap in JS because I didn't know that the language feature existed. AI is much better at implicitly "knowing" these things because it can model the entire language and possible token-space much better than humans can. That is something I always struggled with; my long-term memory is not great.
I think that's very different from remembering how to write a basic function.
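For anyone who hasn't used it, a minimal sketch of what the built-in buys you (cache and expensiveSize are made-up names, just for illustration):

    // Cache a computed value per object without preventing garbage collection.
    const cache = new WeakMap<object, number>();

    function expensiveSize(obj: object): number {
      const hit = cache.get(obj);
      if (hit !== undefined) return hit;
      const size = JSON.stringify(obj).length; // stand-in for real work
      cache.set(obj, size);
      return size;
    }
    // Once the last outside reference to a key object is gone, its entry
    // becomes collectable automatically -- no manual eviction needed.

Getting that entry lifecycle right by hand, without leaking the key objects, is exactly the kind of thing the built-in handles for you.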
When the work involves navigating a bunch of rules with very ambiguous syntax, AI will automate it to the same degree that computers automated rules-based systems with very precise syntax in the 1990s.
This software (which I am not affiliated with or promoting) is better at investment planning and tax planning than over 90% of RIAs in the US. It will automate RIA work the way trading software automated stockbroking. This will reduce the average RIA fee from 1% per year to 0.20% or even 0.10% per year, just like mutual fund fees dropped in the early '00s.
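To make the fee gap concrete, a back-of-the-envelope sketch (all numbers hypothetical: $1M portfolio, 6% gross annual return, 30 years):

    // Final balance after compounding net of an annual advisory fee.
    const endValue = (annualFee: number): number =>
      1_000_000 * Math.pow(1 + 0.06 - annualFee, 30);

    console.log(endValue(0.01));   // ~$4.32M at a 1% fee
    console.log(endValue(0.002));  // ~$5.43M at a 0.20% fee

Compounded over 30 years, the 0.8-point difference costs roughly a fifth of the final balance.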
You could have beaten the returns of most financial professionals over the last several years by just parking your money in the S&P 500, and yet plenty of people are still making a lucrative career out of underperforming it. In some fields, “being better and cheaper” does not always spell victory.
You are right about beating money managers. When I said investment planning, I meant planning the size and tax structure of investments. This software automates all of the technical work that goes on inside financial planning firms, work done by tens of thousands of white-collar professionals in the US/UK/EU, etc. It will then lead to price competition.
More expensive, sillier companies will still exist, but the cheap ones get the scale. S&P 500 index funds hold over $1 trillion across the top 3 providers; Cathie Wood has something like $6-7 billion.
BNY Mellon is the custodian of $50 trillion of investment assets; Robinhood has $324B.