
> Engineers don't try because they think they can't.

This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.

There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.

I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.

So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.



Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).

A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.

New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.


> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.

"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.

One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.


> the other engineers have lost all ambition for anything else

Worse, they've lost all funding for anything else.


Industries are built upon shit people built in their basements, get hacking


I think it should be noted that a garage or basement in California costs like a million dollars.


That was true before Crypto and AI.


Yes, it just puts the whole "I started Apple in my garage"-style narrative into context.


I am! No one's interested in any of it though...


You need to buy fake stars on github, fake-download it 2 million times a day, and ask an AI to spam about it on twitter/linkedin.


ZIRP is gone, and so are the Good Times when any idiot could get money with nothing but a PowerPoint slide deck and some charisma.

That doesn't mean investors have gotten smarter, they've just become more risk averse. Now, unless there's already a bandwagon in motion, it's hard as hell to get funded (compared to before at least).


Are you sure it refers to that? Why would it later say:

> now believes she's both unqualified for AI work

Why would she believe she's unqualified for AI work if "Engineers don't try" wasn't about her trying to adopt AI?


“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”


To add another layer to it, the reason execs are foaming at the mouth is that they are hoping to fire as many people as possible. Including those who implemented whatever AI solution in the first place.


The most ironic part is that AI skills won't really help you with job security.

You touched on some of the reasons; it doesn't take much skill to call an API, the technology is in a period of rapid evolution, etc.

And now with almost every company trying to adopt "AI" there is no shortage of people who can put AI experience on their resume and make a genuine case for it.


Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non/less technical mgrs, PMs, etc who now think they have this Uno card to bypass technical discussion just because they vibe coded some UI demo. Like no shit, that wasn't the hard part. But since they don't see the real/less visible parts like data/auth/security, etc they act like engineers "aren't trying", are less innovative, anti-AI or whatever when you bring up objections to the "whole app" they made with their AI snoopy snow cone machine.


My experience too. They are so convinced that AI is magical that pushing back makes you look bad.

Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.

It's just boring.


Hmm, (whatever is in execs' heads about) AI appears to amplify the same kind of thinking fallacies that are discussed in the eternal Mythical Man-Month essay, which was written like half a century ago. Funny how some things don't change much...


It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done

But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem


> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.


Yes, much the way some used to (still do?) try and emulate Steve Jobs. There's always some successful person these types are trying to be.


This isn’t “unfair”, but you are intentionally underselling it.

If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.

Edit: lol this forum :)


> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right

I AM very impressed, and I DO use it and enjoy the results.

The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.

Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.

But I'm already done getting used to what is possible now. Changes since then have been incremental, nice to have, and I take them. I've found a place for the tool, but to match the hype another equally large step in actual intelligence would be necessary, for the tool to truly be able to replace humans.

So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.

I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.

I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.

I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.


The big problem is AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues it will be hopeless. Once your system gets complex enough the AI effectiveness drops off rapidly and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.

In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.

I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.


I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we’ve never seen it attributed):

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.

So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.


The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.


Right— that’s bitten me ‘whipping up’ prototypes. My assumption about the way the LLM would handle some minutiae ends up being wrong, and finding out why something isn’t working ends up taking more time than doing it right the first time by hand. The worst part about that is you can’t even factor it in to your already inaccurate work time estimates because it could strike anywhere — including things you’d never mess up yourself.


The more I use AI for coding, the more I realize that it's a toy for vibe coding/fun projects. It's not for serious work.

When you work with a large codebase that has a very high complexity level, the bugs put in there by AI aren't worth the cost savings of the easily added features.


Many people also program and have no idea what a giant codebase looks like.

I know I don't. I have never been paid to write anything beyond a short script.

I actually can't even picture what a professional software engineer actually works on day to day.

From my perspective, it is completely mind blowing to write my own audio synth in python with Librosa. A library I didn't know existed before LLMs and now I have a full blown audio mangling tool that I would have never been able to figure out on my own.

It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.


I work on a stock market trading system in a big bank, in Hong Kong.

The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in Javascript in infinite migration.

Most of the code was written before 2008, but that was the cvs to svn switch so we lost history before that. We have employees dating back to 1997 who remember the platform already existing.

It's made of millions of lines of code, hundreds of people worked on it, it does intricate things in 10 stock markets across Asia (we have no clue how the others in US or EU do, not really at least - it's not the same rules, market vendors, protocols etc)

Sometimes I need to configure new trading robots for random little things we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I had to write a review for my manager, one that was absolutely damning, which they will never pass up the ladder for fear of the response. It cannot understand the code let alone write some, it cannot write the tests, it cannot generate configuration, it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are and how they connect to each other, why it's in so many languages, why it's so quirky sometimes.

Should we change it all to make it AI compatible, or give up? Fuck do I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really really really really large, so you have to embrace certain ways of working (for instance, you'll write bugs, and you'll find them too late, and you'll apologize in post mortems, don't be paralyzed by it). AIs that cost all that money and are still so dumb and useless are disappointing :(


There’s a reason why it’s so much better at writing JavaScript than HFT C++.

The latter codebase doesn’t tend to be in github repos as much.


> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Or your job isn't what AI is good at?

AI seems really good at greenfield projects in well known languages or adding features.

It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.


> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

This is precisely my experience.

Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.

Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.


> It's been pretty awful, IME, at working with less well-known languages

Well, there’s your problem. You should have selected React while you had the chance.


This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?


> If you haven’t had a mind blown moment with AI yet...

Results are stochastic. Some people the first time they use it will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and will get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.


Your whole comment reads like someone who is a victim of hype.

LLMs are great in their own way, but they're not a panacea.

You may recall that magic is a way to trick people into believing things that are not true. The mythical form of magic doesn't exist.


I wonder if this issue isn't caused by people who aren't programmers, and now they can churn out AI generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.


AI is better than you at what you aren’t very good at. But once you are even mediocre at doing something you realize AI is wrong / pretty bad at doing most things and every once in awhile makes a baffling mistake.

There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.


>AI is better than you at what you aren’t very good at.

Yes, this is better phrased.


> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.

> Edit: lol this forum :)

Indeed.


I’ve been an engineer for 20 years, for myself, small companies, and big tech, and now working for my own saas company.

There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.


Punishment eh? Serves them right for being skeptical.

I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new you should have been there in the 80's (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the 60's (which I did not personally witness on account of being a toddler), when COBOL, the language for managers, was all the rage.

In between there was LISP, the AI language (and a couple of others).

I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive and they might be a little bit more productive, but the AI has its own envelope of expertise and the closer you are to the top of the field the smaller your returns in that particular setting will be.

In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.

Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than wikipedia for a quick overview of some subfield (but you better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI-driven coding projects get hopelessly stranded by now to know that there are downsides to that golden acorn that you are seeing.

The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.


I'm not a big llm booster, but I will say that they're really good for proof of concepts, for turning detailed pseudocode into code, sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuch), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.

And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.


> but it does quite well with boring stuff that's still a substantial amount of programming.

I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do, I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.


I've used it a little for openscad with mixed results - sometimes it worked. But I'm a beginner at openscad and suspect if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write in openscad. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([5, 3, 2]) ... and as you say, the hard part is before the openscad anyway.


OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.

Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices the edges take care of themselves.
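To make that concrete, here is a minimal sketch of the idea in plain OpenSCAD (dimensions and the variable names are invented for illustration): instead of chamfering a cube after the fact, place a small sphere at each corner vertex and wrap them in a hull(), and the rounded edges fall out for free.

    // Rounded block built from its corner vertices: a small sphere at each
    // corner, then hull() wraps them and the edges take care of themselves.
    r = 2;                    // corner radius, doubles as the edge rounding
    size = [50, 30, 20];      // outer dimensions of the finished block

    hull() {
        for (x = [r, size[0] - r],
             y = [r, size[1] - r],
             z = [r, size[2] - r])
            translate([x, y, z])
                sphere(r = r, $fn = 32);
    }

Changing r or size reshapes the whole part in one go, which is part of why this style feels like exploratory programming rather than step-by-step modelling.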

This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs using a traditional step-by-step UI driven cad tool it is incomparable. It's like exploratory programming, but for physical objects.


> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?

There's certainly potential but a lot of the market is hot air right now.

> Either way, the market is going to punish them accordingly.

I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.


IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.

The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.


> If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.

That's a very interesting observation. I think I'm safe for now ;)


> it can’t do what they do

That's asking the wrong question, and I suspect coming from a place of defensiveness, looking to justify one's own existence. "Is there anything I can do that the machine can't?" is the wrong question. "How can I do more with the machine's help?" is the right one.


What's "there" though is that despite being wrappers of chat gpt, the product itself is so compelling that it's essentially got a grip on the entire american economy. That's why everyone's crabs in a bucket about it, there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.


>there's something real that everyone wants to hitch on to.

Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.

Granted, I think its hastiness will lead to a crash, so the CEO's played themselves short term.


Sure, but under it all there's something of value... that's why it's a much larger hype wave than dick pills


> simply because the market has never really punished people for being less efficient at their jobs

In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.

Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).

In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.


This is very true. So you can't just ask people to use AI and expect better output even if AI is all the hype. The bottlenecks are not how many lines of code you can produce in a typical big team/company.

I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient: for them, the sheer generation of sometimes boilerplate, low quality code actually is a bottleneck.


Sadly capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices are set more as a monopoly in these markets). It's just that scarcity was in the past correlated with difficulty, which made it "somewhat fair" -> AI changes that.

It's why unions, associations, professional bodies, etc exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.


I think part of this is that there is no one AI and there is no one point in time.

The other day Claude Code correctly debugged an issue for me, that was seen in production, in a large product. It found a bug a human wrote, a human reviewed, and fixed it. For those interested the bug had to do with chunk decoding, the author incorrectly re-initialized the decoder in the loop for every chunk. So single chunk - works. >1 chunk fails.
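To illustrate the shape of that bug, here is a hypothetical Python/UTF-8 reconstruction (not the actual code, just the pattern described): a streaming decoder carries state between chunks, so a decoder that is re-created inside the loop only behaves correctly when the whole payload arrives as a single chunk.

    import codecs

    def decode_chunks_buggy(chunks):
        # Bug: a fresh decoder per chunk throws away any partial multi-byte
        # sequence buffered from the previous chunk.
        out = []
        for chunk in chunks:
            decoder = codecs.getincrementaldecoder("utf-8")()  # re-initialized every iteration
            out.append(decoder.decode(chunk))
        return "".join(out)

    def decode_chunks_fixed(chunks):
        # Fix: one decoder for the whole stream, so state survives chunk boundaries.
        decoder = codecs.getincrementaldecoder("utf-8")()
        out = [decoder.decode(chunk) for chunk in chunks]
        out.append(decoder.decode(b"", final=True))
        return "".join(out)

    data = "héllo wörld".encode("utf-8")
    print(decode_chunks_buggy([data]) == decode_chunks_fixed([data]))  # True: single chunk works
    split = [data[:2], data[2:]]  # chunk boundary falls inside the two-byte "é"
    print(decode_chunks_fixed(split))  # héllo wörld
    try:
        decode_chunks_buggy(split)
    except UnicodeDecodeError as exc:
        print("buggy decoder fails on >1 chunk:", exc)

The single-chunk path masks the bug entirely, which is presumably how it survived human review in the first place.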

I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on. They also were not familiar with the specific code. But once Claude pointed this out that became pretty obvious and Claude rewrote the code correctly.

So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.

It is automation on steroids with near-human (let's say intern) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.


>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

If the stories were more like this where AI was an aid (AKA a fancy auto complete), devs would probably be much more optimistic. I'd love more debugging tools.

Unfortunately, the lesson an executive here would see is "wow AI is great! fire those engineers who didn't figure it out". Then it creeps to "okay, have AI make a better version of this chunk decoder". Which is wrong on multiple levels. Can you imagine if the result of using Intellisense for the first time was to slash your office in half? I'd hate autocomplete too.


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.

And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.


Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.


What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".


It is however much less of a shallow dismissal of a tool than your shallow dismissal of a person, or in fact a large group of persons.


Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.


>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems

This feels like a mentality of "a solution trying to find a problem". There's enough actual problems to solve that I don't need to create more.

But sure, the extension of this is "Then they go home and research more usages, see a kerfuffle of legal, community, and environmental concerns, and decide not to get involved in the politics."

>Either way, the market is going to punish them accordingly.

If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.


It really depends on what you’re doing. AI models are great at kind of junior programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much sql, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”

AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.

I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.


> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction

Or, and stay with me on this, it’s a reaction to the actual experience they had.

I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.

Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Haskell, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Emacs, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries FreeBSD, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

We're getting paid to solve the problem, not to play with the shiniest newest tools. If it gets the job done, it gets the job done.


> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

I have solved more problems with tools like sed and awk, you know, actual tools, than I've entered tokens into an LLM.

Nobody seemed to give a fuck as long as the problem was solved.

This is getting out of hand.


Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.


> A whole new class of problems just became solvable.

This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.

If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.

So what do you really mean when you say that a new class of problems became solvable?


But sed and awk are problems.


I would've thought that in 20 years you would have met other devs who do not think like you?

something I enjoy about our line of work is there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.

anyway, I know a few great engineers who shrug at the agents. I think different types of thinker find engagement with these complex tools to be a very different experience. these tools suit some but not all and that's ok


This is the correct viewpoint (in my opinion, of course). There are many ways that lead to a solution, some are better, some are worse, some are faster, some much slower. Different tools and different strokes for different folks and if it works for you then more power to you. That doesn't mean you get to discard everybody for whom it does not work in exactly the same way.

I think a big mistake junior managers make is that they think that their nominal subordinates should solve problems the way that they would solve them, without recognizing that there are multiple valid paths and that it doesn't so much matter which path is chosen as long as the problem is solved on time and within the allocated budget.


I use AI all the time, but the only gain it gives me is better spelling and grammar than mine. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.


My dismissal I think indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift that is the LLM influencers.

Writing code via a LLM feels like writing with a wet noodle. It’s much faster and write what I mean, myself, with the terse was and precision of my own thought.


> with the terse was and precision of my own thought

Hehe. So much for precision ;)


Autocorrected “terse-ness”


Autocorrect is my nemesis. And I suspect it has teamed up with email address completion.


I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)

I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)

FTR, the market is currently punishing people who DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI)


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,

I don't understand why people seem so impatient about AI adoption.

AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.


> I don't understand why people seem so impatient about AI adoption.

We're burning absurd, genuinely farcical amounts of money on these tools now, so of course they're impatient. There's Trillions (with a "T") riding on this massive hypewave, and the VCs and their ilk are getting nervous because they see people are waking up to the reality that it's at best a kinda useful tool in some situations and not the new God that we were promised that can do literally everything ever.


Well that's capital's problem. Don't make it mine!


Well said!


In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get past the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated on it.

This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.


I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed into anywhere it would fit.


You know what, this clarifies something for me.

PC, Web and Smartphone hype was based on "we can now do [thing] never done before".

This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.


>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.

The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.

Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.

The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.


Yes and no. PC, Web, etc advancements were also about lowering cost. It's not that no one could do the thing, it's that it was too expensive for most people, e.g. having a mobile phone in the 80's.

Or hiring a mathematician to calculate what is now done in a spreadsheet.


100%.

"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.


or 'interactive' or 'cloud' (early 2010s).


Same, doesn't make this hype phase more bearable though.


> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product

I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, and marketing, religion and political persuasion too. In the real-world plane, it is all about tangible outcomes, and working code or results play here, and gravity and electromagnetism too. Sometimes there is a reflex loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.


>a broader dichotomy between the people-persuasion plane and the real-world-facts plane

This right here is the real thing which AI is deployed to upset.

The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.

The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.

My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.


> people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

likely not. Our natural state tuned by evolution is one of an emotional creature persuaded by pleasing rhetoric - like a bird which responds to another bird's call.


What's irrational about a bird responding to another bird's call, though?

I always figured, unlike human speech, bird song contained only truth - 100% real-time factual representation of reproductive fitness/compatibility, 0% fractal bullshitting (such as arguing about definitions of abstract notions, or endless rumination and reflection, or command hierarchies built to leak, or...).

Although who knows, really! I'm just guessing here. Maybe what we oughtta do is ask some actual ornithologists to ask an actual parrot to translate for us the songs of its distant relatives. Sounds crazy enough to work -- though probably not in captivity.

Overall I see your point, and I see many people sharing that perspective; personally, I find it rather disheartening. Tbh I'm not even sure what would be a convincing argument one way or the other.


This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation


Probably what it is, yeah. It's in the water.


I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.


I've never worked at Microsoft. However, I do have some experience with the company.

I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.

And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.

But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.

And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.

[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.


I do assume that, I legitimately think it's the most important thing happening in the next decade in tech. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.) and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.


> But moving toward one pole moves you away from the other.

My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes on placing those bets and you don't necessarily have to be right or wrong in an absolute sense, just long enough that someone else will take over your load and hopefully at a higher valuation.

Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.


This somewhat reflects my sentiment about this article. It felt very condescending. The "self-limiting beliefs" and the implication that Seattle engineers are less than San Francisco engineers because they haven't bought into AI...well, neither have all the SF engineers.

One interesting takeaway from the article and the discussion is that there seem to be two kinds of engineers: those who buy into the hype and call it "AI," and those who see it for the fancy search engine it is and call it an "LLM." I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say "LLM," ok, let's have a discussion.


> often companies with real products will mix in tidbits of hype

The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.


> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.


>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

Spot. Fucking. On.

Thank you.


The list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators" and displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc). On top of that, the way corporate America is absolutely doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, is a huge problem that hopefully will shake out some in the coming years.

But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.


> There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.

I've tried implementing features with Claude Code Max and if I had let that go on for a week instead of just a couple of days I would've lost a week's worth of work (it was pretty immediately obvious that it was too slow at doing pretty much everything, and even the slightest interaction with the LLM caused very long round-trips that would add additional time, over and over and over again). It's possible people simply don't do the kind of things I do. On the extreme end of that, had I spent my days making CRUD apps I probably would've thought it was magic and a "game changer"... But I don't.

I actually don't have a problem believing that there are people who basically only need to write 25% of their code now; if all you're doing for work is gluing together libraries and writing boilerplate then of course an LLM is going to help with that, you're probably the 1000th person that day to ask for the same thing.

The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.

P.S.:

I have found that very short completions, 1-3 lines, are a lot more productive for me personally than any kind of "generate this feature", or even function-sized generation. The reason is likely that LLMs just suck at the things I do, but they can figure out that a pattern exists in the pretty immediate context and just spit out that pattern with some context clues nearby. That remains my best experience with any and all LLM-assisted coding. I don't use it often because we don't allow LLMs for work, but I have a keybind for querying for a completion when I do side projects.


My current job/role combination has me working on a variety of projects which feature tasks to be done in: Python/SQLAlchemy (which I maintain), Go, k8s, Ansible, Bash, Groovy, Java, Typescript, javascript, etc. If I'm doing an architecture-intensive thing in SQLAlchemy, obviously I'm not going to say "Claude, here, go do this feature for me". I will have it do things like write change notes (where I'll write out the changelog in the convoluted and overly technical way I can do in 10 seconds, and it produces something presentable and readable from it), set up test cases, and sometimes I will give it very specific instructions for a large refactoring that has a predictable pattern (basically, instead of me figuring out a complex search and replace or doing it manually). For the stuff I do in Ansible and especially Groovy (a horrible language which heavily resists being lintable), which is mostly very simple declarative playbooks or Jenkins pipeline jobs, I use Claude heavily to write out directives and such, because it will do so without syntax errors and without me having to google every individual pattern or directive; it's much easier to check what it writes and debug from there. But I'm also not putting Claude in charge in these places; it's doing the boring stuff for me and doing it a lot faster and without my having to spend cognitive overhead (which is at a premium when you're in your late 50s like me).

> The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.

See, there you go: I have things like this to figure out many times per week, and so many of them are one-off things I really don't need to learn deeply at the moment (like TypeScript). It's also very helpful for bouncing ideas around: when I need to achieve something in the Go/k8s realm, it can sanity-check how I'm approaching a problem and often suggest other ways I would not have considered (which it knows because it's been trained on millions of tech blogs).


> the list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow.

My company is basically writing blank cheques for "AI" (aka LLM; I hate the way we've poisoned AI as a term) tooling, so that people can use any and all tooling they want and see what works and what doesn't. This is a company with ~1500 engineers, ranging from hardware engineers building POS devices to junior frontenders building out our simplest UIs. There are also a whole lot more people who aren't technical, and they're encouraged to use any and all AI tooling they can too.

Despite the entire company trying to figure out how to use these tools effectively, precisely because we want to look at things objectively and separate the hype from the reality, the only people I've seen offer any kind of praise so far (and this has been going on since the early ChatGPT days) are in Marketing and Sales, because for them it doesn't matter if the AI hallucinates pure bullshit; that's 90% of their job anyway.

We have spent god knows how much time and how many resources trying to get these tools to do anything more useful than simple demos that get thrown out immediately, and it's just not there. No one is pushing 100x the code or features they were before, projects aren't finishing any faster, and nobody even bothers turning on the meeting transcription tools anymore, because more often than not they'll interpret things said in the meeting just plain wrong or make up entire discussion points that never happened.

Just recently, as in last week, some idiotic PR review bot from CodeRabbit or some other such company got activated. I've never seen so many people complain all at once on Slack: there was a thread with hundreds of individuals saying how garbage it was and how much it was distracting from reviews. Not a single person said they liked the tool; not one person had anything good to say about it.

So as far as I'm concerned, it's just a MASSIVE fucking hype bubble that will ultimately spawn some tooling that is sorta useful for generating unimportant scripts, but little else.


Never give an LLM to your junior engineers. The LLM itself is mostly like a junior engineer and will make a complete mess of things if not guided by someone with a lot of experience.

Basically, if people are producing code or documentation that looks like an LLM wrote it, that's not really the model of use that I see making these tools useful.


The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It's sad; I always thought of my fellow engineers as more open-minded.


> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.

So, people with experience?


Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

In hindsight it makes sense, I’m sure every major shift has played out the same way.


> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.


I'm not assuming anything, I'm relying on my own experience of being an engineer for two decades and building stuff for all kinds of organizations in all kinds of stacks and languages. AI has radically increased my velocity and quality, though it's got a steep learning curve of its own, and many frustrations to deal with. But it's pretty obviously a paradigm shift, and not "trendy stuff which will never deliver on its promises". Even if the current LLMs never improve at all from here, they're still incredibly useful tools.


I've been programming for more than 40 years.


Do you mean people who have been through several hype cycles, know their nature, and know how novel tech, no matter how useful, takes time to be integrated and understood?

Get over yourself, and try to tone down the bigotry and stereotyping.


Bitcoin is at $93k, so I don't think it's entirely accurate to say blockchain is insubstantive or without value.


There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash; that doesn't mean dogshit is particularly valuable or substantive either.


I'd argue even dogshit has more practical use than Bitcoin would if no one paid money for it: you can throw it in self-defence, compost it (under high heat to kill the germs), or put it on your property to scare away raccoons (it works sometimes).


Bitcoin and other crypto coins have a practical use: you can use them to buy whatever is being sold on the dark web, with the main product categories being drugs and guns. I honestly believe much of the valuation of crypto is tied to these marketplaces.


Don't forget scamming people out of billions of dollars of their hard earned life savings.


And by "dog feces," I assume you mean fiat currency, correct?

Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.


This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.

Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.


True, but then so is a lot of "tech". There were certainly social applications that were at least equivalent before and all throughout Facebook's dominance, but as with Bitcoin, once a minimum feature set is reached the network effect becomes primary.


For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.

Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.


Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.

The only thing that MIGHT kill it is if governments stopped printing money.


Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.


Very little of the trading actually happens on the blockchain; it's only used to move assets between trading venues.

The values of bitcoin are:

- easy access to trading for everyone, without institutional or national barriers

- high leverage to effectively easily borrow a lot of money to trade with

- new derivative products that streamline the process and make speculation easier than ever

The blockchain plays very little part in this. If anything it makes borrowing harder.


I agree with "easy access to trading for everyone, without institutional or national barriers".

But how on earth does Bitcoin have anything to do with borrowing or derivatives?

In a way that wouldn't also work for Beanie Babies, that is.


Those are the main innovations tied to crypto trading. They do indeed have little to do with the blockchain or bitcoin itself, and do apply to any asset.

There are actually several startups whose pitch is to bring those innovations over to equities (note that this is different from tokenized equities).


If you can't point to real use cases at scale, it's hard to argue it has intrinsic value, even though it may have speculative value.


With almost zero fundamentals. That’s the part you are glossing over.


Uh… So the argument here is that anticipated future value == meaningful value today?

The whole cryptocurrency world requires evangelical buy-in. There is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn't create value; it's a store of it, and only then assuming enough people keep buying into the narrative so that it doesn't dramatically deflate when you need to recover your assets. States and other large investors are helping manufacture stability to maintain it as a value store, but that only works as long as the story keeps propagating.


You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.


Quite the opposite: if you need to defend a technical idea with its price in a largely speculative market, you've already lost the argument.

That people are greedy and ignorant and bid up BTC doesn't prove anything about its value.


It’s hard to understand how people can be so determined to ignore reality.


> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).

AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?

> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.

What do you view as the potential that’s been stated?


Not OP, but for starters: LLMs != AI.

LLMs are not an intelligence, and people who treat them as infallible oracles of wisdom are responsible for a lot of this fatigue with AI.


> Not OP, but for starters: LLMs != AI.

Please don't do this; don't make up your own definitions.

Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the term has been defined since the beginning doesn't mean you get to reframe it.

In addition, since humans are not infallible oracles of wisdom either, they wouldn't count as an intelligence by your definition.


Why, then, is there an AI-powered dishwasher but no AI car?


https://www.tesla.com/fsd ?

I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
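
(For reference, the game "AI" being compared here is just a heuristic graph search. Below is a minimal grid-based A* sketch, nothing more than the textbook version.)

    # A* on a small grid: classic "game AI" pathfinding.
    # Grid cells: 0 = open, 1 = wall. Moves are 4-directional.
    import heapq

    def astar(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        # Manhattan distance as the admissible heuristic
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
        seen = set()
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nxt = (nr, nc)
                    if nxt not in seen:
                        heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None  # no path exists

    if __name__ == "__main__":
        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        # the only route around the wall: along the top, down the right side, back along the bottom
        print(astar(grid, (0, 0), (2, 0)))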


I can't find any mention of AI there.

Also, it's funny how they add (Supervised) everywhere. It looks like "Full Self-Driving (not really)".


Yes, one needs some awareness of the technology. Computer vision: unambiguously AI. Motion planning: there are classical algorithms, but I believe Tesla and Waymo both use NNs here too.

Look, I don't like the advertising of FSD, or Musk himself, but we without a doubt have cars using significant amounts of AI that work quite well.


None of those things contain actual intelligence. On that basis, any software is "intelligent". AI is the granddaddy of hype terms, going back many decades; it has failed to deliver, and LLMs will also fail to deliver.


It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.


OK? I'm not going to change the definition of a 70-year-old field because people are annoyed at ChatGPT wrappers.


A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.

In those cases, the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive or novel (to me at least) as a product whose internals are not just an existing LLM.

(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work, with the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.

When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer.)


Yes, OK, then I definitely agree.


Shells around ChatGPT are fine if they provide value.

Way better than AI jammed into every crevice for no reason.



