Hacker News | distalx's comments

This is either going to save hours… or create very educational outages.


If the agent were able to update the model, that would be educational for the model, and no one else.


Friendly reminder: There is no ghost in the machine. It is a system executing code, not a being having thoughts. Let’s admire the tool without projecting a personality onto it.


For me, that’s kind of the point. It’s similar to how the characters in a novel don’t really exist, and yet you can’t really discuss what happens in a novel without pretending that they do. It doesn’t really make sense to treat the author’s motivations and each character’s motivations as the same.

Similarly, we’re all talking to ghosts now, which aren’t real, and yet there is something there that we can talk about. There are obvious behavioral differences depending on what persona the LLM is generating text for.

I also like the hint of danger in “talking to ghosts.” It’s difficult to see how a rational adult could be in any danger from just talking, but I believe the news reports that some people who get too deep into it get “possessed.”


Consciousness is weird and nobody understands it. There is no good reason to assume that these systems have it. But there is also no good reason to rule it out.


That’s the old way of thinking about it. There is a new way.


You sound as if you have grounds for certainty about this. What are they?


What tools or process do you use to optimize your prompts?


I usually either use Grok to optimize a Mistral prompt, or Gemini to optimize a ChatGPT prompt. It's best to keep those pairs of AIs and not cross streams!
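
Roughly what that looks like, as a rough sketch (assuming both providers expose OpenAI-compatible chat endpoints; the base URLs, keys, and model names here are placeholders, not verified values):

    # Sketch only: one model rewrites the prompt, another model runs it.
    # Base URLs and model names are placeholders; check your provider's docs.
    from openai import OpenAI

    optimizer = OpenAI(base_url="https://api.x.ai/v1", api_key="XAI_KEY")        # e.g. Grok
    target = OpenAI(base_url="https://api.mistral.ai/v1", api_key="MISTRAL_KEY") # e.g. Mistral

    draft = "Summarize this bug report and suggest a fix."

    # Ask the optimizer model to rewrite the draft prompt.
    rewritten = optimizer.chat.completions.create(
        model="grok-beta",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Rewrite this prompt so another LLM follows it precisely:\n\n{draft}"}],
    ).choices[0].message.content

    # Run the optimized prompt against the target model.
    answer = target.chat.completions.create(
        model="mistral-large-latest",  # placeholder model name
        messages=[{"role": "user", "content": rewritten}],
    ).choices[0].message.content
    print(answer)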


This looks great! I'm not a guitar enthusiast myself, but the design and color tone look very slick.

Congratulations on the launch after a year of work, and I wish you all the best with it!

Just out of curiosity, how much time did it take you to get app store approval from Apple and Google in 2025?


Thanks! Not being well versed in design, I just picked a small color palette I liked and stuck to it.

Approval was, I think, 2-3 days for Google (I had already validated the store page and opened it to preregistration a month before the final build) and a bit more than a week for the App Store, due to some back and forth over missing privacy policy links in some parts of the app and stuff like that.


It does feel like planned obsolescence when companies like Apple limit software support for older hardware; Ubuntu runs smoothly on much older devices. They could certainly do better by extending support and focusing on sustainability.


Exactly. Tokens-per-dollar rates are useful, but without knowing the typical input/output token distribution for each model on this specific task, the numbers alone don’t give a full picture of cost.
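
For example, with made-up numbers (none of these are real rates), a model that looks cheaper per token can still come out more expensive if it emits a long reasoning trace:

    # Illustrative arithmetic only; prices and token counts are invented.
    def request_cost(in_tokens, out_tokens, in_price_per_mtok, out_price_per_mtok):
        return in_tokens / 1e6 * in_price_per_mtok + out_tokens / 1e6 * out_price_per_mtok

    # "Cheap" model, but verbose: long reasoning trace in the output.
    cheap_verbose = request_cost(2_000, 8_000, in_price_per_mtok=1.0, out_price_per_mtok=5.0)
    # "Expensive" model, but terse output for the same task.
    pricey_terse = request_cost(2_000, 500, in_price_per_mtok=3.0, out_price_per_mtok=15.0)

    print(f"cheap-but-verbose: ${cheap_verbose:.4f}")  # $0.0420
    print(f"pricey-but-terse:  ${pricey_terse:.4f}")   # $0.0135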


That’s how they lie to us. Companies can advertise cheap prices to lure you in, but they know very well how many tokens you’re going to use on average, so they will still make more profit than ever, especially if you’re using any kind of reasoning model, which is just a blank check for them to print money.


I don’t think any of them are profitable are they? We’re in the losing money to gain market share phase of this industry.


Vibe Coding is accelerating the death of documentation and architectural clarity. Companies are measuring success by tokens generated and time-to-prototype, ignoring the massive, hidden cost of cleanup/maintenance.

The real skill is now cleanup, not generation.


The real skill is guiding generation carefully so the generated software isn’t crap. Some people here see Claude Code and think it’s state of the art, whereas for best results you need a much more involved process.

It isn’t that different from any other form of engineering, really. Minimize cost, fulfill requirements; smarts-deficient folks won’t put maintainability in their spec and will get exactly what they asked for.


I bought into that idea a month or two ago, that more control and detailed instructions would deliver a clean result. That just led me down a rabbit hole of endless prompt re-runs and optimization loops. Many times I thought I had the final, perfect prompt, only for the next iteration to slightly worsen the output. And sometimes the output was just the same.

The last 20-30% of precision is brutal. The time and tokens we burn trying to perfect a prompt are simply not an optimal use of engineering hours. The problem is simple: companies prioritize profit over the optimal solution, and the initial sales pitch was about replacement, then it changed, and now it's all about speed. I'm not making a case against AI or LLMs; I'm saying the current workflow, as a path of least resistance, means we are inevitably heading toward more technical debt and cleanup on our hands.


Let me know when aerospace engineers are letting an AI build their planes for them.



That's an incredibly dishonest reading of my comment. Using ML to build simulations is different from letting an LLM build the plane for you.


How is it the death of documentation?

You can start off just with documentation and then in the process check if the code is still in line with the documentation.

You can also generate documentation from the code, then check yourself whether it fits.
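
As a trivial sketch of that second direction, even something as crude as flagging public functions without docstrings keeps the gap visible (this only checks presence, not whether the docs are accurate):

    # Crude check: list public functions that have no docstring at all.
    # Presence only; whether the docstring matches the code is still on you.
    import ast
    import sys

    path = sys.argv[1]
    tree = ast.parse(open(path).read())

    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if not node.name.startswith("_") and not ast.get_docstring(node):
                print(f"{path}:{node.lineno}: {node.name} has no docstring")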


If you don't mind, could you share the link to your Reddit post? I'd love to read more about your findings.


I feel like this post might be a bit clickbaity. It presents a strong statement without much context, analysis or evidence.


This is probably the sunk cost fallacy talking, but it makes me wonder what will happen to the applications and systems being built on top of LLMs. If we face limitations or setbacks, will these innovations survive, or could we see a backlash against all thinking machines, reminiscent of Isaac Asimov's cautionary tales?

