Friendly reminder: There is no ghost in the machine. It is a system executing code, not a being having thoughts. Let’s admire the tool without projecting a personality onto it.
For me, that’s kind of the point. It’s similar to how the characters in a novel don’t exist, and yet you can’t really discuss what happens in a novel without pretending that they do. It doesn’t make sense to treat the author’s motivations and each character’s motivations as the same.
Similarly, we’re all talking to ghosts now, which aren’t real, and yet there is something there that we can talk about. There are obvious behavioral differences depending on what persona the LLM is generating text for.
I also like the hint of danger in “talking to ghosts.” It’s difficult to see how a rational adult could be in any danger from just talking, but I believe the news reports that some people who get too deep into it get “possessed.”
Consciousness is weird and nobody understands it. There is no good reason to assume that these systems have it. But there is also no good reason to rule it out.
I usually either use Grok to optimize a Mistral prompt, or Gemini to optimize a ChatGPT prompt. It's best to keep those pairs of AIs and not cross the streams!
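If anyone wants to see what that looks like in practice, here's a minimal sketch of the idea: hand one model a meta-prompt asking it to rewrite a prompt that will actually be run on a different model. The `optimize_prompt` helper and the stubbed-out optimizer are placeholders of my own, not any particular vendor's API.

```python
# Minimal sketch of cross-model prompt optimization.
# The "optimizer_call" argument stands in for whichever client library
# you actually use (xAI, Mistral, Google, OpenAI, ...); nothing here is
# tied to a real API.

def optimize_prompt(draft_prompt: str, target_model: str, optimizer_call) -> str:
    """Ask one model to rewrite a prompt intended for another model."""
    meta_prompt = (
        f"You are a prompt engineer. Rewrite the prompt below so it works well "
        f"on {target_model}. Keep the original intent, tighten the wording, and "
        f"make the expected output format explicit.\n\n"
        f"--- PROMPT TO IMPROVE ---\n{draft_prompt}"
    )
    return optimizer_call(meta_prompt)


if __name__ == "__main__":
    # Stubbed-out "optimizer" so the example runs on its own.
    fake_grok = lambda text: f"[improved]\n{text}"
    improved = optimize_prompt(
        draft_prompt="Summarize this changelog for end users.",
        target_model="Mistral",
        optimizer_call=fake_grok,
    )
    print(improved)
```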
Thanks, not being well versed in design I just picked a small color palette I liked and stuck to it.
Approval was, I think, 2-3 days for Google (I had already validated the store page and opened it to preregistration a month before the final build) and a bit more than a week for the App Store, due to some back and forth over missing privacy policy links in some places of the app and things like that.
It does feel like planned obsolescence when companies like Apple limit software support for older hardware while Ubuntu runs smoothly on much older devices. They could certainly do better by extending support and focusing on sustainability.
Exactly. Tokens-per-dollar rates are useful, but without knowing the typical input/output token distribution for each model on this specific task, the numbers alone don’t give a full picture of cost.
That’s how they lie to us. Companies can advertise cheap prices to lure you in, but they know very well how many tokens you’re going to use on average, so they’ll still make more profit than ever, especially if you’re using any kind of reasoning model, which is basically a blank check for them to print money.
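To make that concrete, here's a rough back-of-the-envelope sketch. All prices and token counts below are made up purely for illustration; plug in your own provider's rates and your measured traffic.

```python
# Back-of-the-envelope cost comparison: the same task can cost very different
# amounts depending on how output-heavy (and reasoning-heavy) each model is.
# All numbers here are hypothetical, for illustration only.

def cost_per_request(input_tokens, output_tokens, price_in_per_m, price_out_per_m):
    """Dollar cost of one request given token counts and $/1M-token prices."""
    return (input_tokens * price_in_per_m + output_tokens * price_out_per_m) / 1_000_000

# Two hypothetical models with the same headline prices:
# model A answers tersely, model B burns extra "reasoning" output tokens.
a = cost_per_request(input_tokens=2_000, output_tokens=300,
                     price_in_per_m=1.00, price_out_per_m=4.00)
b = cost_per_request(input_tokens=2_000, output_tokens=5_000,
                     price_in_per_m=1.00, price_out_per_m=4.00)

print(f"Model A: ${a:.4f} per request")  # $0.0032
print(f"Model B: ${b:.4f} per request")  # $0.0220, roughly 7x more at the same list price
```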
Vibe Coding is accelerating the death of documentation and architectural clarity.
Companies are measuring success by tokens generated and time-to-prototype, ignoring the massive, hidden cost of cleanup/maintenance.
The real skill is guiding generation carefully so the generated software isn’t crap. Some people here see Claude Code and think it’s state of the art, whereas for best results you need a much more involved process.
It isn’t that different from any other form of engineering, really. Minimize cost, fulfill requirements; smarts-deficient folks won’t put maintainability in their spec and will get exactly what they asked for.
I bought into that idea a month or two ago, that more control and detailed instructions would deliver a clean result. That just led me down a rabbit hole of endless prompt re-runs and optimization loops. Many times I thought I had the final, perfect prompt, only for the next iteration to slightly worsen the output. And sometimes the output was the same.
The last 20-30% of precision is brutal. The time and tokens we burn trying to perfect a prompt are simply not an optimal use of engineering hours. The problem is simple: companies prioritize profit over the optimal solution, and the initial sales pitch was about replacement, then it changed, and now it's all about speed. I'm not making a case against AI or LLMs; I'm saying the current workflow, a path of least resistance, means we are inevitably heading toward more technical debt and cleanup on our hands.
Probably the sunk cost fallacy talking, but it makes me wonder what will happen to the applications and systems being built on top of LLMs. If we face limitations or setbacks, will these innovations survive, or could we see a backlash against all thinking machines, reminiscent of Isaac Asimov's cautionary tales?