Exactly. If Codex is really as good, it should have no problem porting any settings or config from the Claude setup. (and I do believe it wouldn't have much of a problem)
One use I have for seeing exactly what it is doing is to press Esc quickly when I see it's confused and starts searching for some info that e.g. got compacted away, often going on a big quest like searching an entire large directory tree. What I would actually wish for is that it would ask me in these cases. It clearly knows that it lacks info but thinks it can figure things out by itself by going on a quest, and that's true, but it takes too long. It could just ask me. There could be some mode setting for how much I want to be involved and consulted: either ask me boldly for any factual info, or, if I just want to step away, figure everything out on its own.
I think it's more classic enshittification. Currently, as a percentage of all devs, not many use it yet. In a few months or 1-2 years all these products will start to cater to the median developer and get dumbed down.
It's unclear where the car is currently from your phrasing. If you add that the car is in your garage, it says you'll need to drive to get the car into the wash.
Do you think this is a fundamentally unbridgeable limitation of LLMs? Do you remember where we were just a year ago? Can you imagine this getting better with upcoming releases? It's like when Gary Marcus confidently stated that AI (at least the current paradigm) would never be able to generate an image of a horse riding an astronaut. (Or full wine glasses, or clocks showing arbitrary times.)
I personally am a lot less stressed. It helped my mood a lot over the last couple of months. Less worries about forgetting things, about missing problems, about getting started, about planning and prioritizing in solo work. Much less of the "swirling mess" feeling. Context switches are simpler, less drudgery, less friction and pulling my hair out for hours banging against some dumb plumbing and gluing issue or installing stuff from github or configuring stuff on the computer.
Local models exist, and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state-of-the-art mainframe at home in the mainframe era. Nor do people expect to have industrial-scale equipment at home in other engineering domains.
I find working with LLMs much more fun and frictionless compared to the drudgery of boring glue code or tracking down nongeneralizable version-specific workarounds in GitHub issues etc. Coding LLMs let you focus on the domain of your actual problem instead of the low-level stumbling blocks that just create annoyance without real learning.
Commercial ventures have only ever cared exactly to the extent that competitive forces and regulation financially motivate them.
In my experience, coding agents are actually better at doing the final polish and filling in the gaps that a developer under time pressure to ship would skip.
This comment has even lower nutritional value. It's just a "dislike" with more words. You could have offered your counterarguments, or, if you're too tired of the topic but still feel you need to be heard, you could have linked to a previous comment or post of yours.
I mean, are you gonna die on a hill defending every piece of low-quality content on HN? Because I think it's perfectly OK to call it out so that moderators can notice and improve things. You seem to think that readers have an inherent responsibility to salvage someone else's bad article.