I'm tired of people telling me that LLMs are bad at building software without ever sitting down to learn how to use Claude Code properly: when to reach for it, and when you shouldn't.
This is not a you're-holding-the-LLM-wrong problem, though; AI tools are simply not capable of building mental models for problem solving. Sounds like you're tired of hearing that LLMs are not a silver bullet.
Exactly, there are things you shouldn't do with an LLM. But generating Helm charts, configs, and Actions workflows, or writing specs and then implementing against them? A no-brainer.
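To make that concrete, here's a minimal sketch of the "write a spec, generate the configs, review by hand" flow, using the Anthropic Python SDK. The spec text, model id, and output filename are placeholders for illustration, not anything anyone in this thread actually uses.

```python
# Sketch: turn a short service spec into draft configs for human review.
# Assumes the Anthropic SDK (pip install anthropic) and an ANTHROPIC_API_KEY env var.
import pathlib

import anthropic

SPEC = """\
Service: payments-api
Needs: 3 replicas, 512Mi memory limit, readiness probe on /healthz,
CI that runs the test suite on every pull request.
"""

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Generate a Helm values.yaml and a GitHub Actions workflow for this spec. "
            "Return only the two files, each in its own fenced code block.\n\n" + SPEC
        ),
    }],
)

# Dump the raw output to a file; nothing gets applied to a cluster automatically.
pathlib.Path("generated_configs.md").write_text(response.content[0].text)
print("Wrote generated_configs.md -- review before committing.")
```

The point is the shape of the workflow: the model drafts boilerplate from a spec, and a human reviews the diff before anything ships.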
LLMs are only as good as the inputs a person gives them.
Right now the scene is very polarized. You have the "AI is a failure, you can't build anything serious with it, this bubble is going to pop any day now" camp, and the "AI has revolutionized my workflow, I am now 10x more productive" camp.
I mean these types of posts blow up here every. single. day.
Cursor is a joke tho, Windsurf is pretty okay.