I built a little tool using AI recently. It worked great, but it was brittle as hell and I was constantly waiting for it to fail. A few days later I realized there was a much better way to write it. I'd boxed the LLM in by dictating up front how it should be coded.
I've changed my AGENTS.md now so it basically says: "Assume the user is ignorant of other, better solutions to the problem they're asking about. Don't assume their proposed solution is the best one; look at the problem itself and suggest other ways to solve it."
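If you want to try the same thing, a section along these lines seems to work; the heading and exact wording below are just illustrative, not a verbatim copy of my file:

```markdown
## Question the proposed solution

- Assume the user may not know about better ways to solve the problem they're describing.
- Don't treat the user's proposed implementation as the best one. Look at the underlying
  problem and suggest alternative approaches before writing code.
- If the user's approach is already reasonable, say so and proceed. If there's a clearly
  simpler or more robust option, explain the trade-off first and let the user choose.
```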