Have you tried it? If you copy the errors back into the chat I could imagine it working quite well. Certainly you can give it contradictory instructions and it makes a decent effort at following them.
Yes, I'm subscribed to poe.com and am playing with all public models. They all suck at debugging issues with no known answers (I'm talking about typical problems every software developer, DevOps or infosec person solves every day).
You need a real ability to reason and to preserve context beyond the inherent context window somehow (we humans do it by keeping notes, writing emails, and filing JIRA tickets). So while this doesn't require full AGI, and some form of AI might be able to do it this century, it won't be LLMs.