Oh, I should add that team adoption is mixed. A lot of folks don't seem to see the value, don't lean in very hard, or don't take the time to study the tool's capabilities.
We also now have to deal with well-written PR messages and clean code that doesn't do the right thing. Those things used to be proxies for quality. It's better this way anyhow: code review now focuses on whether the code is really doing what we need. (Engineers often miss that detail and go down rabbit holes in what I call "co-hallucination," since it's not really an AI error but an emergent property of the human-AI pairing.)
To summarize: other people are having to meticulously check the AI slop you're slinging into the system, code that looks good but doesn't even do what it's supposed to do. And you didn't even check it before submitting the PR?