I don't understand this anti-AI stance. Either the code works and is useful, in which case it should be accepted, or it doesn't work, in which case it should be rejected. Does it really matter who wrote it?
Code is only a projection of someone's mental model, and it's that mental model, not the code itself, that actually allows a project to succeed, especially in the long term.
That's why codebases die when they lose their maintainers, and why forks often don't make it past their first few months.
LLM-generated code might work, but it isn't backed by anyone's mental model. And the industry has a long-running term for code that exists but that no one understands, or remembers the reasons behind: legacy code.
I don't care about "ethics" in the abstract if the code works and is of good "quality" (however you choose to define that). AIs hold no copyright over anything they generate, so that's a non-issue. In fact, if the code is any good, it should be impossible to tell whether it was written by an AI at all.
LLMs give idiots the power to effectively DDoS repos with useless slop PRs, which maintainers then have to spend time and effort triaging and rejecting. As the curl maintainers have said, the review burden of wading through mountains of AI-generated crap is horrifically time-consuming.