ML and other automated systems are not new, and we know enough about automated systems to come up with regulations like "no, you should not use these in a certain set of specific circumstances" or "if you're unleashing this onto the world, you have to show that you understand what you're doing" etc.
Let's not be overly pedantic and pious about petty semantics like that. It was clear from the context of my original comment what I was talking about.
E.g. "if a decision cannot be explained by a human, it should not be made by a machine" applies to them, too.
Basically, if you read the EU AI Act, for example, it's hard to find anything you'd disagree with, regardless of whether it applies to ML, LLMs, or three if statements in a trench coat.
Of course the industry is up in arms about it (just as it was about GDPR).