> The simple solution is for DAs to refuse to prosecute any crimes where the police reports were written with AI and judges to refuse to allow any evidence written or compiled by or with the assistance of AI.
The "simple solution" would be to make the use of AI illegal in the criminal justice system. Judges and prosecutors can't (or at least shouldn't be allowed to) simply refuse to prosecute crimes which have otherwise been legally processed and presented.
It is impossible to definitively identify AI involvement, or to rule out generative AI use, in any given piece of writing. There are detectors out there, but they all have high false-positive rates and unknown false-negative rates.
Why? There are at least five common generative AI services (each of which behaves differently depending on how it's prompted and what was previously in the context), hundreds of thousands of open models, millions of ways to set up retrieval-augmented generation, and effectively infinite ways to prompt.
That said, it is quite easy (for professors, for example) to pick up on common linguistic and structural patterns: ChatGPT uses certain words more frequently, structures arguments in a particular way, and uses (and misuses) the same metaphorical constructions.
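That kind of surface pattern-matching can be caricatured in a few lines of code. This is a toy sketch only: the marker words and threshold are invented for illustration, not drawn from any real detector, and it inherits exactly the false-positive problem described above (plenty of humans use these words too).

```python
# Toy stylometric heuristic, NOT a real AI detector.
# The marker words and threshold below are made up for illustration.
MARKER_WORDS = {"delve", "tapestry", "furthermore", "multifaceted", "crucial"}

def marker_density(text: str) -> float:
    """Fraction of tokens that are stereotypical 'AI-ish' marker words."""
    tokens = [t.strip(".,;:!?\"'()").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in MARKER_WORDS)
    return hits / len(tokens)

def looks_ai_ish(text: str, threshold: float = 0.02) -> bool:
    # High false-positive risk: this flags any text that happens
    # to use the marker words, regardless of who wrote it.
    return marker_density(text) > threshold
```

Even this crude version shows why such signals can't be dispositive: the "tells" are statistical tendencies, not proof, and they shift with every model, prompt, and fine-tune.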