I wonder if this is a tactic to get the court to deem this lawyer incompetent rather than impose the (presumably much harsher) penalty for deliberately lying to the court?
I don't think the insanity plea works out well for lawyers. I'm not sure if "I'm too stupid to be a lawyer" is that much better than "I lied to the courts".
This explanation has already prompted the court to expand the scope of the show cause order against the lawyer to additional bases for sanctions, and to extend it to the other lawyer involved and their firm, so if it was a strategic narrative, it has backfired spectacularly.
Why assume malice? Asking ChatGPT to verify its own citations is exactly what someone who trusts ChatGPT might do.
I'm not surprised this lawyer trusted ChatGPT too much. People trust their lives to self-driving cars, their businesses to AI risk models, and criminal prosecutions to facial recognition. People outside the AI field seem to be either far too trusting or far too suspicious of AI.
I agree the lawyer shouldn't have trusted ChatGPT, but I'm not comfortable with the idea that the lawyer bears all the responsibility for using ChatGPT and Microsoft/OpenAI bear no responsibility for creating it.
"May occasionally generate incorrect information" is not a sufficient warning. Even Lexis-Nexis has a similar warning: "The accuracy, completeness, adequacy or currency of the Content is not warranted or guaranteed."
And in any case, it seems like you agree with me that the lawyer was incompetent rather than malicious.