
I wonder if this is a tactic to get the court to deem this lawyer incompetent rather than impose the (presumably much harsher) penalty for deliberately lying to the court?


I don't think the insanity plea works out well for lawyers. I'm not sure if "I'm too stupid to be a lawyer" is that much better than "I lied to the courts".


This explanation has already caused the show cause order to be expanded to additional bases for sanctions, and extended to the other lawyer involved and their firm, so if it was a strategic narrative, it backfired spectacularly.


Why assume malice? Asking ChatGPT to verify is exactly what someone who trusts ChatGPT might do.

I'm not surprised this lawyer trusted ChatGPT too much. People trust their lives to self-driving cars, their businesses to AI risk models, and criminal prosecutions to facial recognition. People outside the AI field seem to be either far too trusting or far too suspicious of AI.


Quoted directly from my last session with ChatGPT mere seconds ago:

> Limitations
> May occasionally generate incorrect information
> May occasionally produce harmful instructions or biased content
> Limited knowledge of world and events after 2021

---

A lawyer who isn't prepared to read and heed the very obvious warnings at the start of every ChatGPT chat isn't worth a briefcase of empty promises.

WARNING: witty ending of previous sentence written with help from ChatGPT.


I agree the lawyer shouldn't have trusted ChatGPT, but I'm not comfortable with the idea that the lawyer bears all the responsibility for using ChatGPT and Microsoft/OpenAI bear no responsibility for creating it.

"May occasionally generate incorrect information" is not a sufficient warning. Even Lexis-Nexis has a similar warning: "The accuracy, completeness, adequacy or currency of the Content is not warranted or guaranteed."

And in any case, it seems like you agree with me that the lawyer was incompetent rather than malicious.


Maybe it's a long-run tactic to discourage future clients from switching to ChatGPT-based solutions.


I mean...he doesn't have to say it: he is clearly incompetent!



