This company is either run by someone who doesn't understand the tech or is willfully fraudulent. ChatGPT and company are far from good enough to be entrusted with law. Having interacted extensively with modern LLMs, I absolutely know something like this would happen:
> Defendant (as dictated by AI): The Supreme Court ruled in Johnson v. Smith in 1978...
> Judge: There was no case Johnson v. Smith in 1978.
LLMs hallucinate, and there is absolutely no space for hallucination in a court of law. The legal profession is perhaps the closest one to computer programming, and absolute precision is required, not a just-barely-good-enough statistical machine.
Pretty sure the whole reason why DoNotPay actually exists is because defending against parking tickets didn't actually require a strong defense. The tickets were flawed automation, and their formulaic nature justified an equally formulaic response, or something to that effect. Whether the LLM was actually going to output answers directly, or just be used to drive a behavior tree or something like that, is a question I don't see answered anywhere.
That said, if it's such a catastrophically stupid idea, I'm not really sure why it had to be shot down so harshly: seems like that problem would elegantly solve itself. I assume the real reason it was shot down was out of fear that it would work well. Does anyone else have a better explanation for why there was such a visceral response?
> Does anyone else have a better explanation for why there was such a visceral response?
I can't speak for lawyers in general or what everyone's motivations would be, but my initial reaction was that it seemed like a somewhat unethical experiment. I assume the client would have agreed or represented themselves, but even there -- legal advice is tricky because it's advice -- it feels unethical to tell a person to rely on something that is very likely going to give them sub-par legal representation.
Sneaking it into a courtroom without the judge's knowledge feels a lot like a PR stunt, and one that might encourage further legal malpractice in the future.
I assume there are other factors at play, I assume many lawyers felt insulted or threatened, but ignoring that, it's not an experiment I personally would have lauded even as a non-lawyer who wishes the legal industry was, well... less of an industry. The goal of automating parts of the legal industry and improving access to representation is a good goal that I agree with. And maybe there are ways where AI can help with that, sure. I'm optimistic, I guess. But this feels to me like a startup company taking advantage of someone who's in legal trouble for a publicity stunt, not like an ethically run experiment with controls and with efforts made to mitigate harm.
Details have been scarce, so maybe there were other safety measures put in place; I could be wrong. But my understanding was that this was planned to be secret representation where the judge didn't know. And I can't think of any faster way to get into trouble with a judge than pulling something like that. Even if the AI was brilliant, it apparently wasn't brilliant enough to counsel its own developers that running experiments on judges is a bad legal strategy.
From what I've read recently, the legal profession is the one most at risk of adverse financial effects from AI. Not the court appearances nor the specialized work, but the run-of-the-mill boilerplate legal writing that is the bread-and-butter profit center of most firms. You bet they are threatened and will push back.
Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?
> Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?
As far as I'm aware, no LLM has reached sentience and started taking on projects of its own volition. So it's easy - you sanction whoever ran the software for an illegal purpose or whoever marketed and sold the software for an illegal purpose.
> An AI is not a person, and therefore can't be sanctioned for practicing law - my take anyway.
"Personhood" in a legal sense doesn't necessarily mean a natural person. In this case, the company behind it is a person and is practicing law (so no pro se litigant using the company to generate legal arguments). In addition, if you want something entered into court, you need a (natural person) lawyer to do it, who has a binding ethical duty to supervise the work of his or her subordinates. Blindly dumping AI-generated work product into open court is about as clear-cut an ethical violation as you can find.
To your larger point, law firms would love to automate a bunch of paralegal and associate-level work; I've been involved in some earlier efforts to do things like automated deposition analysis, and there's plenty of precedent in the way the legal profession jumped on shepardizing tools to rapidly cite cases. Increased productivity isn't going to be reflected by partners earning any less, after all.
The legal profession is at the least risk of adverse financial effects from anything, because the people who make the laws are largely lawyers, and will shape the law to their advantage.
Automating boilerplate seems like a great use for AI if you can then have someone go over the writing and check that it's accurate.
I'd prefer that the boilerplate actually be reduced instead, but... I don't have any issue with someone using AI to target tasks that are essentially copy-paste operations anyway. I think this was kind of different.
> If an AI is doing something illegal like practicing law, how does one sanction an AI?
IANAL, but AIs don't have legal personhood, so it would be kind of like trying to sanction a hammer. I don't think that the AI was being threatened with legal action over this stunt, DoNotPay was being threatened.
In an instance where an AI just exists and is Open Source and there is no party at fault beyond the person who decides to download and use it, then as long as that person isn't violating court procedure there's probably no one to sanction? It's likely a bad move, but :shrug:.
But this comes into play with stuff like self-driving as well. The law doesn't think of AI as something that's special. If your AI drives you into the side of the wall, it's the same situation as if your back-up camera didn't beep and you backed into another car. Either the manufacturer is at fault because the tool failed, or you're at fault and you didn't have a reasonable expectation that the tool wouldn't fail or you used it improperly. Or maybe nobody's at fault because everyone (both you and the manufacturer) acted reasonably. In all of those cases, the AI doesn't have any more legal rights or masking of liability than your brake pads do, it's not treated as a unique entity -- and using an AI doesn't change a manufacturer's liability around advertising.
That gets slightly more complicated with copyright law surrounding AIs, but even there, it's not that AIs are special entities that have their own legal status that can't own copyright, it's that (currently, we'll see if that precedent holds in the future) US courts rule that using an AI is not a sufficiently creative act to generate copyright protections.
> Law is different because the bar has a legally enforced monopoly on doing legal work.
I don't see how this would decrease DoNotPay's liability.
Regardless of how you feel about the bar, I don't think that changes anything about who they would sanction or why. Having a legal monopoly means they're even less likely to go along with a "the AI did it, not me" explanation than a normal market would be.
I mean, no matter what, they're not sanctioning the AI. They don't recognize the AI as a person, they recognize it as a tool that a person/organization is using to perform an action.
> Now the question is this. If an AI is doing something illegal like practicing law, how does one sanction an AI?
It's not, and you don't.
When a legal person (either a natural person or corporation) is doing something illegal like unauthorized practice of law, you sanction that person. The fact that they use an AI as a key tool in their unauthorized law practice is not particularly significant, legally.
I'm going to sit on that particular hill and see what happens. Even if DoNotPay's AI is not ready to do the job, the idea that AI could one day argue the law by focusing on logic and precedent instead of circumstance and interpretation is exceedingly threatening to a lawyer's career. No offense intended to the lawyers out there, of course. Were I in your shoes, I'd feel a bit fidgety over this, too.
i feel like lawyers will be able to legally keep AI out of their field for a while yet. they have the tools at their disposal to do so and a huge incentive.
> i feel like lawyers will be able to legally keep AI out of their field for a while yet. they have the tools at their disposal to do so and a huge incentive, other fields like journalism not so much.
That was my initial response too. Artists, programmers, musicians, teachers are threatened... but shrug and say "that's the future, what can you do". If lawyers feel "threatened" by AI, they get it shot down. I suddenly have a newfound respect for lawyers :)
Yet if we think about it, we all have exactly the same tools at our disposal - which is just not playing that game. Difference is, while most professions have got used to rolling with whatever "progressive technology" is foisted on us, lawyers have a long tradition of caution and moderating external pressure to "modernise". I'm not sure Microsoft have much influence in the legal field.
When you're poor you have the choice between an AI that may work or you'll be defending yourself.
Access to legal assistance is almost as unobtainable as a dentist these days.
> When you're poor you have the choice between an AI that may work or you'll be defending yourself.
This is a thing that lots of people say about unethical businesses, and I'm a little skeptical about it at this point. A couple of objections I have:
- You have a constitutional right to legal representation when accused of a crime by the US government, and while we don't want to abandon people who are suffering now because of some theoretical future fix, we also don't want to normalize the idea that constitutional rights only exist when a private market accommodates them. That's explicitly a bad direction for the country to go.
- Saying "well, this works here and now, and people don't have access to anything better" is in my mind only a really effective argument when we know that the thing here and now actually works. But we don't know that this works, which changes a lot about the equation.
- Is sneaking an AI into a courtroom through an earpiece really a cost-effective accessible strategy for poor people? Nothing about this screams "accessibility" to me.
I think summing up the last two points, if the AI was proven to actually work in a court of law, and was an accessible option, then sure, at that point I think the argument would have a lot more weight. It wouldn't be ideal, it would be a bad state for us to be in because your constitutional rights should not depend on an AI. But I could see a strong argument for using the AI in the meantime.
But that doesn't mean that DoNotPay should do unethical things right now to get to that point. The way that your choice is being phrased is begging the question: it assumes that the AI is the only choice other than no representation, that it does work, and that it will produce better outcomes.
But we don't actually know if the AI does work in a court of law, and DoNotPay's decision was to "move fast and break things"; it was to start releasing it into the wild without knowing what would happen. We don't know if asking people to represent themselves with a secret earpiece is a good legal strategy or if it's accessible. We don't know what happens when something goes wrong. We don't know that this actually is a working solution. But they were putting someone's legal outcome on the line anyway.
I think there's a big difference between making an imperfect solution available to poor people because we don't have anything better to offer, and using poor people as experimental fodder to build an imperfect solution that might not work at all. There's a lot of assumption here that using their AI would be better than representing yourself, and I don't know that's true. A judge is not going to be pleased with being used as an experiment. And I've been hearing people say that the AI subpoenaed the officer involved in the ticket? That's not a good legal strategy.
The proper way to build a solution like this is to make sure it works before you start using it on people, and I think it's unethical to give someone bad legal advice and to try and justify it because giving that person bad legal advice might allow the company to help other people down the line. A lot of our laws around legal representation are predicated on the idea that legal advice should be solely focused on the good of the client, and not focused on the lawyer's career, or on someone else the lawyer wants to help, or on what the lawyer will be able to do in the future. Based on what we know about the state of the AI today, it doesn't seem like DoNotPay was thinking solely about the good of the person they were advising. We really don't want the legal industry to be an industry that embraces "the ends justify the means."
Yeah I feel like you're right on the money re: the ethics of using someone who is in legal trouble and who will have to live with the results. It's not as sexy but they should just build a fake case (or just use an already settled one if possible) and play out the scenario. No reason it wouldn't be just as effective as a "real" case.
I'd have no objections at all to them setting up a fake test case with a real judge or real prosecutors and doing controlled experiments where there's no actual legal risk and where everyone knows it's not a real court case. You're right that it wouldn't be as attention-grabbing, but I suspect it would be a lot more useful for actually determining the AI's capabilities, with basically zero of the ethical downsides. I'd be fully in support of an experiment like that.
Run it multiple times with multiple defendants, set up a control group that's receiving remote advice from actual lawyers, mask which group is which to the judges, then ask the judge(s) at the end to rank the cases and see which defendants did best.
That would be a lot more work, but it would also be much higher quality data than what they were trying to do.
And in some ways it’s less work! The risks of using a real court case are massive if you ask me. We are a wildly litigious country. No amount of waivers will stop an angry American.
> Run it multiple times with multiple defendants, set up a control group
And also
> That would be a lot more work, but it would also be much higher quality data
I don’t know much about the field of law, but anecdotally it doesn’t strike me as particularly data driven. So I think, even before introducing any kind of AI, the above would be met with a healthy dose of gatekeeping.
Like the whole sport of referencing prior rulings, based on opinions at a point in time doesn’t seem much different than anecdotes to me.
It's about volume. A fake case would be expensive to run and running dozens of them a day would be hard.
That said, the consequence of most traffic tickets is increased insurance and a fine. Yes these do have an impact on the accused, but they are the least impactful legal cases, so it would make sense to focus on them as test cases.
Is this not what moot court is? Seems like a great place to test and refine this kind of technology. The same place lawyers in training are tested and refined.
> Pretty sure the whole reason why DoNotPay actually exists is because defending against parking tickets didn't actually require a strong defense. The tickets were flawed automation...
I have some past experience working in the courts in my state, and I know there are many judges who are perfectly fine with dismissing minor traffic infractions for no reason other than that they feel like it. If you've got an otherwise clean traffic abstract and sent in a reasonable sounding letter contesting the infraction, these judges probably aren't going to thoroughly read through every word of it and contrast it with what was alleged in the citation. They don't really care about the city making an extra $173 off your parking ticket -- they just want to get through their citation reviews before lunch. Case dismissed.
So I am not surprised at all by the success of DoNotPay for minor traffic infractions. Most traffic courts are strained by heavy caseloads. If you give them a reason to throw your case out so they can go home on time, by all means, they will take it.
And I don't think anyone here has an issue with DoNotPay providing pre-trial advice and tips for someone defending themselves. It's bringing that into the courtroom that crosses a line from defending yourself to hiring an AI lawyer, and that line is where I'm very uncomfortable.
Thinking about how the problem would "elegantly solve itself" seems to illustrate the issue.
Someone using it in an actual courtroom would make a boneheadedly dumb argument or refer to a nonexistent precedent or something. Then maybe the judge gets upset and gives them the harshest punishment or contempt of court or they just lose the case. They may or may not ever get a chance to fix it.
A failure mode of jail time and/or massive fines for your customers doesn't sound all that elegant to me. This isn't a thing to show people cat pictures, I don't think move fast and break things is a good strategy.
Not to say that there aren't some entrenched possibly corrupt and self-serving interests here. But that doesn't mean they don't have a point.
It's probably better than the existing alternative. Which is roughly plead guilty because you don't have money to pay a lawyer. Or don't sue someone because you don't have money to pay a lawyer.
Judges would be absolutely right to punish lawyers or defendants that are bullshitting the court. They are wasting time and resources that would otherwise go towards cases where people are actually representing themselves in good faith.
The specific scenario doesn't matter. It's illegal to represent someone else in court if you're not a lawyer. There are a lot of things that you can't get a second chance at if your lawyer messes up that suing them can't fix. Lawyers and judges also negotiate, which a machine can't do because nobody feels an obligation to cut them some slack. Also now you're tainting case law with machine-generated garbage. Everything about the justice system assumes humans in the loop. You can't bolt on this one thing without denying people justice.
Not tricky at all. If someone is receiving counsel, then someone is giving counsel. Hiding behind a machine adds a pretty minor extra step to identifying the culprits, but does not create ambiguity over whether they are culpable.
On the other hand, here's a lawyer who thinks it would not count as legal representation, but hasn't seen the arguments made against it yet. Food for thought.
If you can sell a book that helps teach someone how to represent themselves, why can't you sell a person access to a robot that helps teach them how to represent themselves?
You're still illegally providing legal counsel if you're not a lawyer, or committing malpractice if you are. Using a machine to commit the same crime doesn't change anything.
"Speech" would be like publishing a book about self-representation. "Counsel" would be providing advice to a defendant about their specific case. The machine would be in the courtroom advising the defendant on their trial, so that's counsel.
If the book was written about a particular case, that seems like specific legal advice.
If the book was a generalized "choose your own adventure" where you compose a sensible legal argument from selecting a particular template and filling it in with relevant data - use of the book essentially lets the user find the pre-existing legal advice that is relevant to their situation.
Chatbots as a system are arguably a lot more like the latter than the former - it's a tool that someone can use to 'legal advise' themselves.
Are you still referring to the scenario from the article, or a different one where it's a resource you use outside of court?
> Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker.
Also, probably wouldn't matter. The interactive human-ish-like nature might cross the line to being considered as counsel, even if you said it wasn't. See my response to your other comment.
Right, this strikes me as exactly the kind of "I'm not touching you!" argument that basically never works in a court of law. The law's not like code. "Well it's not any different than publishing a book, so this is just free speech and not legal representation"; "OK, cool, well, we both know that's sophist bullshit, judgement against you, next case."
By providing the words to say and arguments to make to the court, in response to a specific case or circumstance, DoNotPay was giving protected "legal advice" as opposed to "legal information". There is some ambiguity between legal advice and legal information, but this isn't it.
A book gives legal information, not specific to a certain circumstance or case. If your chatbot is considering the specifics of a case before advising on a course of action, it's probably giving legal advice.
> Does anyone else have a better explanation for why there was such a visceral response?
It doesn't really matter if it'd work well or poorly. Lawyers don't want to be replaced, and being a lawyer entails a great ability to be annoying to delay/prevent things you don't want to happen.
It had to be shot down harshly because there are some premises to a courtroom proceeding that aren't met by AI as we currently have it.
One of those is that the lawyer arguing a case is properly credentialed and has been admitted to the bar, and is a professional subject to malpractice standards, who can be held responsible for their performance. An AI spitting out statistically likely responses can't be considered an actual party to the proceedings in that sense.
If a lawyer cites a non-existent precedent, they can make their apologies to the court or be sanctioned. If the AI cites a non-existent precedent, there's literally no way to incorporate that error back into the AI because there's no factual underlying model against which to check the AI's output--unless you had an actual lawyer checking it, in which case, what's the point of the AI?
Someone standing in court, repeating what they hear through an earpiece, is literally committing a fraud on the court by presenting themselves as a credentialled attorney. The stunt of "haha, it was really just chatGPT!" would have had severe legal consequences for everyone involved. The harsh response saved DoNotPay from itself.
> If the AI cites a non-existent precedent, there's literally no way to incorporate that error back into the AI because there's no factual underlying model against which to check the AI's output--unless you had an actual lawyer checking it, in which case, what's the point of the AI?
IANAL, but I would bet the level of effort to fact check an AI's output would be orders of magnitude lower than researching and building all your own facts.
I used it to generate some ffmpeg commands. I had to verify all the flags myself, but it was like 5 minutes of work compared to probably hours it would have taken me to figure them all out on my own.
Fact-checking nonsensical output would take a lot longer than researching a single body of law, which you can generally do by just looking up a recent case on the matter. You don't need to check every cite; that will have been done for you by the lawyers and judges involved in that case.
But checking every cite in an AI's output: many of those citations won't exist, and for the ones that do, you'll need to closely read all of them to confirm that they say what the AI claims they say, or are even within the ballpark of what the AI claims they say.
Fact checking an AI is still massively easier than finding and reading all the precedent yourself. Real lawyers of course already know the important precedent in the areas they deal in, and they still have teams behind the scenes to search out more that might apply, and then only read the ones the team says look important.
Of course there could be a difference between reading all the cases an AI says are important and actually finding the important cases, including ones the AI didn't point you at. However, this is not what the bet was about.
> Fact checking an AI is still massively easier than finding and reading all the precedent yourself.
Actually fact-checking an AI requires finding and reading all the precedent yourself to verify that the AI has both cited accurately and not missed contradictory precedent that is more relevant (whether newer, from a higher court, or more specifically on-point.)
If it has got an established track record, just as with a human assistant, you can make an informed decision about what corners you can afford to cut on that, but then you aren't really fact-checking it.
OTOH, an AI properly trained on one of the existing human-curated and annotated databases linking case law to issues and tracking which cases apply, overrule, or modify holdings from others might be extremely impressive—but those are likely to be expensive products tied to existing offerings from Westlaw, LexisNexis, etc.
What do you mean "finding"? The AI would just return links or raw text of the cases. Reading the findings would be the same as reading any precedent. But the AI could weight the results, and you'd only have to read the high-scoring results. If the AI got it wrong, you'd just refine the search and the AI would be trained.
As to the cost: if it removed the need for one legal assistant or associate, then anything less than the cost of employing said person would be profit. So if it cost < 50k a year you'd be saving. (Cost of employing is more than just salary.)
You can't validate that it is making the right citations by only checking the cases it is citing, and the rankings it provides of those and other cases. You have to validate the non-existence of other, particularly contrary, cases it should be citing either additionally or instead, which it may or may not have ranked as relevant.
> You don't need to check every cite; that will have been done for you by the lawyers and judges involved in that case.
Why would this be different with an AI assistant to help you? It's not a binary "do or do not". Just because you have an assistant doesn't mean you don't do anything. Kind of like driver assist can handle some of the load vs full self-driving.
> But checking every cite in an AI's output: many of those citations won't exist, and for the ones that do, you'll need to closely read all of them to confirm that they say what the AI claims they say, or are even within the ballpark of what the AI claims they say.
But you'd have to do this anyway if you did all the research yourself. At least the AI assistant can help give you some good leads so you don't have to start from scratch. A lazy lawyer could skip some verifying, but a good lawyer would still benefit from an AI assistant as was my original bet, just like they would benefit from interns or paralegals, etc. And all those interns and paralegals could still be there, helping verify facts.
> But you'd have to do this anyway if you did all the research yourself. At least the AI assistant can help give you some good leads so you don't have to start from scratch.
No, that's exactly the opposite of what I'm saying. If you did the research yourself, you wouldn't need to verify every cite once you find a relevant source/cite, because previous lawyers would have already validated the citations contained within that source. (A good lawyer should validate at least some of those cites, but frequently that's not necessary unless you're dealing with big stakes.)
And the AI assistant, at least this one and the ones based on ChatGPT, don't provide good leads. They provide crap leads that not only don't exist, but increase the amount of work. And any "AI" based on LLM will never be capable of providing good cites, because they'll never understand what they're reading and/or citing, and they'll miss relevant citations that are not statistically likely (i.e., new case law, or cases with similar facts, or similar law, or otherwise similar contexts that can be applied to the case at hand) that a context-aware AI or living breathing human would find easily.
At best, LLM-based AI might be able to help people with very simple legal situations. But you don't need AI for that. A single decision tree is easier to implement, and it's even easier to verify the domain-specific process and outcomes to make sure you don't get something silly like happened with this "AI".
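To make that concrete, here's a toy sketch of what such a decision tree could look like. Every question, branch, and outcome below is invented purely for illustration; a real tool would need jurisdiction-specific rules written and vetted by a lawyer:

    # Toy sketch of the "single decision tree" idea for a simple parking-ticket
    # dispute. All questions and outcomes are made-up placeholders.
    TREE = {
        "question": "Does the citation list the correct plate, location, and time?",
        "no": {"outcome": "Consider contesting based on the factual error on the citation."},
        "yes": {
            "question": "Was the signage at the location missing or obscured?",
            "yes": {"outcome": "Consider contesting with photos of the signage."},
            "no": {"outcome": "Few obvious defenses; consider paying or pleading mitigation."},
        },
    }

    def walk(node: dict) -> str:
        """Ask yes/no questions until an outcome leaf is reached."""
        while "outcome" not in node:
            answer = input(node["question"] + " [y/n] ").strip().lower()
            node = node["yes"] if answer.startswith("y") else node["no"]
        return node["outcome"]

    if __name__ == "__main__":
        print(walk(TREE))

Unlike an LLM's output, every branch here can be read, audited, and signed off on by a domain expert before anyone relies on it.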
But when appearing in court you're in real-time: you can't take 5 minutes to validate the AI output before passing it on. You can do that for your opening statements but once faced with the judge's rulings or cross-examination you'll be in the weeds.
Yeah that's fair, although if it was AI-assisted lawyer then presumably you'd have done the research ahead of time. But, for spontaneous stuff, you're totally right. My original statement was thinking about it as a "prep time" exercise, but spontaneous stuff would appear in court. Although, the human lawyer (who should still be similarly prepared for court) would be there to handle those, possibly with some quick assistance.
If it was AI-assisted lawyer, it would be a whole different discussion. Aside from requiring a live feed of interactions to a remote system and other technical details, “lawyers using supportive tools while exercising their own judgement on behalf of their client” isn’t controversial the way marketing an automated system as, or as a substitute for, legal counsel and representation is.
I don't understand the "cites a non-existent precedent" bit. Presumably the AI would have a database of a pile of precedent. It wouldn't make up cites. It would have "knowledge" of so much precedent, it could likely find something to win either side of the argument.
I think you're misunderstanding how the model works. It predicts next tokens based on past tokens and the LLM trained on large bodies of text. It doesn't have an underlying database of "factual" elements it incorporates or searches, and its output doesn't have an underlying semantic structure that can be verified or reasoned about. The entirety of the quality of its output can only be judged by whether it "sounds" like the rest of the text on which it was trained.
I think making the connection between the predictive output and an underlying representation of reality is the next great step, but until that happens, chatGPT's output is just amazing mimicry of human language.
>That said, if it's such a catastrophically stupid idea, I'm not really sure why it had to be shot down so harshly
The title of the article seems misleading.
A techbro who doesn't appear to be a lawyer or to have any understanding of the law wants to use AI so people can defend themselves. It doesn't seem like any of this was done with input from any bar associations. Without seeing the emails and "threats", and ignoring the emotional language, it sounds like these people were helping him out:
>"In particular, Browder said one state bar official noted that the unauthorized practice of law is a misdemeanor in some states punishable up to six months in county jail."
Were these emails "angry" or just stating very plainly and with forceful language, that if you do this without the AI having the appropriate qualifications, you are most probably going to jail?
It even sounds like Browder didn't really widely publicise the fact that a case defended by an AI was about to happen.
>As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.
Really sounds like these letter writers did him a favour.
> That said, if it's such a catastrophically stupid idea, I'm not really sure why it had to be shot down so harshly
To avoid the catastrophe that makes it a catastrophically bad idea.
> I assume the real reason it was shot down was out of fear that it would work well. Does anyone else have a better explanation for why there was such a visceral response?
It had already worked badly (subpoenaing the key adverse witness, who would provide a basically automatic defense win, and one of the most common wins for this kind of case, if they failed to show up.)
DoNotPay exists because AI vaporware is the new crypto vaporware, which was the new IoT vaporware, which was the new Web2 vaporware, and so on. Build a "product" on AI and you get (in this case) $28 million in funding. Pull stunts like this to generate a little buzz for the next round of funding. Then bail out with your golden parachute. Now you have experience founding a startup - do it again for $50 million.
This is the obvious point: they fear it would work well and they will have to slowly say goodbye to their extremely well-paid profession.
We are so close to a new disruptive revolution where a lot of jobs (not just lawyers) will be made obsolete. Possibly similar to inventions like assembly lines, or cars. Such an exciting time to be alive!
1) The legal profession tries to instill a sense of ethics into the lawyers they train, and what DoNotPay was proposing violated that ethics. I don't want to overhype the legal profession, but many (maybe even most) lawyers really do want to do the right thing, which by their lights means clients get the best representation possible. Which an LLM currently is not, so you get a visceral reaction from people at what they see as discussion of/advocacy of unethical conduct.
2) As a practical matter, it was likely to yield very bad outcomes for all concerned, including professional and legal consequences. Bluntly, DoNotPay was proposing to do something illegal; it really could have resulted in jail time, and possible disbarment for any lawyers who were involved. For good or ill, judges have immense power to control what goes on in their courtrooms, and the risk of a judge taking offense to this is high. And the higher profile the case (and DoNotPay was offering $1m for a lawyer who would repeat whatever the chatbot said in a Supreme Court case) the higher the stakes. That really could be a career ending mistake for a junior lawyer (plus, almost by definition, anything that reaches the Supreme Court is important; bad representation could throw the result, with potentially terrible consequences for the country).
3) This is not, by any means, the first attempt to try and automate or streamline the provision of basic legal services. And it's a field ripe for such things; simple stuff like many rental agreements, employment contracts, suing in small claims court, divorce agreements, etc., etc. all seem like they should be able to be generated by filling out a form and pressing a button, not hiring a high priced professional to craft a bespoke document. But over and over, such attempts have failed badly, and lawyers are getting reflexively defensive to anything that looks like that, not because they threaten their jobs, but because they keep being so terrible.
4) And DoNotPay looks a lot like these previous failures. See, eg, https://www.techdirt.com/2023/01/24/the-worlds-first-robot-l... which makes them look less like some cutting edge AI lawyer that will put an army of paralegals out of work, and more like yet another shitty site trying to resell Mechanical Turk at an enormous markup. Now, maybe what they're showing to the public is entirely different than what they want to use in courtrooms, but...it doesn't build confidence.
In many ways I think this is analogous to self driving cars. There is scope for enormous gains here, it's probably inevitable eventually, and it will probably put a lot of delivery drivers, taxi drivers, and truck drivers out of a job when it finally arrives. But right now deciding to hook your new self driving AI up to an unmarked truck and do a secret test in an unnamed US city without warning anyone or getting any licenses or permits is unethical and illegal, and you'd expect a very negative reaction if you announce your plans on Twitter. Especially if your website offers some software that they claim can control an RC car, but when you try it, it keeps steering into walls.
> I assume the real reason it was shot down was out of fear that it would work well
The next lawyer (or AI expert!) I see who thinks it would work well will be the first. There's just an enormous mismatch between the requirements, what chatbots in general seem to be capable of right now, and what DoNotPay in specific seems to be capable of. (Again, see the Tech Dirt link.)
And again, note that while there's a pretty strong argument that it should be faster, easier, and cheaper to create a rental agreement (or whatever), DoNotPay was making a big deal out of how they wanted to argue in front of the Supreme Court. DoNotPay doesn't seem like they're going to nail the "draft a rental agreement for cheap" any time soon, but maybe they (or someone else) can. But handling a Supreme Court case? Obviously not. Now you might argue (and one certainly hopes!) the whole Supreme Court thing was just an unserious PR stunt, but when a company is making undifferentiated claims that their tech can do something semi-plausible and something entirely impossible, it makes the entire thing look like a scam created by or aimed at people who just don't know any better. Not a good look!
In short: I think the negative reaction is very, very understandable.
Browder (founder) appeared to also acknowledge that it was not fit for purpose as well [0].
If something that's providing input to a formal legal process (which, let's not forget, means false or inaccurate statements have real and potentially prejudicial repercussions), "makes facts up and exaggerates", then there seems to be no reason they should be talking about taking this anywhere near a courthouse.
This feels a lot like "move fast and break things" being applied - where the people silly enough to use this tool and say whatever it came up with would end up with more serious legal issues. It seems like that only stopped when the founder himself was the one facing the serious legal issues - 'good enough for thee, but not for me'...
I think what many are overlooking is that bad inputs to the legal system can jeopardise someone's position in future irretrievably, with little or no recourse (due to his class action/arbitration waiver). Once someone starts down the road of legal action, there's real consequences if you get it wrong - not only through exposure, but also through prejudicing your own position and making it impossible to take a different route, having previously argued something.
I think you’re right here and it’s the same reason I see AI as a tool in the software profession. You can use it to speed up your work, but you have to have someone fully trained who can tell the difference between looks good but is wrong, versus is actually usable.
I’ve been using copilot for half a year now and it’s helpful, but often wrong. I carefully verify anything it gives me before using it. I’ve had one bug that made it to production where I think copilot inserted code that I didn’t notice and which slipped past code review. I’ve had countless garbage suggestions I ignored, and a surprising amount of code that seemed reasonable but was subtly broken.
This will still require a human lawyer (and/or intern, depending on the stakes) to check its output carefully. I am not now, nor have I ever been afraid that AI is coming after my job. When it does, we’re dangerously close to general AI and a paradigm shift of such magnitude and consequence that it’s called The Singularity. At which point we may have bigger worries than jobs.
> I’ve had one bug that made it to production where I think copilot inserted code that I didn’t notice and which slipped past code review
I'm not saying this is good but come on. Humans do that all the time, why aren't we so harsh on humans?
> I am not now, nor have I ever been afraid that AI is coming after my job
I am. This thing amazed me, and even if it won't be able to 100% replace humans (which I doubt), it can make juniors orders of magnitude more productive for example. This will be a complete disruption of the industry and doesn't bode well for salaries.
I'm in the top x% of my profession, after 20 years of grinding. I'm unafraid. I don't see my salary taking a cut anytime soon. When I was searching for a job back in March, I had a 50% offer to response rate (if the company responded to my application.)
People who are just skating by may have cause for concern. But those are the people with the most to gain from it, so maybe not even them.
Demand is so high in the business, I have trouble imagining that any tool could make a meaningful impact on that.
It would need to multiply productivity by a huge number, and nothing has that impact. Copilot is barely, optimistically, above 10%. I don't really think I get that much, but just for argument's sake.
Broken window fallacy. Software is so bad that AI making us more productive won't put us out of work; it will make software suck a bit less. It will make bugfixes and tech-debt payback more affordable.
> I’ve been using copilot for half a year now and it’s helpful, but often wrong.
I wonder if that is because of the training set; we humans are often wrong, or do things differently. Given a room of programmers asked to implement a Fibonacci algorithm, would they all get it right, and would they all do it via iteration, recursion, or dynamic programming? Copilot might not replace you, but it just needs to replace some of those programmers. Then add tools like automated AI reviews or integration tests, and now you've removed another population of tech workers.
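For example, here's a sketch of two equally valid answers that room of programmers might hand back (trivial, but it illustrates that "correct" doesn't mean "identical"):

    def fib_recursive(n: int) -> int:
        """Naive recursion: correct, but exponential time for large n."""
        return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_iterative(n: int) -> int:
        """Iteration: same answers, linear time."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Same results, very different shapes of solution.
    assert all(fib_recursive(i) == fib_iterative(i) for i in range(15))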
I am not sure whether that is cause for alarm, or whether such improvements could be rather beneficial. Some tools will replace people, some will be assistive, and as they improve and other layers are added they will reduce the need for people in some areas, improving efficiency and productivity. Robots in manufacturing, for example, improve productivity and reduce human labor.
And this leads on to the point that these tools have gone from narrow to general in a very short period of time. The cost factors have also massively reduced: if you could pay OpenAI $10 a month to make 1000 mistakes and still deliver code, or a human $120k a year to do the same, which one would you target? AI might not be coming for your job soon, but it will be taking away your options to get a job. This is not unique to AI; it is the basis for any technological improvement vs labor. Yes, new ideas and opportunities may come out of this, but I don't think they will be equal in volume.
Yes! I think the legal system would and should look differently at a tool like this in the ear of a licensed lawyer, and AI tools will be invaluable for legal research. I just don't think the output of an AI should be fed directly into a non-lawyer's ear, any more than I think a non-programmer should try to build a startup with ChatGPT as technical co-founder.
What's interesting is that sometimes it does a great job at something like telling you the holdings of a case, but then other times it gives you a completely incorrect response. If you ask it for things like "the X factor test from Johnson v. Smith" sometimes it will dutifully report the correct test in bullets, but other times will say the completely wrong thing.
The issue I think is that it's pulling from too many sources. There are plenty of sources that are pretty machine readable that will give it good answers. There's a lot of training that can be eked out from the legal databases that already exist that could make it a lot better. If it takes in too much information from too many sources, it tends to get garbled.
There are also a lot of areas where it will confuse concepts from different areas of law, like mixing up criminal battery with civil battery, but that's not the worst of the problems.
> The issue I think is that it's pulling from too many sources. There are plenty of sources that are pretty machine readable that will give it good answers. There's a lot of training that can be eked out from the legal databases that already exist that could make it a lot better. If it takes in too much information from too many sources, it tends to get garbled.
No, this is a common misunderstanding about the way these things work. A LLM is not really pulling from any sources specifically. It has no concept of a source. It has a bunch of weights that were trained to predict the next likely word, and those weights were tuned by feeding in a large amount of text from the internet.
Improving the quality of the sources used to train the weights would likely help, but would not solve the fundamental problem that this isn't actually a lossless knowledge compression algorithm. It's a statistical machine designed to guess the next word. That makes it fundamentally non-deterministic and unsuitable for any task where factual correctness matters (and there's no knowledgeable human in the loop to issue corrections).
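To make that concrete, here's a stripped-down sketch of the sampling loop using a small open model via Hugging Face's transformers library (GPT-2 as a stand-in, since ChatGPT's weights aren't public; the prompt is made up). Note there is no retrieval or lookup step anywhere:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # GPT-2 purely as an illustrative stand-in for a much larger model.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    input_ids = tokenizer("The Supreme Court held in", return_tensors="pt").input_ids
    for _ in range(30):
        logits = model(input_ids).logits[:, -1, :]          # scores for the next token only
        probs = torch.softmax(logits, dim=-1)                # scores -> probabilities
        next_id = torch.multinomial(probs, num_samples=1)    # sample one token
        input_ids = torch.cat([input_ids, next_id], dim=-1)  # append and repeat

    print(tokenizer.decode(input_ids[0]))
    # Whatever comes out will *look* like a citation-bearing sentence,
    # whether or not the case it names actually exists.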
One useful way to think of language models is that they are statistical completion engines. The model attempts to create a completion of the prompt, and then evaluates the likelihood, in a statistical sense, that the completion would follow the prompt, based on the patterns in the training data.
A citation in legalese is very common. A citation that is similar or identical to actual citations, in similar contexts, is therefore an excellent candidate for the completion. A fake citation that looks like a real citation is also a rather good candidate, and will sometimes squeak past the "is this real or fake?" metric used to evaluate generated potential responses.
This may seem like "pulling from a source" but there is no token, semantic information, or even any information in the model about where and when the citation was encountered. There is no identifiable structure or object (so far as anyone can tell anyway) in the model that is a token related to and containing the citation. It just learns to create fake citations so convincingly, that most of the time they're actually real citations.
This explains some of the particular errors that I've seen when poking and prodding it on complex legal questions and in trying to get it to brief cases.
ChatGPT can provide correct citations because somewhere deep in its weights it does lossily encode real texts and citations to real texts. That makes real citations, in some cases, its most confident guess for what is supposed to come next in the sentence. But when there isn't a real text it is confident about citing, yet it still feels like a citation should be next in the output, it will happily invent realistic-looking citations to texts that have never existed and that it has never seen in any sources. From the outside, as readers, it's hard to tell when this occurs without getting outside confirmation. I'm assuming, though, that to some degree it is itself aware that a linked citation doesn't refer to anything.
A few days ago on HN there was a short story of five paragraphs that started badly & finished OK, and I wondered if some operations research tricks could be applied. One is forward-backward-forward planning, which produces better schedules, and if applied would create a better opening, and perhaps ending (i.e. the model is run three times).
In the case of citations, you really need a language model & a fact model. The language model then passes over to the fact model, then back to the language model. This means double(+) training.
I suppose the fact model could include things like Wolfram (also discussed on HN).
Asking "Can you cite some legal precedence for lemon law cases?" gives an answer containing
"In California, for example, the California Supreme Court in the case of Lemon v. Kurtzman (1941) held that a vehicle which did not meet the manufacturer's express warranty was a "lemon" and the manufacturer was liable for damages."
I don't think that case exists; there is a First Amendment case Lemon v. Kurtzman, 403 U.S. 602 (1971), though.
I can't find any reference to Kurtzman or 1941 in any of the references. I think the answer is that the AI generating the text, and the code supplying the references are distinct and do not interact.
The example you give isn't necessarily a valid one. You're asking for a specific piece of knowable, measurable data -- one that has a single right answer and many wrong answers. Legal questions may have conflicting answers, they may have answers that are correct in one venue but not in another, etc. I haven't yet seen any examples of an AI drawing the distinctions necessary for those situations.
ChatGPT has a lot of trouble understanding jurisdiction and what constitutes controlling precedent over what in which jurisdictions. As in it has no conception of it at all and gets it really badly mixed up. It doesn't understand the hierarchy of any court system so there are some questions that it will just always get wrong.
> If it is never pulling from a source, then why is it able to provide citations?
If you have the training set, and models that summarize text and/or assess similarity, and some basic search engine style tools to reduce or prioritize the problem space, it seems intuitively possible to synthesize probably-credible citations from a draft response without the response being drawn from citations the way a human author would.
Kind of a variant of how plagiarism detection works.
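A rough sketch of that idea (the corpus, the draft, and the similarity scoring below are invented placeholders; a real system would use embeddings over a real citation database, not two toy strings):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Toy corpus: real-looking citations mapped to short summaries of what they hold.
    corpus = {
        "Lemon v. Kurtzman, 403 U.S. 602 (1971)": "Establishment Clause test for state statutes funding religious schools.",
        "Hypothetical Warranty Case, 123 F.3d 456 (9th Cir. 1997)": "Express warranty breach and consumer remedies for a defective vehicle.",
    }

    draft = "The manufacturer breached its express warranty, entitling the buyer to damages."

    # Score the draft against each real document and attach the closest match
    # as the citation, instead of trusting whatever citation the model generated.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(list(corpus.values()) + [draft])
    scores = cosine_similarity(matrix[len(corpus):], matrix[:len(corpus)]).ravel()

    best_score, best_cite = max(zip(scores, corpus.keys()))
    print(f"Closest real citation: {best_cite} (similarity {best_score:.2f})")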
You.com is hugged to death right now, but from what I can see it's a different kind of chatbot. It looks closer to Google's featured snippets than it is to ChatGPT.
That kind of chatbot has different limitations that would make it unsuitable to be an unsupervised legal advice generator.
What if we prompt the LLM to generate a response with citations, and then we have program which looks up the citations in a citation database to validate their correctness? Responses with hallucinated citations are thrown away, and the LLM is asked to try again. Then, we could retrieve the text of the cited article, and get another LLM to opine on whether the article text supports the point it is being cited in favour of. I think a lot of these problems with LLMs could be worked around with a few more moving parts.
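Something like this, roughly (every helper passed in here is a hypothetical stand-in; none of them are existing APIs):

    from typing import Callable, Optional

    def answer_with_verified_citations(
        question: str,
        generate_answer: Callable[[str], str],          # the LLM call
        extract_citations: Callable[[str], list[str]],  # parse cites out of the draft
        citation_exists: Callable[[str], bool],         # lookup in a real citation database
        fetch_text: Callable[[str], str],               # retrieve the cited text
        supports_claim: Callable[[str, str], bool],     # second LLM pass: does the text back the claim?
        max_attempts: int = 3,
    ) -> Optional[str]:
        for _ in range(max_attempts):
            answer = generate_answer(question)
            citations = extract_citations(answer)
            if not citations:
                continue                                # an uncited answer is useless here
            if not all(citation_exists(c) for c in citations):
                continue                                # hallucinated cite: throw it away and retry
            if all(supports_claim(fetch_text(c), answer) for c in citations):
                return answer
        return None                                     # give up rather than return unverified text

The catch, as others note above, is that this only filters out citations that don't exist or don't support the claim; it can't surface the relevant precedent the model never mentioned.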
Definitely, no one is arguing that an AI lawyer is coming in the near future, but I can totally see it being good enough for the vast majority of small-scale lawsuits within 10-20 years.
DoNotPay seems to know very well what they’re doing.
It really doesn’t strike me as true that law requires absolute precision. There are many adjacent (both near and far) arguments that can work in law for any given case, since the interpreter is a human. You just need no silly mistakes that shatter credibility, but that’s very different from “get one thing wrong and the system doesn’t work at all or works in wildly unexpected ways.”
Low end law will be one of the first areas to go due to this tech. DoNotPay actually has already been doing this stuff successfully for a while (not in court proceedings themselves though).
There are also many adjacent algorithms that could solve the same problem, but you still need to execute the algorithm correctly. LLMs are not ready for unsupervised use in any domain outside of curiosity, and what DoNotPay is proposing would be to let one roam free in a courtroom.
I'm not at all opposed to using LLMs in the research and discovery phase. But having a naive defendant with no legal experience parroting an LLM in court is deeply problematic, even if the stakes are low in this particular category of law.
That’s nowhere near analogous because between every working algorithm are massive gulfs of syntactic and semantic failure zones. This is not the case with human language, which is the whole power of both language production and language interpretation.
Is it more problematic than this person 1) not being represented, 2) having to pay exorbitant fees to be represented, or 3) having an extremely overworked and disinterested public defender?
I’m not convinced.
The idea that we need to wait to do this stuff until the people whose profession is under threat give “permission” is dismissible on its face and is exactly why we should be doing this is as quickly as possible. For what it’s worth, I mostly agree with you: I’m doubtful the technology is there yet. But that’s a call for each defendant to make in each new case and so long as they’re of sound mind, they should be free to pick whatever legal counsel they want.
> 1) not being represented, 2) having to pay exorbitant fees to be represented, or 3) having an extremely overworked and disinterested public defender?
You’re leaving off being put in jail for contempt of court, perjuring oneself, making procedural errors that result in an adverse default ruling, racking up fines, et cetera. Bad legal representation is ruinous.
Gee good thing everyone in court gets good representation at reasonable prices eh?
I get that lawyers think their profession is important (it is) and that by and large they're positive for their clients (they are), but there are a lot of people who simply do not have access to any worthwhile representation. I saw Spanish-speaking kids sent to juvenile detention for very minor charges, in proceedings conducted in English with no interpreter and a completely useless public defender. So in my view that is the alternative for many people, not Pretty Decent Representation.
There are people who can stomach the downside risks to push this tech forward for people who cannot stomach the downside risks of the current reality.
Do you know that? How do you know that’s what the model would do? How do you know that the defendant doesn’t have an 80 IQ and an elementary school grasp on English? Do you think this doesn’t happen today and that these people don’t get absolutely dragged by the system?
We know that subpoenaing [possibly] no-show cops is what the model will do because that is what the CEO says the model did in the run up to this particular case.
Someone with an 80 IQ and an elementary school grasp of English is going to get absolutely dragged by the system with or without a "robot lawyer" if they insist on fighting without competent representation, but they'd probably still stand a better chance of getting off a fine on a technicality if they weren't paying a VC-backed startup to write a letter making sure the cops turned up to testify against them.
They'd also be more likely not to get absolutely dragged if they listened to a human who told them not to bother than to a signup flow that encouraged them to purchase further legal representation to pursue the case.
Last time I checked with a lawyer about a traffic ticket, he told me that it wasn't worth his time to go to court (this was the school-provided free legal service, and the case was just a burnt headlight that I didn't have verified as fixed within a week; you decide if he should have gone to court with me, or if he would have for a more complex case), but I was instructed how to present my case. I got my fine reduced at least, which was important as a student paying my own way (I'm one of the last to pay for college just by working jobs between classes, so this was important).
Yeah, that's because he's an attorney that gets paid a lot. Go to traffic court and you'll find the ones that don't get paid a lot. That's why they are hanging out in traffic court representing litigants there.
I mean, not contesting the ticket is likely to be a better option than delegating your chances of not being convicted of contempt of court or perjury to the truthfulness and legal understanding of an LLM...
Sure if your objective is to minimize your own personal exposure. If your goal is to push toward a world where poor folks aren’t coerced into not contesting their tickets because they can’t afford to go to court or get representation, then maybe it is a good option.
I prefer a world in which people pay a small fine or make their own excuses rather than pay the fine money to a VC-backed startup for access to an LLM to mount a legally unsound defence that ends up getting them into a lot more trouble, yes.
If your goal is to ensure poor folks are pushed towards paying an utterly unaccountable service further fees to escalate legal cases they don't have the knowledge to win, so the LPs of Andreessen Horowitz get that bit more growth in their portfolio, I can see how you would think differently.
> goal is to push toward a world where poor folks aren’t coerced into not contesting their tickets
Invest in a company doing this properly and not pushing half-baked frauds into the world. Supervised learning, mock trials. You’re proposing turning those poor folk into Guinea pigs, gambling with their freedom from afar.
This company has been doing this stuff for years. Yes this is a big step forward but it’s not from zero, as you’re suggesting. What makes you think they haven’t been doing mock trials and tons of supervised learning?
And no, I’m not. I don’t think Defendant Zero (or one, or two, or 100) should be people whose lives would be seriously affected by errors. I’m pretty sure DNP doesn’t either.
> What makes you think they haven’t been doing mock trials and tons of supervised learning?
The CEO tweeting they fucked up a default judgement [1]. That not only communicates model failure, but also lack of domain expertise and controls at the organisational level.
This is a false dichotomy you are constructing to justify a narrative that is otherwise completely made-up nonsense, best described as legal malpractice.
Alternative spin on the "know very well what they're doing": they know very well that it's unlicensed practice of law and they'd have to withdraw from the case.
But doing so generates lots of publicity for their online wizards that send out boilerplate legal letters.
The CEO tweeted about the system subpoenaing the traffic cop. If they actually built a system so advanced it can handle a court case in real time, yet so ignorant of the basics of fighting a traffic ticket that it subpoenas the traffic cop, that's a very odd approach to product management for the flagship product of a legal specialist, and a bit scary to think anyone would use it. It's easier to claim your flagship system does stuff it shouldn't be doing if it's just vaporware and you haven't put much thought into what it should do.
Their track record? Seems like this is the first you’re hearing of them, but this is just the latest (and yes, most ambitious) experiment. They’ve been successfully using technology to help normal people defeat abusive systems built by “the professions” for years.
So if a chat program can pass the bar exam, it's okay? Because I would bet that if a program can represent someone semi-competently in court, passing the bar exam, which has objective marking criteria, would be trivial by comparison.
Cart before the horse... you have to pass the bar before you get the chance to represent someone semi-competently in court. Generally, lawyers have 5 years of experience before they are considered competent enough to semi-competently represent someone in court.
Most states also require a law degree in addition to passing the Bar.
But a fun fact is that magistrates generally aren't required to pass the Bar, nor hold a law degree. Most states require extremely basic courses of 40 or so hours of training. I know of a magistrate that has tried numerous times to pass the Bar and has failed. I'm not sure how much competence our system mandates.
If you make a ridiculous argument using confabulated case law as a lawyer, you can be subject to discipline by the state bar and even lose your law license. The legal system's time and attention is not free and unlimited and that's why you need a license to practice law.
The judges and so forth don't want to deal with a bunch of people talking nonsense. Who is the lawyer who is putting their reputation on the line for the AI's argument? The people doing this want to say nobody is at fault for the obviously bogus arguments it's going to spout. That's why it's unacceptable.
Well, the problem is that the defendant has a right to competent representation, and ineffective assistance of counsel fails to fulfill that right.
(Your hypothetical includes a fine, so it isn't clear whether the offense in your hypothetical is one with, shall we say, enhanced sixth amendment protections under Gideon and progeny, or even one involving a criminal offense rather than a simple civil infraction, but...) in many cases lack of a competent attorney is considered structural error, meaning automatic reversal.
In practice, that means that judges (who are trying to prevent their decisions from being overturned) will gently correct defense counsel and guide them toward competence, something that frustrated me when I was a prosecutor but which the system relies upon.
Seems like the solution is clear then. If the judge gently corrects defense counsel and guides them towards competence, they can just do the same with AI. Then the company can use that data to improve it! Eventually it will be perfect with all the free labor from those judges.
>Judge: that case does not exist. Ask it about this case instead
>AI: I apologize for the mistake, that case is more applicable. blah blah blah. Hallucinates an incorrect conclusion and cites another hallucinated case to support it.
>Judge: The actual conclusion to the case was this, and that other case also does not exist.
Isn't that the same thing? Seems fine to me, I know the legal system is already pretty overwhelmed but eventually it might get so good everyone could be adequately represented by a public defender.
Speaking of, I remember reading that most poor people can only see the free lawyer they've been assigned for a couple of minutes, and that the lawyer barely reviews the case? I don't understand how that is okay, as long as technically they're competent, even if the lack of time makes them pretty ineffective...
Ehhh... the judge's patience for that kind of thing is not unlimited. At some point they're going to reopen the inquiry about counsel (there's also the issue that an AI probably can't be your counsel of record, since it hasn't passed the bar; more likely the court would view it as you representing yourself with the assistance of research software).
BREAK BREAK
My jurisdiction (the military justice system) is a bit of an oddball, but I generally (softly) disagree with your last paragraph.
In jurisdictions with good criminal justice systems, most cases don't take that long to review. Possession of marijuana case, the officer stopped you for a broken taillight, smelled or claimed to smell marijuana, asked you if it was okay if he takes a look. You said okay. He finds a tiny amount of a green leafy substance in a small plastic bag. He says, "Hey look, marijuana is not that big a deal but lots of this stuff is laced with other things. This is just marijuana, right?" and you respond "Yes, sir, just marijuana, I don't mess with any real drugs." Prosecution is offering a diversion deal with probation and no permanent criminal record.
The correct answer in that case is to take the deal. We are not going to win by arguing that Raich was wrongly decided or that the officer lied about smelling it (because we're probably talking state court, so Raich doesn't matter, and you consented to the search, so the pretext, even if it was pretextual, also doesn't matter). We also aren't going to win attacking the chain of evidence, because the drug lab results don't matter, because you admitted it's marijuana.
In that case, yes, I'm going to take all of about 4 minutes to strongly advise the client to take the deal.
Oh, that's actually a relief that the reason they take so little time with clients before telling them to take the deal is simply because the cases are generally clear cut. Although like many things, I'll bet this can vary by region significantly.
The rest of my comment I dropped the '/s' for. I think it's wild some people think current LLMs can replace lawyers... The absolute best I would think they could currently do is maybe speed up research for paralegals. I was just imagining expecting a judge to QA a private company's software and thought it was really funny.
A huge, huge number of cases are extremely cut and dry. Probably 80+% of the misdemeanor docket is drugs and traffic. You know what evidence is required to prove you were driving while your license was suspended? That you were driving (police cam) and that your license was suspended (certified DMV record).
I know it sounds scary when stats get thrown around like "Over 90% of defendants plead guilty without a trial!" but that's usually pursuant to a generous deal from an overworked prosecutor's office who absolutely does not want to do a freaking jury trial because you were desperate to get to work and couldn't get a ride.
Obviously, if the underlying reason your license was suspended is extreme (vehicular manslaughter, 3rd+ DUI) then their tune will probably change, but that's an extreme minority of cases. Those are the ones that go to trial.
"Counsel, I'm unfamiliar with the case you've cited. Have you brought a copy for the court? No? How about a bench brief? Very well. I am going to excuse the panel for lunch and place the court in recess. Members of the jury, please return at 1:00. Counsel, please arrive with bench briefs including printouts of major cases at 12:30. Court stands in recess." bang
That's not how the legal system works. You aren't slipping anything through. Either the judge knows the case or, since judges don't know every case, the judge or their clerks will research it, and you will be sanctioned if you try to do something unethical.
IANAL, but I'd think in this case this is prosecutor's job.
Also, the original post is about the traffic ticket. I'm pretty sure if the judge hears a reference to something he had never heard before, he'll be like "huh? wtf?"
If this is the case, the lawyers should have nothing to fear, and the plaintiff nothing to lose but a parking ticket. I say we stop arguing and run the experiment.
As noted by several lawyers when some of the details of this experiment were revealed: The AI already committed a cardinal sin of traffic court in that it subpoenaed the ticketing officer.
Rule 1 of traffic court: If the state's witness (the ticketing officer) doesn't show, defendant gets off. You do not subpoena that witness, thereby ensuring they show up.
If the AI or its handlers cannot be trusted to pregame the court appearance even remotely well then no way in hell should it be trusted with the actual trial.
You want to run this experiment? Great, lets setup a mock court with a real judge and observing lawyers and run through it. But don't waste a real court's time or some poor bastard's money by trying it in the real world first.
A reminder that "Move fast and break things" should never apply where user's life or liberty is at stake.
While I agree in absolute terms, in legal terms it is problematic because it sets precedent, which is what the law often runs off of. Better to not breach that line until we're sure it can perform in all circumstances, or rules have been established which clearly delineate where AI assistance/lawyering is allowed and where it isn't.
> noted by several lawyers when some of the details of this experiment were revealed: The AI already committed a cardinal sin of traffic court in that it subpoenaed the ticketing officer
Mike Dunford is a practicing attorney. The embedded tweet is of a non-lawyer who screenshotted Joshua Browder (DoNotPay CEO) saying the subpoena had been sent. He has since deleted those tweets as DNP backs away from this plan.
Non-lawyers aren't banned from giving legal advice because lawyers are trying to protect their jobs, they're banned from giving legal advice because they're likely to be bad at it, and the people who take their advice are likely to be hurt.
Yes, in this case, it would just be a parking ticket, but the legal system runs on precedent and it's safer to hold a strict line than to create a fuzzy "well, it depends on how much is at stake" line. If we know that ChatGPT is not equipped to give legal advice in the general case, there's no reason to allow a company to sell it as a stand-in for a lawyer.
(I would feel differently about a defendant deciding to use the free ChatGPT in this way, because they would be deliberately ignoring the warnings ChatGPT gives. It's the fact that someone decided to make selling AI legal advice into a business model that makes it troubling.)
>> Non-lawyers aren't banned from giving legal advice because lawyers are trying to protect their jobs, they're banned from giving legal advice because they're likely to be bad at it, and the people who take their advice are likely to be hurt.
But why would the opposing side's lawyers care about this? They presumably want their client to win the lawsuit.
I only have immediate knowledge of UK law, but lawyers will generally have a duty to the court to act with independence in the interests of justice. This tends to mean that in situations where one side are self-represented or using the services of ChatGPT, etc. the opposing side is under a duty not to take unfair advantage of the fact that one side is not legally trained.
They don't have to help them, but they can't act abusively by, for example, exploiting lack of procedural knowledge.
If they deliberately took advantage of one side using ChatGPT and getting it wrong because the legal foundation of knowledge isn't there for that person, that could be a breach of their duty to the court and result in professional censure or other regulatory consequences.
When did the opposing side's lawyers say anything about this? Are you confused? Law is a regulated profession. The lawyers pointing out that this is illegal aren't on the other side of the case...
Well, it is supposed to be a justice system, and not a game. While it is very easy to forget that, and many of the participants in it clearly don't behave as such, the outcome of it should be just.
Ultimately though the argument you have set up here seems to make it all but impossible for AI to displace humans in the legal profession. If the argument is "precedent rules" then "only humans can be lawyers" is precedent.
I'm not sure if this particular case with this particular technology made sense - but I do think we need to encourage AI penetration of the legal profession, in a way that has minimal downside risk. (For defendants and plaintiffs, not lawyers.) It would be hugely beneficial for society if access to good legal advice was made extremely cheap.
No, if in a hypothetical future we have technology that is capable of reliably performing the role, I don't have a problem with it. This tech is explicitly founded on LLMs, which have major inherent weaknesses that make them unsuitable.
They are not scared that it will fail. They are scared that it will succeed. And there's a great reason to allow a company to sell a stand-in for a lawyer. Cost. This isn't targeted at people who can afford lawyers, it's targeted at people who can't, for now at least.
It's naive to think that a company would develop an AI capable of beating a lawyer in court and then sell it cheaply to poor people to beat traffic tickets. If anyone ever manages to develop an AI that is actually capable of replacing a lawyer, it will be priced way, way out of reach of those people. It will be sold to giant corporations so that they can spend $500k on licence fees rather than $1 million on legal fees. (And unless those corporations can get indemnities from the software vendor backed by personal guarantees they'd still be getting a raw deal.)
These people are being sold snake oil. Cheap snake oil, maybe, but snake oil nonetheless.
Lawyers aren't scared at all. It's traffic court, you are really overstating things. If it was a serious case, it'd be even more ridiculous to put more on the line by being represented by a computer algorithm that isn't subject to any of the licensing standards of an atty, none of the repercussions, and being run by a business that is disclaiming all liability for their conduct.
You know what an attorney can't do? Disclaim malpractice liability!
It'd be wondrous if the esteemed minds of hackernews could put their brain cycles towards actually applying common sense rather than jerking off to edgy narratives about disruption while completely disregarding the relevant facts to focus on what they find politically juicy ("lawyers are scared it will succeed"). It's a tautological narrative you are weaving for yourself that completely skirts past all the principles underlying the legal profession and its development over hundreds of years.
Considering it's so bad that it came to people's attention by sending a subpoena to make sure someone showed up to testify against its client, when he might have had a default judgement in his favour if they hadn't, I think the people who can't afford the lawyers have a lot more to be scared of than the lawyers...
And the reason lawyers are expensive is because bad legal advice usually costs far more in the long run.
>They are not scared that it will fail. They are scared that it will succeed.
Not really. There are more lawyers than legal jobs. A lot of lawyers are toiling for well under 100k a year. People pay 1500 dollars an hour for some lawyers and 150 an hour for others due to perceived (and actual) quality differences. Adding a bunch more non-lawyers isn't going to impact the demand for the 1500 dollars an hour lawyers.
Legal work is expensive because ANY sort of bespoke professional work is expensive. Imagine if software developers had to customize their work for each customer.
> are not scared that it will fail. They are scared that it will succeed
Lawyers make heavy use of automated document sifting in e.g. e-discovery.
Junior lawyers are expensive. Tech that makes them superfluous is a boon to partners. When we toss the village drunk from the bar, it isn’t because we’re scared they’ll drink all the booze.
We can certainly run the experiment, just like we can let a kid touch a hot pan on the stove.
Like the kid, the experiment is not to add knowledge to the world. Every adult knows touching a hot pan results in a burn. Just like everyone who understands how current LLMs work knows that it will fail at being a lawyer.
Instead the point of such an experiment is to train the experimenter. The kid learns not to touch pans on the stove.
In this case it’s not fair to metaphorically burn desperate legal defendants so that the leaders and investors in an LLM lawyer company learn their lesson. It’s the same reason we don’t let companies “experiment” with using random substances to treat cancer.
I mean, why not run it as an experiment? Fake Parking ticket, fake defendant, pay a judge to do the fake presiding. If the actual goal was to test it, it would be trivially easy to do. The goal here wasn't to test it, it was to get publicity.
Exactly. I asked it for books on Hong Kong history and it spit out five complete fabrications. The titles were plausible and authors were real people, but none of them had written the books listed.
That case, to me, is indicative of a larger problem - it's 75 pages of arcane justifications, and yet I already knew how all of the justices had voted just from reading the premise, because like every Supreme Court case in a politicized area it was decided by personal conviction and the rest is post-hoc rationalization.
There is no hallucination on the part of the humans involved, only intellectual dishonesty.
We are unique in the universe but not important to its existence. Our automated inference technologies are accurately representing us; I read half a dozen STEM papers last year, even more during lockdown. Comma splices, grammatical errors everywhere. ChatGPT is us in the aggregate.
Even the geniuses of our species are imperfect and hallucinate being better than they are given their accomplishments relative to the laymen.
The court of law is itself an ephemeral hallucination which fails all the time; given the number of people proven innocent, analyses have suggested up to 25% or more may be incarcerated incorrectly. Drug laws are just one instance of humans hallucinating the correct application of courts. YouTube broke a while back when its AI got hung up on circular logic in a debate about copyright (easily googled).
The burden of proof of “correctness” is on humans to prove their society is not merely a titillating hallucination.
We made computing machines before we had all the abstract semantics to describe them. Do those semantics mean anything to the production of computing machines, or are they just a jargon bubble for a minority to memorize and capitalize on relative to those who have no idea wtf they're talking about?
LOL - you gotta be kidding. In software, we strive for that - by running tests. In the law, there are no tests.
Not saying that absolute precision isn't required. I know lots of cases where an extra comma, a wrong date, or a signature from the wrong person has cost someone tens of millions of dollars. I would argue that AI-based tools could prevent such HUMAN mistakes.
> I would argue that AI-based tools could prevent such HUMAN mistakes.
Still you won’t find a useful/sober lawyer who would argue a case in front of a judge based on a made up precedent which never existed in reality.
Making a mistake as you put it, in humans, is quite different than “hallucinations” of an LLM. The practical AI tool that is good for preventing such human mistakes (precisely) doesn’t exist _yet_.
I agree in many cases, except traffic court isn’t real court. It’s mostly procedural. Hell in my state, in many cases the judge is a person who wins an election and takes a 20 hour course, and prosecutions are conducted by police officers.
Lawyers like to wax about the interests of justice. Reality is for crap like this, it’s a revenue funnel where there’s a pecking order of people who get breaks. Some small fraction have an actual legal dispute of facts.
I’d argue that you could best serve the interests of justice by bringing the whole process online. You’d eliminate the “friends and family” discount and have a real tribunal process for cases that need it, instead of the cattle call and mockery that these courts are.
NYC actually does a decent job with this by outsourcing the whole thing to DMV and administrative law judges on a tight leash. It’s mostly justice by flowchart. That pushes bullshit down to the police.
>>The legal profession is perhaps the closest one to computer programming
There are a ton of people using ChatGPT for programming... so much so that I wonder if we will have a skills crisis as people forget how to write code.
Sysadmin circles, for example, have tons of people celebrating how they will not have to learn PowerShell now.
ChatGPT has no concept of a model for code, no understanding of syntax or logic or even language keywords. It just pulls info out of its backside that sounds believable, like a lot of people do; it's just better at it. I suspect the immediate future will be resplendent with tales of AI-generated code causing catastrophes.
I use Copilot every day, and it's never written more than one line at a time that didn't need adjusting. If you are letting an AI write code for you without knowing what it does, you should not be in programming. I would probably just fire someone if their response to me was ever "well, the AI wrote that bit of code that deleted our DB, I didn't know what it did".
There’s a lot of people doing a big circle jerk over ChatGPT with wild ideas of singularity and oddly eagerly awaiting the end of white collar work. Whatever. I agree that programmers being obsessed with it can lead to skill atrophy. But, in reality, there are many people that are very technical and are not becoming reliant on these things for coding.
I agree, but it doesn't have to be that way. I've been learning a couple new languages and frameworks lately and found it really accelerates my learning, increases enjoyment, and is good at explaining the silly mistakes that I make.
So it can enhance skills just as much as it can atrophy them.
And I'm okay with some skills atrophying... I hate writing regular expressions, but they're so useful for some situations. It's a shame chatGPT fails so hard at them, otherwise I would be content to never use a regex again.
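For what it's worth, here's the flavor of thing I mean: a dollar-amount pattern that looks trivial but has an easy-to-miss edge case (a toy example of my own, not something ChatGPT produced):

```python
import re

# Looks reasonable: optional thousands separators, optional cents.
AMOUNT = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

text = "The fine was $1,250.00, reduced from $1500."
print(AMOUNT.findall(text))
# ['$1,250.00', '$150'] -- the unseparated amount gets silently truncated
```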
> Defendant (as dictated by AI): The Supreme Court ruled in Johnson v. Smith in 1978...
> Judge: There was no case Johnson v. Smith in 1978.
> Defendant (as dictated by AI): Yes, you are right, there was no case Johnson v. Smith in 1978.
That's hilarious; watch some of the trials on Court TV on YouTube. The trials are as culturally biased as you can get, and those are the ones we get to see. Judges are not some logical Spock, free of influence, politics, and current groupthink, but people who think about their careers, public opinion, and who pays their paycheck. And these are the competent ones.
I remember Judge Judy proclaiming "If it doesn't make sense, it's not true!!!" while screaming at some guy. This is pretty much the level of logic you can expect from a judge.
Maybe that's where we're headed. It looks like it's becoming more and more okay to just make things up to please your tribe. Why shouldn't that seep into the courtroom. I hope this doesn't happen.
Ironically, I recently saw a convo on Twitter where someone was showing off a ChatGPT generated legal argument, and, it had done exactly that, hallucinated a case to cite.
In the court itself there would definitely be no way to trust them right now, but I could see AI being a useful research tool for cases. It could find patterns and suggest cases for someone that is qualified to look further into. No idea how hard it is for lawyers to find relevant cases now, but seems like it could be a tough problem.
Yes, absolutely. As a senior software engineer, Copilot has been invaluable in helping me to do things faster. But having an expert human in the loop is key: someone has to know when the AI did it wrong.
What's so bad about this experiment isn't that they tried to use AI in law, it's that they tried to use AI without a knowledgeable human in the loop.
Yes it's true that LLMs hallucinate facts, but there are ways to control that. Despite the challenges they can spit out perfectly functional code to spec to boot. So for me it's not too much of a stretch to think that it'd do a reasonably good job at defending simple cases.
Yes, you'd do better, but you'd still have a LLM that is designed to predict the most likely next words. It would still hallucinate and invent case law, it would just be even harder for a non-lawyer to recognize the hallucination.
Eh, what if it was trained on all the previous cases ever to have existed? I think it could be pretty good, as long as it detects novelty and treats it as a flag-and-confirm error case.
That's not the point. LLMs work by predicting what text to generate next. It doesn't work by choosing facts, it works by saying the thing that sounds the most appropriate. That's why it's so confidently wrong. No amount of training will eliminate this problem: it's an issue with the architecture of LLMs today.
You could layer another system on top of the LLM generations that attempts to look up cases referenced and discards the response if they don't exist, but that only solves that particular failure mode.
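That kind of citation gate is at least mechanically simple. A toy sketch in Python, where a small hard-coded set stands in for the lookup (a real system would query an actual citator or legal database, which I'm only assuming exists in a queryable form):

```python
import re

# Stand-in for a real legal database lookup; purely illustrative.
KNOWN_CASES = {
    "Gideon v. Wainwright",
    "Gonzales v. Raich",
}

CASE_PATTERN = re.compile(
    r"\b([A-Z][\w.]+(?: [A-Z][\w.]+)*) v\. ([A-Z][\w.]+(?: [A-Z][\w.]+)*)"
)

def cited_cases(text: str) -> set:
    """Extract anything that looks like a 'Foo v. Bar' citation."""
    return {f"{a} v. {b}" for a, b in CASE_PATTERN.findall(text)}

def passes_citation_check(llm_output: str) -> bool:
    """Reject the generation outright if any cited case can't be found."""
    return all(case in KNOWN_CASES for case in cited_cases(llm_output))

draft = "As the Court held in Johnson v. Smith, the stop was unlawful."
if not passes_citation_check(draft):
    print("Discarding draft: it cites at least one case that can't be verified.")
```

Of course that only catches the invented-citation failure mode.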
There are other kinds of failures that will be much harder to detect: arguments that sound right but are logically flawed, lost context due to inability to read body language and tone of voice, and lack of a coherent strategy, to name a few.
All of these things could theoretically be solved individually, but each would require new systems to be added which have their own new failure modes. At our current technological level the problem is intractable, even for seemingly simple cases like this one. A defendant is better off defending themselves with their own preparation than they are relying on modern AI in the heat of the moment.
It's bizarre that anyone who supposedly works in technology even thinks this is realistic. This betrays a large lack of knowledge of technology and a childlike understanding of the legal system.
It fails at determining if a number is prime and provides bogus arguments to such effect. You think it would make sense for this to argue complex legal cases with strategy? This isn’t Go or chess.
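For contrast, the primality question it flubs is a completely deterministic computation, a few lines of checkable code rather than a plausible-sounding argument (a minimal sketch):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division check; no guessing, no 'arguments'."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Unlike a language model's answer, these claims are trivially checkable:
assert is_prime(97)
assert not is_prime(91)  # 91 = 7 * 13
```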
The AI is also _deceptively_ "right". For example, it will cite precedent that has since been superseded.
A non-lawyer representing themselves in a criminal case would overlook that, make a bad/wrong/misinformed argument, and go to jail.
In other fields, it'll lie to you about the thickness of steel pipe required to transport water at a certain pressure, it'll refer to programming libraries that don't exist, and it'll claim something impossible in one breath and happily explain it as fact in the next.
At the same time, all these cases are on the internet somewhere. It wouldn’t be too tricky to make a lawyer gpt that is heavily trained on existing legal documents, and is only allowed to quote verbatim from real sources.
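Mechanically, "only allowed to quote verbatim from real sources" could look something like the toy check below, with a tiny stand-in corpus in place of an actual legal document store (this is my own guess at the wiring, not a description of any existing product):

```python
import re

# Stand-in corpus of trusted source documents (statutes, published
# opinions, ...) keyed by citation; purely illustrative.
CORPUS = {
    "State v. Example (2019)": "The officer must articulate specific facts "
                               "supporting reasonable suspicion.",
}

def quotes_are_verbatim(generated: str) -> bool:
    """Accept a generation only if every quoted span appears word-for-word
    in at least one trusted source document."""
    quoted_spans = re.findall(r'"([^"]+)"', generated)
    return all(
        any(span in doc for doc in CORPUS.values())
        for span in quoted_spans
    )

draft = 'Under State v. Example, "the officer may rely on a hunch alone."'
print(quotes_are_verbatim(draft))  # False: that sentence appears in no source
```

Verbatim quoting still doesn't stop real quotes from being stitched into a bad argument, but it would at least kill the invented-case failure mode.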
They will pretty soon be able to fact check everything they say when they gain real time internet connectivity.
But for now yeah you're right. A year-two from now this won't be true anymore.
Being connected to the internet may make their information more up to date but they won’t stop being confidently incorrect. These language models are amazing at token prediction but it’s a mistake to believe that they’re actually intelligent.
Give it an inane prompt to better see the difference, like "write me an essay about camels and their value in cybersecurity" which gives us the answer
"The camel is a valuable animal in the cybersecurity field due to its ability to store large amounts of data in its hump. Camels are able to cross vast distances with little water, which makes them ideal for carrying large amounts of data across networks. They are also very sturdy animals, able to withstand harsh conditions and even attacks from cyber criminals. In addition, their long eyelashes protect their eyes from sand and dust, making them perfect for working in dusty environments such as data centers."
Well, it did what you asked it to do: you ordered it to write an essay about camels and their value in cybersecurity. It doesn't understand whether you're joking or not, or what the purpose of the whole thing is.
I asked it this:
Are camels important to cyber security?
Answer:
"No, camels are not typically considered to be important for cyber security. Camels are domesticated mammals that are well-adapted to desert environments and are often used as a means of transportation or for their milk, meat, and hides. Cyber security, on the other hand, is the practice of protecting computer systems, networks, and data from unauthorized access, use, disclosure, disruption, modification, or destruction. The two are not related."
A human would say that’s ridiculous and tell you why. Google would give you a link to Apache Camel. ChatGPT tells you about how great a camels humps are for data storage.
That's preposterous. An intelligent agent, human, mineral, or otherwise, would respond that this is a ridiculous idea and ideally explain the reasons that such is the case. Imagine you are a student who asked this amazing AI a similarly, if mildly, ridiculous question, and in turn imagine the student didn't already know the answer. Would you think this kind of response would be an example of an intelligent AI?
If it cannot deal with such things without being prompted in such a way that the prompter knows the answer already, how could it deal with complex legal situations with actually intelligent adversaries?
This is overly optimistic. For one, fact checking is much harder than you think it is. Aside from that, there are also many additional problems with AI legal representation, such as lack of body language cues, inability to formulate a coherent legal strategy, and bad logical leaps. We're nowhere near to solving those problems.
AI hallucinations are going to be the new database query injection. Saying that real-time, internet-connected fact checking will solve that is every bit as naive as thinking the invention of higher-level database abstractions like an ORM will solve trivially injectable code.
We can't even make live fact checking work with humans at the wheel. Legacy code bases are so prolific and terrible we're staring down the barrel of a second major industry crisis for parsing dates past 2037, but sure, LLMs are totally going to get implemented securely and updated regularly, unlike all the other software in the world.
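To make the ORM/injection analogy above concrete: the safer abstraction has existed for decades, and injectable code still ships whenever someone interpolates untrusted strings anyway. A toy sqlite3 example (my own illustration, not anything from the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (defendant TEXT)")
conn.execute("INSERT INTO cases VALUES ('Smith')")

user_input = "x' OR '1'='1"  # untrusted input

# Naive: the library exists, but string interpolation reintroduces injection.
unsafe = f"SELECT * FROM cases WHERE defendant = '{user_input}'"
print(conn.execute(unsafe).fetchall())               # returns every row

# Parameterized: the input is treated as data, not as query text.
safe = "SELECT * FROM cases WHERE defendant = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```

A fact-checker bolted onto an LLM will get misused in exactly the same way.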
I'd also argue that "hallucination" is, at least in some form, pretty commonplace in courtrooms. Neither lawyers' nor judges' memories are foolproof and eyewitness studies show that humans don't even realise how much stuff their brain makes up on the spot to fill blanks. If nothing else, I expect AI to raise awareness for human flaws in the current system.
That the legal system has flaws isn't a good argument for allowing those flaws to become automated. If we're going to automate a task, we should expect it to be better, not worse or just as bad (at this stage it would definitely be worse).
Good. Totally fine with trying to use AI to give legal advice, but it should be done with a lawyer's license on the line. A company that explicitly disclaims being a law firm and states it is not giving legal advice should also not get to tweet that they are "representing" someone in court.
A good bar (pun intended) for the quality of the tech is if it is good enough that a licensed attorney trusts it to give legal advice with their livelihood at stake. If this product doesn’t work for DoNotPay, they can just walk away and do something else, as they are doing anyways here. If it doesn’t work for a lawyer, they’d get sued for malpractice and possibly disbarred, ruining their career. When someone trusts it to that level, have at it.
No, bad. I may also disagree with some of the tactics DoNotPay used to represent themselves. But in a larger sense, lawyers cost money, a lot of money. It's wonderful living in a time when the cost of filing suits is so low compared to the cost of defending against them.
AI lawyers can help lower those barriers to entry. The court in question is fucking traffic court. Please lower the barrier to entry for defense and allow normal, not-rich folks to get on with their lives.
The costs involved in legal work are so ridiculous that our society basically encourages blindfolding ourselves to most business ethics standards, in the hope that our product commands a high enough margin to pay for whatever legal fuck-ups cost too much to figure out on the front end.
Do you imagine that a pro se litigant is going to have a hard time or will be required to spend small fortunes to defend themselves in "fucking traffic court?"
You're basically trying to cut the argument both ways. Administrative courts are not criminal courts, and an AI would never be allowed near a criminal defense trial for obvious reasons.
Yes because a lawyer costs their hourly rate whether it’s traffic or criminal court. Or more generally as an economy wide trend — paying a human to do something is often the most expensive route you can take.
We’re talking about traffic tickets that are usually in the hundred dollar range. If anyone is going to court rather than just paying it, it’s because $100 is a non-trivial amount of money to them.
It doesn’t have to be AI lawyers but any change to the system that reduces the total amount of work needed to be done by humans is a win.
>Lawyers cost money, a lot of money. AI lawyers can help lower those barriers to entry.
For traffic court, perhaps, but these AI tools don’t seem guaranteed to not make this problem worse at broader scales.
These AIs will also be available to large firms, who are equally incentivized to use AI for augmenting argumentation through existing lawyers, but will also be incentivized to train their own walled garden powerful models in a way that poorer clients still would likely not have access to, and which individuals and smaller firms will not have the resources to train themselves.
These kinds of AI models could very easily serve to entrench and raise the cost of a defense by making it so you not only need a lawyer, you need a lawyer-backed-by-a-firm-with-a-LLM to be competitive at trial - making all existing problems even worse.
>walled garden powerful models in a way that poorer clients still would likely not have access to
Even if that becomes the case, this is some how worse than what we have now?
>raise the cost of a defense
Highly doubtful
> you need a lawyer-backed-by-a-firm-with-a-LLM to be competitive at trial - making all existing problems even worse.
First, most things don't go to trial so you're completely missing the cost savings associated with avoiding trial because of these types of AI assisted scenarios.
Second, the cost associated with training models will go down. The cost of a Harvard law degree has ... never gone down.
None of your objections are contradictory to requiring a real lawyer put their license on the line to vouch for the AI. Structural engineering jobs require a licensed PE to sign off on all designs. That doesn't mean they have to do all the work themselves, just supervise enough and trust their delegates enough to be confident the job is being done correctly. Licensed Electricians are responsible for all the work under their permits. Registered Nurses are largely responsible for all the LPNs working under them. All of these arrangements save money, but still require someone licensed to take responsibility. There is no reason AI couldn't be used in the same manner and save even more money, if done properly.
Yeah, I agree with basically everything you’ve said, but the standard of AI legal advice should be at least the standard of services required by current attorney regulations. They should probably be higher even. It is certainly likely that in the future, technology can provide that level of quality at a massive, extremely cost effective scale, but ChatGPT and DoNotPay is not that.
No direct complaint here, other than some precedent has to be set at some point.
Someone will have to take the initial risk. ChatGPT-like AI may not be "the thing," but for some in this forum to be afraid of an AI defense attorney is completely missing the forest for the trees.
> but it should be done with a lawyer’s license on the line.
Either that, or the client is fully aware they are defending themselves with the help of an AI that's not, and cannot at the moment be, a lawyer. As much as I want to believe AGI is just around the corner, LLMs are not individuals with human-level intelligence.
It's too bad programmers don't have some sort of licensure as well. It would be helpful in keeping humans employed in creating and maintaining code, instead of letting AI run off with all our jobs.
It would help in having some sort of body that says, we want to have humans involved in the chain of responsibility when creating code and not willfully hand over the control to AI.
As the Opening Arguments podcast (one of the two hosts is a lawyer) said: If as a lawyer you do what was asked - just parrot what an AI tells you to parrot, you're going to get sanctioned and possibly disbarred. As a lawyer you are responsible for what you say and argue, and if you argue something that you know to be false, you're in violation of the ethics standards; just about every bar association lists that as sanctionable, or even disbarrable, offense *.
Thus, effectively, the only thing you could do is a watered down concept of the idea: A lawyer that will parrot the ChatGPT answer, but only if said answer is something they would plausibly argue themselves. They'd have to rewrite or disregard anything ChatGPT told them to say that they don't think is solid argument.
They also run a segment where a non-lawyer takes a bar exam. Recently they've also asked the bar exam question to ChatGPT as well. So far ChatGPT got it wrong every time. For example, it doesn't take into account that multiple answers can be correct, in which case you have to pick the most specific answer available. Leading to a somewhat hilarious scenario where ChatGPT picks an answer and then defends its choice in a way that seems to indicate why the answer it picked is obviously the wrong answer.
*) Of course, Alan Dershowitz is now arguing in court that the seditious horse manure he signed his name under and which is now leading to him being sanctioned or possibly disbarred, is not appropriate because he's old and didn't know what he was doing. It's Dershowitz so who knows, but I'm guessing the court is not going to take his argument seriously. In the odd chance that they do, I guess you can just say whatever you want and not be responsible for it, which... would be weird.
The first wave of ChatGPT stories were all "amazing, wonderful, humanity is basically obsolete". But now that people have had a little time, I'm seeing a ton of examples where people are realizing that ChatGPT sounds like it knows what it's talking about, but actually doesn't know anything.
We know that humans are easily fooled by glib confidence; we can all think of politicians who have succeeded that way. But it sounds like ChatGPT's real innovation is something that produces synthetic bullshit. And here I'm using Frankfurt's definition, "speech intended to persuade without regard for truth": https://en.wikipedia.org/wiki/On_Bullshit
ChatGPT does some really impressive stuff! As a creative tool or a code generator it can give you some good material to work with.
But the hype has been insane, totally untethered from reality. I think it's the instinct to assume that something which can mimic human speech quite well must also have some mechanism for comprehending the meaning of the words it's generating, when it really doesn't.
True. And the really interesting question for me is how much those people are also failing to notice this distinction not just with ChatGPT, but with other people. Which sounds absurd at first, but long ago Oliver Sacks gave an example that sticks with me: https://www.junkfoodforthought.com/long/Sacks_Reagan.htm
Your footnote: A judge should take that argument seriously, and therefore disbar him because he's old and self-admittedly doesn't know what he's doing.
The HMCTS held a hackathon for future tech in the UK court system a few years ago. The judges were people like the CEO of the courts; they also had the Lord Chief Justice. There were all sorts of firms like Linklaters, Pinsent Masons and Deloitte. We won with a simple Alexa lawyer meant to help poor rental tenants. It generated documents to send a landlord and possible legal advice. The idea was specifically for people who cannot afford a lawyer. There were a lot of influential people who were very excited about this space, so it is strange that when it actually gets implemented, it's not allowed.
I wonder what the wider implications are for the legal system. Will there be less qualified human lawyers in the future due to the lack of junior roles that are filled by AI? Will lawyers be allowed to use AI to find different ways of looking at issues?
Apart from being different jurisdictions, they are different issues. The situation in the article involved a pro se litigant feeding courtroom proceedings to the AI and regurgitating its responses in real time. In that situation you are effectively handing your agency over to the AI. You can't really be said to be representing yourself in any real sense; you are mindlessly parroting what is fed to you.
The situation you describe seems to be more akin to an advanced search or information portal that people can use to guide their self-representation, or even their decision to engage lawyers/discussions with their lawyers (of course, maybe I'm misunderstanding). That stuff has basically always been allowed; nobody is threatening to prosecute Google because pro se litigants use it in their research. There are plenty of websites out there that discuss tenants' rights. There are even template tenancy agreements available online for free.
Also, what were you proposing to use as the knowledge base for your Alexa lawyer? Were you really planning on using ChatGPT or some other general purpose AI? Or would the knowledge base be carefully curated by qualified professionals? And who would create and maintain it, the state? A regulated firm? Or a startup with a name like "DoNotPay"?
Really good thoughts and treatment of the different issues. The line between “tool” and “agent” is blurry and will probably just keep getting blurrier. But I do think it’s important for our judicial system to ensure that any delegation of representation is to a very qualified third party, for both ethical and process/cost reasons.
I’m not sure the startup’s name is especially germane though. If anything, it seems to fit right in with human lawyers like 1-800-BEAT-DUI.
Privacy seems like it would be a major issue. As a litigant, I would not want the opposing side piping my case information to a third party and having this information used to train the AI for future cases.
AI could be very useful for helping pro se litigants prepare documents. I imagine with this use case, as well as the oral argument use case, judges are also worried about low-quality output wasting the court's time.
Self-representation is frowned upon, "a person with themselves for a client has a fool for a client." But, where in the UK system is it disallowed, unless you are a repeated, "freeman of the land" nonsense spouter?
Off the top of my head: depending on the jurisdiction, there are several important legal documents that you are disallowed from filing yourself, and the judge is allowed to reject you with or without reason.
Aside from the question of whether this plan was legal, DoNotPay seems like a terrible product. The results it generates seem laughably bad, and it’s questionable whether “AI” is actually involved when it takes them literal days to generate a document for certain types of requests.
https://www.techdirt.com/2023/01/24/the-worlds-first-robot-l...
Indeed - and this is not new. Many years ago, I took a look to see what all the fuss was about.
From start to end, he/his product seemed amateurish. From giving out a herokuapp.com subdomain in early press releases (which were republished on major sites), that was then no longer in use (allowing it to be taken over), through to the actual generated output.
When I looked at a letter it generated, it was laughable. The "chat bot" was simply asking questions, and capturing the (verbatim) response, and throwing it straight into the template. No sanity checking, no consistency, etc. There was absolutely no conversational ability in the "chat bot" - it was the equivalent of a "my first program" Hello World app, asking you your name, then greeting you by name.
It wasn't capable of chat, conversation, or comprehension. Anything you entered was blindly copied out into the resulting letter. Seems nothing has changed.
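For a sense of scale, the behaviour described is roughly this (a toy sketch of a question-and-template letter generator; obviously not DoNotPay's actual code):

```python
# Ask questions, drop the verbatim answers into a letter template.
# No sanity checking, no consistency checking, no "chat".
TEMPLATE = (
    "Dear {company},\n\n"
    "I am writing to dispute the following charge: {complaint}\n\n"
    "Sincerely,\n"
    "{name}"
)

def generate_letter() -> str:
    answers = {
        "name": input("What is your name? "),
        "company": input("Which company is this about? "),
        "complaint": input("Describe your complaint: "),
    }
    return TEMPLATE.format(**answers)  # whatever you typed, copied out blindly

if __name__ == "__main__":
    print(generate_letter())
```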
Most of these things seem to be hybrids, humans overseeing automation, with varying degrees of human involvement. Guessing they at least have a review queue for non-boilerplate docs about to go out.
ChatGPT is at capacity right now.
Get notified when we're back.
Write an acrostic poem about the status of ChatGPT.
C: ChatGPT is currently down.
H: Huge demand has caused the site to crash.
A: All users will have to wait.
T: Time is needed for the servers to catch up.
G: Go grab a coffee and check back soon.
P: Patience is key in this situation.
T: Trust that the team is working hard to fix it up.
I don't see the problem as long as the actual lawyer can intervene when necessary.
If ChatGPT did something wrong, that lawyer would still be on the hook for deciding to continue using this tool so responsibility/liability/authenticity is not a problem.
I get that they want to make some kind of subscription service to replace lawyers with AI (a terribly dystopian idea in my opinion, as only the rich would then have access to actual lawyers), but just like Tesla needs someone in the driver's seat to overrule the occasional phantom braking and swerving, you need an actual lawyer for your proof-of-concept cases if you're going to take AI into a new area.
You'd also need a fast typist to feed the record into ChatGPT of course, because you can't just record lawsuits, but anyone with a steno keyboard should be able to keep up with a court room.
> a terribly dystopian idea in my opinion, as only the rich would then have access to actual lawyers
The rich don't go to jail already. The crypto scammer paid a huge bail and got out on his private jet. That, to me, is far more dystopian than a cheap tool to help people appeal traffic tickets.
You're conflating a few different things. Being able to pay bail so you don't have to be in jail while you wait for your trial doesn't get you out of having a trial, and has nothing to do with needing lawyers.
What the parent was referring to is the fact that if AI starts to consume the low-end (starting with traffic tickets), actual lawyers for trials will become even more expensive, and thus poorer people will actually fare worse because they will lose their already-limited access to human lawyers. Yes, their case might get handled with less hassle and cheaper, but the quality of the service is not -better-, it's just cheaper/easier.
Or maybe we only end up using lawyers when they're actually needed, and they become less costly for things like criminal trials. Think on the doctor whose routine cold and flu visits are replaced by an AI. Now they have a lot more time and bandwidth to handle patients who actually need physician care.
We can't just assume it's going to go the worst way. Neither outcome is particularly more likely, and the human element is by far the most unpredictable.
To wit: I was listening to a report yesterday on NPR about concierge primary care physicians. The MD they were interviewing was declining going that direction because they saw being a doctor as part duty and felt concierge medicine went against that.
It seems to me you're the only one conflating things? Grandparent didn't say anything about getting out of having a trial, or about needing lawyers. They're talking about how people with money can use it to avoid spending time in jail, and gave a perfectly valid example of someone rich doing exactly that.
That is a pretty bad example. In theory, bail should be affordable to the individual person. It is meant to be insurance that you come back for the actual court date.
The outrage there is bails being set to unaffordable sizes for poor people. OP was picking out the case where bail functioned as intended.
If that's the case, why involve money in it? About a third of people who are arrested cannot afford bail, while if you are rich (maybe through crime), you can pay it. Of course bail is a mechanism for differential treatment between rich and poor in the judicial system.
Right, the UK generally doesn't have cash bail, and the most recent noteworthy example where cash bail was used (Julian Assange) the accused did not in fact surrender and those who stumped up the money for bail lost their money, suggesting it's just a way for people with means to avoid justice.
The overwhelming majority of cases bailed in the UK surrender exactly as expected, even in cases where they know they are likely to receive a custodial sentence. Where people don't surrender I've been to hearings for those people and they're almost invariably incompetent rather than seriously trying to evade the law. Like, you were set bail for Tuesday afternoon, you don't show up, Wednesday morning the cops get your name, they go to your mum's house, you're asleep in bed because you thought it was next Tuesday. Idiots, but hardly a great danger to society. The penalty for not turning up is they spend however long in the cells until the court gets around to them, so still better than the US system but decidedly less convenient than if they'd actually turned up as requested.
I am not defending bail as a system. However, the system in the USA relies on it. The complaint here was not that poor people stay in jail. The complaint was purely about someone being able to pay bail.
> About a third of people who are arrested cannot afford bail, while if you are rich (maybe through crime), you can pay it.
This means 2/3 of arrested people can afford bail or are released without it. A case of a single rich person having affordable bail is not exactly proof of inequality here. Poor people whose bail was low enough that they were able to pay it exist too.
Every time I see this "traffic ticket" thing, it usually looks like
1) The driver was actually speeding
and
2) The driver is trying to get off on a technicality
Is that the case?
In the US do you get "points" on your driving license so that if you are caught speeding several times in the space of a couple of years you get banned?
In the UK, being caught mildly speeding (say doing 60 in a 50) typically means 3 points and a fine, and racking up 12 points in the course of 3 years typically means a ban.
Same in the UK: you have to get lifts or taxis everywhere unless you live in a big city (London has great public transport, but so does New York). The weekly bus that my sister gets doesn't really help her travel to work at dozens of different schools all over the county.
It's a very good reason not to speed.
So it's just a fine that Americans get for speeding?
Are fines at least proportional to wealth? Or can rich people speed without a problem, because saving 10 minutes on their journey is worth the $100 fine even if it were guaranteed they got one?
(In the UK speed is almost entirely enforced by cameras, not by police cars which are rarely seen on roads. Removes any bias the cop might have -- maybe the cop has it in for young tesla drivers so pulls them over, but lets the old guy in a pickup go past)
> So it's just a fine that Americans get for speeding?
Well, things vary from state to state. But there is definitely a point system in place for excessive speeding, speeding in a school zone, passing a school bus at any speed, stuff like that. In a lot of places you can be arrested for reckless driving, with varying levels of what defines "reckless." Virginia is notorious for their speeding laws. Speeding in excess of 20mph of the posted limit or in excess of 80mph regardless of limit (e.g. 81 in a 65) is what they consider reckless driving and it's a misdemeanor that could potentially (but not likely) give you a one year jail sentence.
In the US, a small percentage of people drive completely insane.
If you're going 75mph on a highway where the limit is 65mph or 70mph, someone will fly past going 100mph.
Those are the people that get tickets. Otherwise, it is pretty difficult to even get pulled over.
I have only been pulled over twice in my life and not in 20 years. I think police departments have cut back quite a bit on police trying to rack up traffic tickets.
The fine is not the issue. The whole process is a massive waste of your time.
No, fines are not proportional to wealth (at least in most states). They're either flat fees or pegged to speed. Points on license as well, so ~3 tickets inside a year and you lose your license or have to take a course to keep it.
Most tickets are given by live officers. Cameras do exist, but typically only in dense urban areas. Which opens another can of worms, as police are biased.
We also have lists of secondary offenses the officer can cite only after citing you for speeding (or some other primary offense). Things like a failed light bulb, or some other minor safety issue. These are disproportionately used against PoC.
In the US you get a fine and you get points against you. Points cause your auto insurance to go up, and too many result in a restricted or suspended license. Which doesn't prevent people from driving, but usually causes them to drive very conservatively so as not to get caught.
And the parent is correct that much of the US is set up for people to drive, so much so that being draconian isn't practical. And it's something to keep in mind that any given individual didn't decide how the place they live in is set up.
The US is similar, but we also have other dynamics. Some municipalities rely on traffic tickets for revenue, so they have a perverse incentive to create more infractions. Notable examples are automated ticketing at red light leading to shorter yellows[0], and speed traps where a small town on a highway sets unreasonably low speed limits[1].
For traffic tickets, it is often possible to go to court and have the judge offer a reduced fine for pleading guilty or no contest (with no contest you don't admit guilt, but you don't contest the charge and you accept the punishment).
Most people, when they want to fight such tickets, think they can argue their way out of it. Whereas the judge and officers simply want to get the hearings over with. They do hundreds of such hearings a week and have heard it all before. So, the judge will tell the courtroom that they can get a reduction and how to get it. Sadly, the defendants are anxious, have been mentally preparing themselves for a fight, and are in an unfamiliar environment, so they tend to get tunnel vision and choose to plead 'not guilty'. They inevitably lose.
If you ever find yourself in such a situation, pay close attention to what the judge offers everyone before the hearings begin. If they don't offer such a bargain, when it is your time to appear before the judge you can ask "would the court consider a reduction in exchange for a plea of no contest?" It doesn't hurt to ask.
True as far as it goes, but bear in mind that a huge proportion of the population speeds routinely. So enforcing the law doesn't feel equitable; the rule of law already doesn't exist on the roads.
I could run the actual statistics for my district's court and tell you the percentage of reoffending while on bond, but in my experience something approaching 40% of people out on felony bonds are re-arrested for additional felony conduct.
Well, because there are now e.g. two cases of murder instead of only one, and the second one was entirely preventable? Oh, right, the law is not about the populace's lives and well-being, how silly of me to assume that.
You didn't answer my questions. Do you think it would decrease the odds of reoffending if they didn't get bail?
If it doesn't, then it's two crimes either way, just timed differently. And after the policy settled in it would have negligible impact on the crime rate.
Also if you're doing sentencing for two crimes at once you can give a longer punishment for the first crime and get them off the street longer.
Yes, it would decrease the odds, provided that the probability of a criminal committing a crime after walking out of jail (which includes both his intent to reoffend and any changes in his circumstances during his time in jail that reduce his opportunities to do so) is lower than before he walked in, because he wouldn't get the chance to reoffend immediately.
I don't assume that someone coming out of a sentence is less likely to commit a crime. Isn't it often the opposite, because the US is so bad at rehabilitation?
Then you should argue for "shoot at sight" or "lifetime sentences/electric chair for everything", shouldn't you?
But no, the probability does decrease, since not everyone who has been jailed becomes a repeat offender; also, some people die during the sentence... the probability does go down, for many small reasons compounded together.
> Then you should argue for "shoot at sight" or "lifetime sentences/electric chair for everything", shouldn't you?
Not unless I'm a robot programmed to prevent recidivism at all costs. Why are you even asking this?
> But no, the probability does decrease, since not everyone who has been jailed becomes a repeat offender
...and not everyone released before their trial becomes a repeat offender.
> some people die during the sentence
Is that a significant effect? I don't think most sentences are long enough for that to make a big difference, and I don't think preventing a single crime per lifetime, at most, is enough reason to keep people locked up for the lengthy pre-trial process.
I assume that the original comment was about how someone can get caught pretty much red-handed for battery and assault/domestic violence, but then instead of getting put into pre-trial/provisional detention (yes, you can get locked up even before being judged guilty, outrageous), they are just allowed to go because eh, why bother.
> I don't see the problem as long as the actual lawyer can intervene when necessary.
There was no actual lawyer; they planned to do it without notifying the judge, having a defendant “represent themselves” with a hidden earpiece. They'd already issued an AI-drafted subpoena to the citing officer (which is almost certainly a blunder aside from any rule violations; officers not showing up when a ticket is scheduled for court is one of the main reasons people win ticket contests, and there is almost never a reason the defense would want to ensure their appearance.)
The problem is that ChatGPT makes up convincing-sounding case law, references court rules that either don't exist or don't say what its summary claims, does the same with legislation, and does so with 100% confidence in the truth of these statements. ChatGPT is good for discovering potential arguments and summarizing existing ones, but it doesn't yet have the fidelity necessary for legal practice.
It's not all gloom. ChatGPT is pretty decent at writing pleadings and legal argument when you feed it the necessary bits though.
>But if we could, would it be entitled to a jury of its peers (other language models)?
How does the jury find?
Finding is a complex task that involves many different type of reasoning in order to reach a conclusion. There is no specific way we find.
How does the jury find?
We find the defendant guilty.
Your Honor, the defense hereby requests—credits permitting—that the jury be polled ten thousand times each in order to draw the appropriate statistical conclusions in aggregate.
I don't think they will replace lawyers any time soon. Paralegals? Maybe. I've worked with legal software before, and with just a little bit more smarts, the software could do a lot of grunt work.
>It's not all gloom. ChatGPT is pretty decent at writing pleadings and legal argument when you feed it the necessary bits though.
I'm a lawyer working on a case involving neural networks. So I've been playing around with ChatGPT (for fun, it's not involved in my case at all--the NN is a much different context) and trying to get it to do stuff like that. Maybe I'm not using the full feature set (does it have better APIs not accessible on the main page?) but it doesn't seem even close to being able to write pleadings or arguments.
It's surprisingly good at summarizing things that you might include in pleadings or arguments, though. But even then it's got a 1/3 chance of fucking up massively.
But it's way more advanced than I imagined it would be. Very impressive technology.
Generally to get the most utility from ChatGPT, you need to feed in a bunch of context. E.g. for legal stuff, feed in each relevant case, the facts of your own case, perhaps an example pleading or two to show the general format you want and then ask it to produce what you're looking for.
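As a rough illustration of what "feeding in the context" can look like in practice, here's a minimal sketch using the 2023-era openai Python package. The model name, file names, and prompt wording are all invented placeholders, and obviously nothing it produces should be filed without a human check:

```python
# Minimal sketch: stuff the relevant cases, your facts, and an example pleading
# into the prompt, then ask for a draft in the same format.
# Assumes the pre-1.0 openai package and an OPENAI_API_KEY in the environment.
import openai

relevant_cases = open("cases_summary.txt").read()        # case summaries you collected
facts = open("my_case_facts.txt").read()                 # the facts of your own case
example_pleading = open("example_pleading.txt").read()   # format you want mimicked

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("You draft pleadings. Use only the cases and facts provided; "
                     "if something is not in the provided material, say so.")},
        {"role": "user",
         "content": (f"Relevant cases:\n{relevant_cases}\n\n"
                     f"Facts of my case:\n{facts}\n\n"
                     f"Example pleading (format to follow):\n{example_pleading}\n\n"
                     "Draft a pleading for my case in the same format.")},
    ],
)
print(response.choices[0].message.content)
```

The point is just that the model only has whatever you put in the prompt; it won't pull the controlling case law out of thin air (or rather, it will, which is exactly the problem).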
>The problem is that ChatGPT makes up convincing-sounding case law, references court rules that either don't exist or don't say what its summary claims,
I look forward to it snarkily telling me that the CFAA was "wRiTtEn In BlOoD" and then post-hoc editing its comment with links it Googled up that have titles supporting its point and bodies that contradict.
It feels like with a little more tuning so as not to be misleading this stuff is on the verge of being useful. In the meantime I'll get some popcorn and enjoy spectating the comment wars between chat(gpt)bots and the subset of HN commenters who formerly had a local monopoly on such behavior.
Is this using ChatGPT or GPT-3 though? There's a big difference; a fine-tuned LLM can do leaps and bounds better than ChatGPT.
I'm thinking a good SaaS might just be to train localized LLMs on every city, state, and county's law, partition the model based on where it can seek info, then just use it as one big search engine, and of course work in citations, etc.
Take a look at https://www.legalquestions.help/ which gets the above issues wrong often enough that you can't rely on it. Maybe its training was bad or insufficient (and I fully expect that at some point in the future this won't be the case).
If you're paying by the hour and you go to the lawyer with all the data the AI gives you, they can have a paralegal fact-check it and get back to you. We're at the early stages; things only get better from here on out. Creating some sort of fact algorithm to go with GPT-3 seems like the next big thing to me. If you can hold it accountable to only give facts, except when an opinion or 'idea' is sought after, which is more ethereal, then you can get some amazing things. Law and even medical diagnosis will probably be way easier for it than coding, even though it's pretty remarkable on that front already.
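For what it's worth, here's a toy sketch of the "partitioned search engine that works in citations" idea from above: retrieve passages from one jurisdiction's corpus first, then ask the model to answer only from those passages and cite their IDs. The corpus, the passage IDs, and the section texts are made up for illustration; a real system would need a far better retriever and an actual LLM call at the end.

```python
# Toy retrieval-then-generate sketch: TF-IDF retrieval over a per-jurisdiction
# corpus, then a prompt that forces the answer to cite retrieved passage IDs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

jurisdiction_corpus = {  # imagined city/state partition; texts are placeholders
    "SF-TRAFFIC-7.2.101": "No person shall park within 15 feet of a fire hydrant...",
    "CA-VEH-22500": "No person shall stop, park, or leave standing any vehicle...",
}
ids, texts = zip(*jurisdiction_corpus.items())
vectorizer = TfidfVectorizer().fit(texts)

def retrieve(question: str, k: int = 2):
    """Return the k passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]),
                               vectorizer.transform(texts))[0]
    ranked = sorted(zip(scores, ids, texts), reverse=True)
    return ranked[:k]

question = "Can I be ticketed for parking near a fire hydrant?"
context = "\n".join(f"[{pid}] {text}" for _, pid, text in retrieve(question))
prompt = ("Answer using ONLY the passages below, citing their IDs in brackets. "
          "If the passages don't answer the question, say so.\n\n"
          f"{context}\n\nQuestion: {question}")
print(prompt)  # this prompt would then go to whatever LLM you trust most
```

The citation requirement is the "fact algorithm" in miniature: every sentence of the answer can at least be checked against a passage that actually exists.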
> If they just wanted to show the world their product was viable, why didn't they pay for a real lawyer who's down to their luck to read out the crap ChatGPT was spewing out so there wouldn't be any legal gray area?
They tried that, but swung for the fences for publicity: they had a $1,000,000 offer to any attorney with a case pending before the Supreme Court to use it for oral argument.
Up until they abandoned the whole robot lawyer idea, that offer was open but apparently got no takers.
> You'd also need a fast typist to feed the record into ChatGPT of course, because you can't just record lawsuits,
You generally can’t use an earpiece to get a feed in the courtroom, either.
In the construction “they tried that, but... ”, the part after “but”, if it identifies an action by the actor and not an outcome, identifies a departure from what is described by “that”. So, I'm not sure what you are arguing against here.
> The supreme court idea was never going happen and they knew it.
I don’t think they knew it, just as I don’t think they knew the traffic court thing was also not going to happen, or that their whole suite of supposedly-AI legal assistance products (“sue in small claims court”, child custody, divorce) was problematic. I think they just took the path of boldly striding into a domain they didn’t understand but somehow thought they could market “AI” for, and ran into unpleasant reality on multiple fronts, forcing not only their scheduled traffic court demo and their hoped-for Supreme Court demo to fail to materialize, but also several of their already-available legal-aid products to be pulled, so that they now focus exclusively on consumer assistance products without as much legal sensitivity.
> In the construction “they tried that, but... ”, the part after “but”, if it identifies an action by the actor and not an outcome, identifies a departure from what is described by “that”. So, I'm not sure what you are arguing against here.
I'm saying the departure is so big that it doesn't make sense to frame it as even a partial solution to the idea.
> I don’t think they knew it, just as I don’t think they knew the traffic court thing was also not going to happen, or that their whole suite of supposedly-AI legal assistance products (“sue in small claims court”, child custody, divorce) was problematic.
The combination of supreme court cases being so narrow, the interrogation being so harsh, the tech allowed in being carefully restricted, and the stakes being so high makes me think they would understand the gap between that demo and "find a guy with a parking ticket who happens to be a lawyer".
You can't use electronic devices at the Supreme Court, and the consequences for the lawyer for doing so (plus the effects of being questioned by Supreme Court justices on small details of case law) would probably be pretty dire.
> I get that they want to make some kind of subscription service to replace lawyers with AI (a terribly dystopian idea in my opinion, as only the rich would then have access to actual lawyers)
What? That doesn't make any sense. The opposite would happen: real lawyers would become cheaper because of more competition. That is exactly what these Luddites are fighting against.
If something wrong happens and the lawyer is officially responsible, the onus is still on you to make your claim, likely in court.
I'm dealing with a similar issue where an expert is indeed wrong and indeed responsible for his mistake, but the wronged party needs to spend a lot of money proving that they got wrong advice in front of the courts. The wronged party does not speak the local language (as is common in Europe), so that's unlikely to happen.
There's a huge gap between being technically right, and seeing justice.
> I don't see the problem as long as the actual lawyer can intervene when necessary.
I don't believe there was an actual lawyer here:
> The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker.
I really doubt that's why. Court proceedings are a matter of public record. You can just buy magazines full of names and mugshots of people who have been arrested. I want recordings to keep power-hungry judges in check.
It is two completely different things to release the records while the trial is going on and to release them once the trial is decided. The latter is completely OK and wanted; the former is undesirable.
Honestly I think this guy was super clever. It was abundantly clear to anyone thinking about this that there was no way this ploy would work. But he got pretty big on Twitter, is getting all of this press, and has now given his startup, which is doing a far saner and less ambitious task, incredible publicity which he otherwise would've had a hard time getting.
> Leah Wilson, the State Bar of California's executive director, told NPR that there has been a recent surge in poor-quality legal representation that has emerged to fill a void in affordable legal advice.
> "In 2023, we are seeing well-funded, unregulated providers rushing into the market for low-cost legal representation, raising questions again about whether and how these services should be regulated," Wilson said.
Got it. There are not enough affordable legal services in the US, and so the Bar's solution is to regulate them away.
That's a really funny use of the word, considering that the whole purpose of what they're trying to do is the opposite of exploiting those of lesser means.
If you can explain to me how what they're trying to do fits the definition of the word "exploit", then I will change my mind. Otherwise I will continue to think you have no idea how ironic it is to describe someone offering an equivalent service (or something that will become the equivalent service) at a much lower cost as "exploiting" people.
It seems like the legal industry doesn't like competition.
AI is able to sometimes make a valid argument, but when it comes to specific facts and rules it drops the ball. Expert knowledge requires actual understanding, not fitting patterns and transposing words. Take a look at the following vid from a real expert in a particular field (military submarines). Look at how ChatGPT falls apart when discussing "Akula" subs. It can read English but clearly does not understand what that word means in context. It also confidently cites incorrect facts, something that would be very dangerous in any court.
Hint: Akula is a NATO reporting name. Nobody calls them the shark class of subs; even Russians attach that name to a very different class.
Absolutely atrocious stunt on the part of that company. A glorified chatbot is not itself a legally accountable or trained lawyer, and it cannot seriously represent anyone. I assume the entire purpose of this was to bait the obvious shutdown and then complain on the internet about the legacy lawyers or whatever to generate press. Reminds me of the 'sentient AI' Google guy.
Is this going to be the new grift in the industry?
I'm not sure just how large a case it's supposed to tackle, but for small stuff like parking tickets a few solid legal arguments may be enough, and you don't need to pay for tens of hours of a lawyer's time.
An AI lawyer, operating within reasonable bounds, could absolutely be an asset to criminal defendants and parties to civil litigation. You could reduce a basic discovery request to mad libs. I'd go so far as to say you could do the same with motions for summary judgment, requesting depositions, and other things. They wouldn't be optimal, but they wouldn't be of a level worthy of sanctions. It's just protectionism from the private bar who doesn't want to lose easy billables, and fear from prosecutors, creditors, and the like who realize that their system would collapse if half their opposition could force them to do some real lawyering a time or two. If every criminal defendant and debtor could squeeze three hours of drafting documents and individualized courtroom attention out of opposing counsel, it wouldn't be the guy using the AI coming out worse than the status quo. You can argue that the consequences would be negative for society, but it's laughable to say they'd be negative for litigants. DCS can barely knock down scarecrows; a few mediocre pleadings that demand a response from child support obligees and other parties would send them crying into early retirement.
"Unauthorized practice of law" only applies to people, not tools. AI is a tool. DoNotPay was not selling legal advice, only a tool to understand law. It is no different if they were selling a code book, or other text that the defendant uses himself. I think the real fear is that AI will supplant the entire legal profession.
The legal profession went through a similar struggle when Nolo published software that could draft basic legal documents by filling in the blanks. Nolo won.
You are making the assumption that a company advertising a robo lawyer isn't engaged in unauthorized practice of law, which is rather odd since I bet the Nolo books don't hold themselves out as a lawyer, robo or otherwise.
You are also making the assumption that any tool is allowed in a courtroom, which is obviously not correct. You wouldn't be able to use a Nolo book while testifying about what you witnessed at a crime scene, either.
"The justice system moves swiftly now that they've abolished all lawyers!"
-- Doc Brown in 2015, Back to the Future
I look forward to the day when cases are argued on both sides by an AI to an AI judge. It should work about as well as Google customer service!
But seriously, having the AI do the arguing is silly. AI should be a tool. I see no issue using an AI to inform a lawyer who can use what it outputs to make their case stronger, but just using an AI seems fraught with peril.
Do what you'd have to do if this were say a medical device: hire a retired judge or two and set up double-blind fake trials with AI or human representation. Prove it works, then try it with real people.
Every person should still have the right to be defended by a human lawyer, yet the right to voluntarily choose an AI lawyer to either defend you or just give you hints as you defend yourself would be great to have. It may totally change the game where (currently) whoever can afford expensive lawyers generally wins and whoever can't automatically loses exorbitant sums of money. Real lawyers will never let this happen.
Well, the comments about the company not doing things correctly (licensing the algorithm) are correct.
It's actually critically important to have some kind of license to represent people in court, as well as someone to pillory, if they screw up, as it prevents some truly evil stuff from happening (I have seen many people robbed blind by licensed lawyers, and it would be a thousand times worse, if they could be represented by anyone that sounds convincing enough). The stakes are really high, and we shouldn't mess around (not in all cases, of course, but it would really suck, if someone got the needle, because a programmer forgot a semicolon).
That said, I think it's only a matter of time before a significant amount of legal stuff is handled by AI. AI shines in environments with lots of structure and rules, which pretty much defines the law.
I'd rather have judges at the lower levels rely on some AI assistance. The level of utter incompetence that I've witnessed personally has been hard to comprehend.
The fact that a guild controls the legal system has always been alarming to me. It's very much in their interests to make it impossible to avoid spending huge amounts on their services and to reduce supply by making it hard for more people to become members.
Lawyers will probably be the last profession to be automated.
This whole thing was clearly a marketing stunt, they knew from the beginning they wouldn't be able to do it but they got a ton of free publicity out of it.
Only lawyers can give legal advice, big difference.
You can represent yourself in court if you want to (generally, you don't) but if you want to offer that service to others, you need to be a licensed lawyer. It is the same for many professions.
You are saying exactly what I'm saying, only with different words. As life goes on, this is a trend I become more and more acutely aware of: people's biases, opinions, and values make them reinterpret what others say to fit their already made-up minds about what the conversation is about.
I was saying that not knowing the law is not a defense, yet the laws are so complex that an expensive expert who dedicated all their study to the law is practically (as in, yes, you have the option to not use a lawyer) required in court.
It is not the same for many professions. In no other profession are non-experts expected and required to know the matter.
There are two separate issues at play here. On the one hand, it's true that ChatGPT, while impressive, in its current incarnation sometimes returns correct responses and at other times returns seemingly correct hallucinations. Unless its accuracy and certainty are over a certain threshold, say 95% for example, I don't think it would be safe to use it for critical use cases, like acting as a lawyer, as it very well might hallucinate laws or prior cases, etc.
On the other hand, lawyers see the writing on the wall and see AI as a threat to their really lucrative business, and they'll use any means at their disposal to outlaw it or slow the adoption of AI technologies that would replace lawyers.
Hell, I'm a software engineer and I see the writing on the wall too, and I see AI as a threat as well. I also acknowledge the limitless opportunities. I'm equal parts excited and terrified by what's coming.
When I first heard of DoNotPay, I was honestly impressed by the idea of having an AI fighting cases in court (simple cases, that is). But after a few minutes, when I actually started contemplating the reality, my impression of it got dimmer and dimmer. In my honest opinion, I really don't think it is necessary for AI to be introduced into court systems, especially to fight cases. There might be other implementations and other problems for it to solve, but not this. So, I don't disagree with the CEO saying that "court laws are outdated and they need to embrace the future that is AI." But embracing can be done in other ways than this; for instance, I read about a startup that uses AI to read law-related documents or something similar (I don't remember its name). That was quite interesting as well!
AI might one day take over the legal profession, but it won't be today's large language models that do it.
An AI that can replace lawyers would have to be able to make knowledge-based inferences, based on actually conceptually understanding what it is reading and saying. It would have to be able to identify the specific facts and circumstances that govern the matter at hand, not the general conditions that would normally apply, since cases are won on the specifics.
We're at least two decades away from that kind of AI, because AI research today is stuck in a local maximum of statistics-based brute-force machine learning that doesn't actually lead to models with any sort of intelligence about what they learn.
Has anyone who has covered this actually used DoNotPay?
It's just such a poor product. I'm biased towards being pro-AI lawyer, but there's no reason to think that an app that can't execute the basics will push the technological frontier of the legal field.
Unfortunately, the existence of the law cartel does not make this ok. The solution is to break down the law cartel so that there are more mediocre bar members that can facilitate the AI argument being made.
You can represent yourself (where applicable) or you can have representation. That representation is an officer of the court and must adhere to professional standards to maintain their license and bar membership. Even if you could force an algorithm to adhere to professional standards, it’s unlikely that it could be legitimately considered an officer of the court any time soon.
Their mistake was using this tool in a criminal case. They could have rolled it out for arbitrations and/or mediations, proceedings which do not necessarily require legal counsel.
The legal system has various functions. One of these is determining the facts. Think of it as agreeing on what prompt to give the AI. Once the facts are determined, everything else pretty much follows.
If Fact A then Consequence B. If the parties can agree on the facts, AI will tell them the likely consequences. But as a fact-finding tool, in present form AI is not useful.
Rabbit trail: What is the state of non-AI tech to assist in building or defending cases? I vaguely know a thing called LexisNexis exists, but not what it does. Is there a system where a lawyer can search, find related law, then fill out a form to generate a draft case or defense? It seems to my un-lawyer self the legal system could be codified into a rules engine: IF these legal inputs, THEN these outputs. Or in reverse: IF these desired outputs, THEN these inputs need to be met.
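For the "IF these legal inputs, THEN these outputs" idea, a bare-bones rules engine is easy enough to sketch. The rules below are invented examples, not real statutes, and the hard part (which the reply below about Lexis gets at) is that real cases rarely reduce to facts this clean:

```python
# Bare-bones rules engine sketch: facts in, consequences out.
# The rules here are invented examples for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # does this rule apply to these facts?
    consequence: str                    # what follows if it does

rules = [
    Rule("reckless_speed", lambda f: f["speed"] - f["limit"] > 20,
         "charge: reckless driving (misdemeanor)"),
    Rule("speeding", lambda f: f["speed"] > f["limit"],
         "charge: speeding infraction (fine + points)"),
]

def evaluate(facts: dict) -> list[str]:
    """Forward direction: IF these legal inputs, THEN these outputs."""
    return [r.consequence for r in rules if r.condition(facts)]

print(evaluate({"speed": 90, "limit": 65}))  # both rules fire
print(evaluate({"speed": 60, "limit": 65}))  # no rule fires -> []
```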
Lexis is an information aggregation company. They take a large quantity of US law and Court opinions and publish them. These sources are then linked together using a relatively simple tag style system.
In general you can get forms for very specific and predictable case types, but for a large portion of practice, outside of initial filings, the fact-specific nature of subsequent pleadings is harder to formalize.
I wonder why he didn't continue to apply pressure to go ahead, and in the worst-case scenario just flee to Europe if angry prosecutors actually tried to jail him.
Ironically, the outcome of this whole saga is the most lawyerly outcome it could've been... by way of lawyers advocating to keep legal protection out of reach of the common man and inserting themselves between real innovation and progress for financial gain.
> As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.
So an organization with the sole purpose of gatekeeping and anti-competitive market control does the anti-competitive market control. Why is this bar idea itself still legal?
You're not going to get a white shoe law firm partner to come to court to contest your parking ticket unless you are that lawyer yourself, so barely passing is good enough. The only thing to worry about is that if the exam grading doesn't allow negative marks for disbarment-level answers, a barely passing score might not be giving us enough information.
>You're not going to get a white shoe law firm partner to come to court to contest your parking ticket unless you are that lawyer yourself, so barely passing is good enough
But what if that white shoe law firm is using chatGPT to respond to your emails and gather information about your case and then billing you? If they use it, does that make it okay?
AI lawyers indeed look scary, but I can't help but think of the implications should they one day become way cheaper than carbon-based ones. Imagine all those cases where $BIGCORP is technically in the wrong, but today can screw some poor sap just by dragging things out until he can't afford legal defense anymore.
I'm guessing that this tech won't end up creating equality for poor people in the justice system.
Either it won't work as well as actual lawyers in which case it will become the only option poor people have (basically replacing public defenders), or it will work just as well as human representation (or even better), in which case 'AI lawyer' companies will charge just as much if not more for their services as the human lawyers do.
DoNotPay may be a humble start up now, but if the tech proves to be effective they (and other future AI companies) will eventually fleece their customers just as human lawyers do today. Not because they have to, or because it is in any way justified, but just because they can and doing it would make them richer.
Open Source and disconnected. AI is a wonderful tool that will turn into a nightmare when used to take advantage of people. I wouldn't want my "pocket lawyer" to send our private conversations anywhere but my personal offline space. The risk that someone with vested interests and enough funds could bribe their access to that data, if not directly to the AI, is simply too high. Unfortunately it seems that trustworthy AI is not going to happen anytime soon: there's simply too much money to make renting it as a service, hence online and closed.
The $BIGCORP can use the tool too, it's not exactly asymmetric. Imagine what a patent troll could accomplish with something like this if it were allowed.
Lawyer bar associations trying their best to stave off the inevitable.
If only ChatGPT was trained with case law and law school texts. Then when they sue, the ChatGPT model can defend itself.
I'm affected by this too, but watching lawyers be rendered obsolete makes me very excited.
You're ignoring the fact that ChatGPT isn't trained to be correct or logical, it's trained to be semantically understandable and coherent. Which is an absolutely terrible model to rely on in a court room.
Mate I'm not stupid. It was a proof of concept and even if the end result was going to be a spectacular failure, it doesn't exclude the fact that the bar associations are desperately trying to fight this tooth and nail.
Learn to read between the lines, not everything is an IDE.
My point is you're ignoring the fact that the bar association has a distinct interest in having the courts run smoothly and making sure lawyers in court are competent. So it is highly in their interest for a courtroom to not become a mockery because of a chatbot.
Not everything associations like the bar do is in bad faith or contrary to the public's interest.
The court system's smoothness or lack thereof is the responsibility of the country's judiciary, i.e. the government. The bar association for each state, in contrast:
- Licenses attorneys and regulates the profession and practice of law in its state
- Enforces Rules of Professional Conduct for attorneys
- Disciplines attorneys who violate rules and laws
- Administers the California Bar Exam
- Advances access to justice
- Promotes diversity and inclusion in the legal system
In other words, they're trade unions, and they're going to become the Luddites of the 21st century.
I saw the CEO of this company offering a million dollars to anyone willing to use their AI in a US Supreme Court case (I'd be surprised if that tweet was still up).
Safe to say that even if they had a solid product, they are being recklessly gung-ho about its application.
There is going to be a lot of this happening. Lawyers, doctors, journalists, and all kinds of expensive experts and consultants are going to face some competition from tools like this, used by their customers to reduce their dependence on expensive experts or at least to get a second opinion, or even a first opinion.
Whether that's misguided or not is not the question. The only question is how good/valuable the AI advice is going to be. Initially, you might expect lots of issues with this. Or at least areas where it under performs or is not optimal. But it's already showing plenty of potential and it's only going to improve from here.
It's natural for experts to feel threatened by this but not a very productive attitude long term. It would be prudent for them to embrace this, or at least acknowledge this, and integrate it in their work process so they can earn their money in the areas where these tools still fall short by focusing less on the boring task of doing very routine cases and more on the less routine cases.
Same with doctors. Whether they like it or not, patients are going to show up having used these tools and having a diagnosis ready. Or second guessing the diagnosis they get from their doctor. When the AI diagnosis is clearly wrong, that's a problem of course (and a liability issue potentially). But there are going to be a lot of cases where AI is going to suggest some sane things or even better things. And of course no doctor is perfect. I know of a lot of cases where people shop around to get second opinions. Reason: some doctors get it wrong or are not necessarily up to speed with the latest research. And of course some people can't really afford medical help. That's sad but a real issue.
Instead of banning these tools, I expect a few years from now, doctors, lawyers, etc. will use tools like this to speed up their work, dig through lots of information they never read, and do their work more efficiently. I expect some hospitals and insurers will start insisting on these tools being used pretty soon actually. There's a cost argument that less time should be wasted on routine stuff and there's a quality argument as well. AIs should be referring patients to doctors as needed but handle routine cases without human intervention or at least prepare most of the work for final approval.
Same with lawyers. They could write a lot of legalese manually. Or they could just do a quick check on the generated letters and documents. They bill per hour of course but they'll be competing with lawyers billing less hours for the same result.
>There is going to be a lot of this happening. Lawyers, doctors, journalists, and all kinds of expensive experts and consultants are going to face some competition from tools like this, used by their customers to reduce their dependence on expensive experts or at least to get a second opinion, or even a first opinion.
What are you on about? This has been ongoing for decades.
You're talking down to hypothetical doctors as if doctors don't already deal with the phenomenon of people self-diagnosing from the internet. We as humanity already know the benefits and drawbacks of Dr. Google.
The only thing AI does that search engines don't, is it takes the pile of links a search engine would find and synthesizes it into a tailored piece of text designed to sound topical and authoritative. And delivers it to people who already believe too much of the shit they read on the internet.
Nonsense. I know a few GPs, they sure aren't using anything fancy on their laptops. They might google a bit at best but not on their locked down work laptop full of medical files and privacy sensitive data. And never while patients are looking. Most doctors I know use computers mostly for administrative stuff.
Lawyers are actually worse. Lots of paper-based administration. Patent lawyers use expensive search systems that are hard to use and query. But even that is pretty unintelligent, and that's actually by design: they want to screen everything manually. I build search engines for a living; either group could benefit massively from even a simple one.
The introduction of AI to this space has not happened yet. The little bit of experimentation that has happened with analyzing MRI scans, expert systems, etc. has of course had a limited impact. But most doctors operate without any of that.
I think you are overestimating what search engines do, and underestimating what AI can do. Not in the distant future but right now.
Google's AI can actually pass a medical licensing exam and offer diagnoses almost as accurate as clinicians'. That seems very, very different from a search engine.
> Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.
- Recording court proceedings is already a big no in many countries around the world.
- Licensed activity is licensed activity. IBM Watson did not practice medicine; it provided advisory information to licensed doctors, and the onus of the decision was with the doctors. Much in the same way, Joshua Browder could have done better due diligence and concluded that he could create a service to advise lawyers but could not create a service in replacement of lawyers.
- Joshua probably already knew all of this and is trying to advertise and/or gather funding for his company.
The official record taken word for word by the stenographer? Why would you need to challenge that? Are you implying something that's more than one in a million?
Don't be vague on purpose. Say what you're implying.
> Court cases aren't about the common occurrences.
Usually they still are. But I'm talking about things being very rare among court cases. Do you think there is a systemic problem of false court transcripts? And I really don't think such a thing is the reason not to allow recording.
That oversight is a big part of why the court reporter exists, and has existed since long before recording technology was invented. Keeping things the same is not a refusal of oversight.
Bar associations have set themselves up to fail. Enroll the AI in law school, let it get a degree, and sit for the bar exam in every state. Problem solved. And then the bar associations can suck it.
Enroll the AI at law school and let it get a degree? Reminds me of a whimsical shower thought I had once... Create a business that owns itself, and write an AI to run it. It owns its own bank accounts and everything. Maybe the business is just selling stickers online or something equally lightweight, but give it all the legal status of a company. But a company with zero human owners or any human employees. Make it a rebuke of the "corporations are people" idea. Just a zombie out there selling products/services, and making money that no human can ever touch again...
If it sounds crazy/stupid, remember - I did say "whimsical shower thought" ;)
Despite strict procedure and rules, the court is a place for human common sense to also intervene. If it walks like a duck and quacks like a duck...
In other words, you being a mouthpiece for an AI would likely be seen as sufficiently separate enough from "representing yourself" as to be not representing yourself at all.
I would imagine, in SCOTUS, were a case argued around allowing folks to "represent themselves" like this, one of the first questions a justice would ask is "Suppose, instead of an AI computer talking through the defendant, an actual practicing lawyer was talking through the defendant through the earpiece. In that case, is the person still actually representing themselves?"
Same. Lawyers are screwed - they have been an insanely overpaid profession forever, this is going to be absolutely devastating for them. Also, there isn’t a whole lot of leverage in lawyering compared to the other professions that AI is poised to dominate, so it feels like lawyers in particular are going to have a hard time transforming themselves into a new profession that is symbiotic with AI systems. This is a case where the human part may entirely go away for large swathes of cases.
I always found lawyers to be interesting… they are not responsible for you. They help guide you and argue for your case. But if they mess up, that doesn’t send them to jail. You can fire them or report them. But that’s it. You’re still screwed.
Really the only one in the court working for you is yourself. So ideally we'd make it easier to represent yourself, as long as you're capable of doing so.
I quite like this idea, since ChatGPT could be set up to work for you: provide you with 100% of the possible resolutions to your case, and you pick the one you want to go with. And if your argument is wrong, it's your fault. It can suggest or recommend a specific one or something. Same as a lawyer would.
> I always found lawyers to be interesting… they are not responsible for you. They help guide you and argue for your case. But if they mess up, that doesn’t send them to jail. You can fire them or report them. But that’s it. You’re still screwed.
A physician doesn't injure herself when she mistreats you either.
Yup, that's another area I think AI could help. I'm not saying you know best. But if an AI gives you, like, 5 possibilities and maybe suggests 2 or 3, that in my mind could be better than a doctor doing the same.
Not because a doctor is wrong, but because it's nearly impossible for a general doctor (not a highly specialized one) to know every possibility, whereas an AI can look at so many factors: living area, other diagnoses related to similar environments, every similar-looking scan from the entire dataset, etc., and maybe with the help of a doctor you can pinpoint a solution.
I think there is a good avenue for a strong supporting role for AI. And teaching people to use it as a support mechanism.
Potato, potato. Getting killed by a doctor fucking up is the third leading cause of death in America and doctors VERY RARELY ever face consequences for this.
Doctors -- and the wider medical systems they're part of -- are certainly not perfect. I think there are issues with the American health system (if it's even meaningful to refer to "a system" there) that are crying out for reform and improvement.
But your sort of inflammatory comments are neither accurate nor helpful. That's just mud-slinging to try and score cheap points.
[Edited to add:] To quote from your own link above:
> The researchers caution that most of medical errors aren’t due to inherently bad doctors, and that reporting these errors shouldn’t be addressed by punishment or legal action. Rather, they say, most errors represent systemic problems, including poorly coordinated care, fragmented insurance networks, the absence or underuse of safety nets, and other protocols, in addition to unwarranted variation in physician practice patterns that lack accountability.
IMO, "killed by a doctor fucking up" is not a fair summary of that.
You can only sue for negligence if you can prove you were going to win without the negligence. And it will cost you ten thousand dollars just to begin the process.
So if there's any doubt that you're going to win - any at all - your own $600/hr lawyer can flat out fuck you over. And there isn't a fucking thing you can do about it.
> You can only sue for negligence if you can prove you were going to win without the negligence. And it will cost you ten thousand dollars just to begin the process.
That's not true. For instance, missing a filing deadline is considered professional negligence, regardless of the strength of the case.
However, there seems to still be the caveat that missing that deadline needs to have "caused you harm" - which entails proving that you would have won your case otherwise, no?
Following your line of thought, I would love it if software engineers lost all their personal privacy every time their product got hacked or was built to spy on others.
This is the first instance of the technology I've seen that made sense to me. Sure, for something like law, it actually makes sense to have AI-assisted — or even full AI — sessions.
Says something about our priorities that _this_ is what gets shut down, but the battle that artists are having trying to stop their work being used for training, is dismissed as Ludditism.
Here’s a Twitter thread where a lawyer used their service for a few items. The results are significantly subpar. The noise about criminal referrals is just cover for the fact that their service was so bad, in my opinion.
It's too bad they didn't have a competent AI lawyer they could have used to review their plan for gaping holes like violating the state unauthorized-practice-of-law statute and local courtroom rules.
If they had, they could have saved themselves a lot of trouble, or designed a less-illegal publicity stunt.
You can't just start practicing law without a license, you'll ruin somebody's life; they'll assume you, or the computer, knows what is going on, when in fact you just want Free Dollars.
I don't want Dr. iPad Joe who spent a grand total of 15 minutes learning how to use ChatGPT making legal, medical, engineering, or other important decisions for me or the place I live.
Now, I am of course free to use ChatGPT as a private person to "come up with my own legal arguments", but should a company be allowed to sell me a ChatGPT lawyer? No. They shouldn't be allowed to sell me unlabelled asbestos products either.
I know we all hate regulations, but some of them exist for a reason, and the reason is Bad Shit happened before we had regulations.
I find the need for lawyers a tragedy. Interactions with the judicial system are often some of the most important events in a person’s life. The fact that it’s necessary to pay someone hundreds or thousands of dollars an hour to help navigate the arcane process is sad and shouldn’t be necessary. It would be one thing if laws were meaningfully written down, so that anyone could read the statutes and build their own argument, but the laws are not written down in a way that has meaning unless you are willing to wade through centuries of case law.
Professional advocates aren't a result of any specific legal system - if I'm at risk of having my life's savings and achievements summarily destroyed then I want someone of waaaaay above average verbal and emotional intelligence, who is thinking clearly and not under any pressure themselves, explaining why that shouldn't happen.
There is a problem where the laws are so complex and numerous that it is no longer practical to understand them or follow them all. People have a bias and don't seem very good at separating "good idea in the current context" from "thing that should be legally required", let alone navigating the complexity of the phrase "cost benefit analysis". Anyone who lives life while obeying all laws to the letter is at a serious disadvantage to the average human - although since it is impossible to know what all the laws are, it is unlikely anyone could do this.
But that arcanery isn't what drives the need for lawyers. You'd have to be crazy to engage with something as powerful and erratic as a legal system on your own. And crazy people do frequently represent themselves already.
In part I understand what you mean. I think it is extremely important that courtroom procedure doesn’t get so complex that it is impossible for an individual to cope with. In practice a lot of judges go out of their way to make self–representation possible. However, I don’t think that having professional lawyers represents a tragedy. Specialization is very important, and we would all be worse off without it.
Teaching is even more important, and we use professional teachers. Building a house is also an important moment in our lives, and most people would do well to accept the advice of a professional architect.
I disagree, I think for important things like the rules that govern us (i.e. the legal system) we need to be able to fully understand and interact with them. Imagine if voting was so complicated that you had to pay someone to vote for you!
Likewise, taxes should be simple and understandable and doable. Don't undersell our importance as citizens; we should demand more because we deserve better!
Yeah, but the rules that govern our lives in society shouldn't need a PhD to understand.
There are often daily situations where both citizens and police don't really know what the law is.
Reform is proven to be really hard. But that's the tragedy.
For instance, taxes shouldn't really be more complicated than filling in a simple automated form. Also for businesses. And it should really be the burden of the taxing government to make it all clear. But it's not, and it's a mess, and the burden is put on the people.
This is by design. Governmental systems are "captured" by special interests and made intentionally obtuse and complex as a barrier to entry. Lawyers and judges are a guild that works to make the law complex and extract rents from the productive economy. Over 1/3 of the U.S. Congress are attorneys as well.
I honestly don't think there is such a malicious intent behind it. Perhaps in some small instances.
Generally, the fact of the matter is simply that law is highly complex and the way it evolves is almost always by creating new laws, not getting rid of old ones. That's unfortunate obviously, but just like you don't just rewrite the Linux kernel, you can't just reset the legal foundation.
Some, and maybe most, of the complexity is organic. But there are specific instances, like the tax code, that have been kept intentionally complex at the behest of special interest groups.
And of course, laws are made mostly by lawyers. So they don't have much of an incentive to change things.
Is the need to use an expert the issue or rather the price point? Why would it be wrong to avail yourself of someone else's expertise (and people use lawyers in non-case law jurisdictions, too)?
Not sure anyone really would want to operate on themselves (because the need for a surgeon in an important event in their life is somehow "wrong").
Where I live, insurance for civil litigation is actually pretty cheap. For criminal cases, my understanding is that in a lot of places you will be given a lawyer if you cannot pay for one as a defendant.
In Denmark there is "fri process" that will ensure a lawyer is provided when really needed and you can't afford it – my guess is that other countries have similar systems.
> It may differ in your country, but it is unlikely.
It differs in my country, and it is very likely. The US follows an anglo approach to law. No country in the EU follows that – I do understand that it is the easiest to assume that other countries work like you expect, though not very productive.
I assume you mean a US Attorney. OTOH, US Attorneys and Federal Public Defenders (in the judicial districts that have them; it's a district-by-district decision under the governing law) may not be the most knowledgeable about this, since most cases are tried in state court and the federal indigent defense delivery system is very different from most state systems, both in structural model and caseload.
We all use technology to treat ourselves in lieu of a surgeon all the time, whether it's a Google search, a plaster, or cough medicine or whatever. Are you going to give up all the advances that differentiate your situation from that of someone in say, 17th century Europe, because an expert should do it because they're an expert?
No thanks. I'll take advances that make things easy enough to avoid experts wherever I can get it and leave the bloodlettings (which, with lawyers, will be from your bank account) to others.
That’s not what he said. The key word there was _operate_, not _use a band–aid_. I wouldn’t recommend trying to take out your own appendix; it’s a really bad idea.
Yea, but he was already a professional practicing surgeon! The rest of us should not attempt this; we should hire a professional. It is similarly preferable to hire the services of a professional lawyer most of the time.
You've cherry-picked a procedure out of the thousands possible. The reason it's preferable to use a surgeon for things like appendicectomy is because there isn't a technology to do it for yourself or treat it before it needs surgery. There are, however, other treatments available for other maladies that mean you don't need a surgeon or won't need a surgeon, and thus surgeons can concentrate on other stuff that we don't have better, cheaper ways of taking care of.
If AI can take care of parking tickets and small claims, perhaps speed up the process by making lawyers quicker at their jobs et cetera et cetera, then it's all good. If it puts pressure on lawmakers to simplify the law, all the better.
> Is the need to use an expert the issue or rather the price point? Why would it be wrong to avail yourself of someone else's expertise (and people use lawyers in non-case law jurisdictions, too)?
Even needing an expert at all is an issue - the law that governs society needs to be accessible to members of society, long before they reach the point of litigation.
It might also be a function of legal tradition/system. In some places laws are quite easy to read for me, in others I find it much tougher (as a non-lawyer).
"need to use an expert the issue or rather the price point"
Both, but for most people it is simply the price point, so this is the more important issue.
I think no one has a problem with a legal expert being necessary when setting up complicated contracts with multiple parties involved, but for very basic things it should not be; rather, the laws should be clearer and simpler.
>Not sure anyone really would want to operate on themselves (because the need for a surgeon in an important event in their life is somehow "wrong").
Not operate, but since over here in Europe just about any piece of paper passes as a prescription, I tend to print my own. (Most people don't know this, but EU pharmacies are required to accept prescriptions from other EU countries. There's no standard format or verification procedure, so forgery is trivial even if your country has a more secure domestic system)
What's the point of going to (or even calling) a doctor for an antibiotics prescription? It's not like they're going to perform blood tests before prescribing. Want some Cialis for the weekend? Why go to a doctor? You can just pull up the contraindications on Google. Why bother doctor shopping for Ozempic? Just print your own prescription.
At least in Switzerland, I always had to have blood tests done before the doctor would prescribe antibiotics. The core issue you have is with the doctor prescribing things willy-nilly.
That might be a thing in some EU countries, but it's certainly not the norm across the EU. You can still buy antibiotics without a prescription in many EU countries, for example in Spain it's entirely dependent on the pharmacist.
Pretty sure sometimes a doctor might know more than you about a prescription, or their educated guess about which antibiotic is appropriate is better than yours, for example.
You do realize that antibiotics are completely ineffective against a cold? You're wrecking your digestive system and risking antibiotic resistance for nothing. If your doctor is prescribing antibiotics, either they're a terrible doctor, or they're a bad doctor and you're a worse patient.
Yes, sorry. That's just the language barrier rearing its head. What I meant was strep throat; obviously there's not much of a point in taking antibiotics for a viral infection.
I don't need a doctor to inspect my tonsils, I have access to a phone with a flashlight.
And for what it's worth, I think I've taken antibiotics twice in the past 4 years. Always according to the instructions on the packaging.
"The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci."
I.e. it's the equivalent of a person effectively studying the actual law and then representing themselves in court, just in a more optimal manner.
Even if it fails, it was supposed to be something trivial like a speeding ticket, because after all, this is a test.
And funny enough, the answer of will it work has already been answered. If law firms believed it was bullshit, they would just put a very good attorney on that case and disprove it. Barring it from entry with threat of jailtime pretty much proves that they are full of shit and they know it.
> I.e. it's the equivalent of a person effectively studying the actual law and then representing themselves in court, just in a more optimal manner.
It's not equivalent at all. ChatGPT and DaVinci have not "studied the law" in the same way as any human would.
> If law firms believed it was bullshit, they would just put a very good attorney on that case and disprove it. Barring it from entry with threat of jailtime pretty much proves that they are full of shit and they know it.
This is a traffic ticket case. He's not up against Sullivan & Cromwell, he's up against some local prosecutor. I'm sure if some white shoe law firm were being paid hundreds of thousands to defend a case against a guy using ChatGPT, they'd be fine with it.
Even though we have an adversarial system, the state can't just let ordinary folk hang themselves with cheap half-baked "solutions". It would be unjust/bad press (delete as appropriate to your level of cynicism). That's why we have licensing requirements, etc.
So what if I just decided of my own accord to use ChatGPT and train it myself? Or someone on GitHub made a fully trained version of it available in a Docker container, for free?
Then nobody would issue legal threats to you for selling "Robot Lawyer" services.
But you probably wouldn't get permission to use Google Glass in the courtroom either, so you'd have to commit your legal arguments to memory as well as hoping the AI hadn't ingested too much "freeman of the land" nonsense...
What's the difference between self representation and practicing law without a license - is there an exemption for unlicensed practitioners when they are acting on their own behalf, or is this a distinct category somehow?
Think of ChatGPT as a search engine. Does it matter whether you use a search engine before the trial, perhaps bringing a big-ass binder to court with all possible responses, or you do it during the trial?
And any lawyer that can go up against GPT-3 and win will be a net benefit to the whole law community, showing that lawyers are worth the money.
I think the stakes are not that high in this particular application. I see it as something akin to Turbotax, it helps you navigate a difficult environment but you should also exercise judgement to not screw everything up.
Or do what the rest of the world does and make the (tax) environment simpler for the average person.
Turbotax, Quicken, etc. are a great warning: those companies lobby to increase the complexity of trivial matters (like personal tax returns). The same companies will do this with 'trivial' legal matters, and the only way forward is to buy their software.
I see that as a tool, I'm not sure why it's presented as "AI being the lawyer".
You're allowed to represent yourself in court, most of the time (and for parking ticket I'm pretty sure) you have no obligation to have a lawyer. Now if you want to pay for a tool that helps you represent yourself better, why not?
I had the same thought, but I guess if you're paying someone for legal counsel then they need to be held responsible for the service they are giving, however they give it. That's qualitatively different from buying a legal textbook and advising yourself from it, since you are the one deriving counsel to yourself from generally available information.
This position means nobody gets adequate legal representation unless they are wealthy, so essentially just 'screw poor people'.
Who's more likely to get out of a wrongful charge? A wealthy millionaire spending $1,000/hour on fancy lawyers, or a poor guy whose public defender had 1 hour to look into his case?
AI levels the playing field, and anyone campaigning against it wants poor people to continue to get railroaded.
ChatGPT lies. It makes up facts, sources, and nonsense arguments.
Lying to a judge is generally not a good idea. You can go from traffic ticket to contempt of court real fast if you start lying in court.
ChatGPT also assumes you're speaking the truth. If you ask it about a topic and say "that's wrong, the actual facts are..." then it'll change argumentation to support your position. You probably don't want your legal representation to become your prosecutor when they use the right type of phrasing.
> Lying to a judge is generally not a good idea. You can go from traffic ticket to contempt of court real fast if you start lying in court.
Then I suppose that's just the risk the defendant takes, isn't it? Let people use ChatGPT, if the rope they're given ends up hanging enough people, that'll be the end of that, won't it?
Also, everyone is ignoring the possibility that this same person could've had ChatGPT generate a script (general outline of arguments, "what do I say if asked this" type of stuff), memorized it, and used that to guide his self-defense. Fundamentally, no difference. No one would've known, and no one would've objected.
To me, this move is less "oh we need to protect people from getting bad legal advice from a robot" and more "we're not even gonna let this thing be used a single time in court, to keep our jobs from being automated."
> Fundamentally, no difference. No one would've known, and no one would've objected.
I'm not a legal professional but it seems obvious to me that there is a fundamental difference, namely the one you describe just before that sentence. The whole legal system is built around and under the assumption that all kinds of people want to trick it, and judges tend to be allergic to this kind of reasoning. Memorizing legal arguments and getting live legal advice from earphones in your glasses are not the same thing. Besides, even lawyers are advised not to defend themselves in court, and it would generally be very bad advice for anyone to do so.
> Memorizing legal arguments and getting live legal advice from earphones in your glasses are not the same thing.
Only in the strictest sense. Let's say the person memorizing ChatGPT's directions handles their case in the exact same manner as if it was being relayed to them live (i.e., the set of statements/questions from the judge lined up perfectly with what ChatGPT presented in its script). What then? Same outcome, different delivery method. We're kind of splitting hairs with the "live legal advice" thing. The defendant could bring a pile of law books with him and consult those without anyone blinking an eye. The objection seems to boil down to "well OK, if you want to represent yourself you better not consult an intelligent system to help you form your defense." Why not though? Seems more about job protection than anything else.
> Besides, even lawyers are advised not to defend themselves in court, and it would generally be very bad advice for anyone to do so.
And I say: let people discover the downside of using ChatGPT for defense if it's so inept. Bad outcomes are the best way to prevent widespread usage, not pre-emptive bans in the interest of keeping people from shooting themselves in the foot.
>Lying to a judge is generally not a good idea. You can go from traffic ticket to contempt of court real fast if you start lying in court.
If you knowingly lie because ChatGPT told you to that's on you. If you said something that you believed was true because ChatGPT said it to you then that's not perjury, it's just being wrong.
> You probably don't want your legal representation to become your prosecutor when they use the right type of phrasing.
When the worst case scenario is having to pay the parking fine, it might be worth taking this risk to avoid paying a lawyer.
Honest tangent that I'm dying to find an answer for. Is the assumption of truthful prompts something OpenAI decided should be there, or is it something very deeply baked into this sort of language model? Could they make an argumentative, opinionated, arrogant asshole version of ChatGPT if they simply let it off its leash?
Yeah, but notebooks are allowed in court and you could spill water on your notebook and confuse an 8 for a 0 and then read it out to court and it would be a lie.
Lying to judges is bad, so notebooks should be banned. Likewise, anything typed should be banned because we could have hit the wrong key, causing you to lie to the judge.
Unpaid traffic tickets are probably many people's gateway to their first arrest warrant. Then once in the system it is hard to escape. It is a real thing.
Then once this process starts you automatically get
* Suspended driver’s license
* Ineligible to renew your driver’s license
* Ineligible to register your vehicle
* Vehicle being towed and impounded
* Increased insurance premiums
Traffic tickets and parking tickets are two different things though. An unpaid parking ticket is unlikely to get you arrested anywhere, but an unpaid speeding ticket will eventually lead to some pretty unpleasant consequences, which yes, might include arrest (to bring you in front of the judge to explain yourself).
ChatGPT: "Your honor, that couldn't have been me, as I drive a red Mustang, not a blue minivan, and was in Nepal climbing a mountain at the time.."
Defendant: "Your honor, that couldn't have been me, as I drive a red Mustang, not a blue minivan, and was in Nepal climbing a mountain at the time."
Prosecutor: "Um, this photo clearly shows your face in this blue minivan, and there's no evidence you've been to Nepal."
Judge: "I'm holding you in contempt of court, and sentence you to 7 days in jail for perjury."
8<----
I don't think the argument is that AI is never allowed to represent someone in court; just that before it happens, a sufficient amount of vetting must be done. At a bare minimum, the legal AI needs to know not to lead the defendant to perjure themselves.
I think the way forward might be an arbitration case, where they pay an actual legal expert in the right position to make binding decisions outside of the context of the normal law system.
In voluntary arbitration, you can bend the rules a lot more than in an actual court case.
For instance, as I discovered recently going through this process myself - here in the UK when applying for British citizenship you have to disclose any court orders against you. Now here's the thing - if you were given a ticket for speeding, accepted it and paid it then that's it, no harm done. If it's fewer than 3 tickets in the last 5 years then you don't even need to list it on the application form.
However, if you went to court to contest it and lost, then you now have a court order against you - and that's an automatic 3 year ban on British citizenship applications, and even after that you always have to list it as a thing that happened and it can be used to argue you are of "bad character" and be used to deny you the citizenship.
So yes, failing to get rid of a traffic ticket (in the court of law) can absolutely ruin your life.
How is this known upfront? I've lived for over a decade in this country without knowing this, until I actually applied for my citizenship last year. I'm just lucky I never went to court to contest a speeding ticket (because I never got any) or I could have screwed myself over without even knowing.
Also define "small group" - nearly 200k people apply for British citizenship annually, and I bet most of them have no idea contesting a traffic ticket can cost them a chance at becoming citizens.
I used to drive a cab and people would tell me why they were in the cab quite often.
My favorite was this guy who got a ticket on a bicycle for not having a headlight, moved out of state and ten years later had his car impounded for driving without a license because they apparently suspended it for getting (or, more correctly, not paying) a ticket he got on a bicycle.
I’m guessing he never tried changing his license to the new state because Arizona licenses are good until you’re 65.
As the AI gets better, people will trust it with more and more kinds of cases, and cases of increasing complexity. If people want to pay for a real licensed lawyer they are still able to do so.
AI is just informed search, a dwarf sitting on the shoulders of human knowledge. There were medical "expert systems" in the 2000s, yet we still have doctors.
In my understanding, in most cases AI will be a glorified assistant, not an authoritative decision-maker. Otherwise it collides head-on with barriers and semis. I wouldn't trust such a system even with a parking ticket, let alone my life.
We're just at the top of a hype-cycle now. AI can do new things, but not as well as we dream or hope.
Any kind of assistant can make mistakes. But a human assistant can be made to show their work and explain their reasoning so you can check their output. If ChatGPT says "this thing is totally legal" or "don't worry about that rash", how am I to validate its "reasoning"? How do I know where it's drawing its inferences from?
> How does failing to get rid of a traffic ticket ruin someone's life?
The downside of contesting a traffic ticket is not “failing to get rid of the ticket”. The ticket amount is effectively a no-contest plea-bargain offer, not the maximum penalty for the offense, not to mention the potential additional penalties for violating court rules.
Wait until you hear about this Electric Car company, using real people to beta test untried self driving software, on Real roads against other drivers and pedestrians...
> I know we all hate regulations, but some of them exist for a reason, and the reason is Bad Shit happened before we had regulations.
That's not the only reason regulations exist.
And most 'Bad Shit' can already be dealt with via existing rules, instead of specific new regulations. But making new rules sounds good to the voters and can also be a powergrab.
Many legal things are evaluated lazily: the law may not specify exactly what the vehicle is, but if such need arises, there are tools, like precedents and analogy, to answer this question.
The way to think about it is like a logical evaluation shortcut:
    if not ADA_EXEMPT and IS_VEHICLE:
        DISALLOW_IN_PARK
Since wheelchairs are ADA exempt, the question of whether a wheelchair is a vehicle will probably never arise.
Using the IT analogy, it's less like C++, where each statement must pass compiler checks for the application to merely start, and more like Python, where some illegal stuff may peacefully exist as long as it's never invoked.
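To make that Python analogy concrete, here's a minimal sketch (the names are hypothetical, purely to illustrate the lazy-evaluation point): a rule can reference a predicate that was never defined, and nothing breaks until someone actually has to answer that question.

    def is_ada_exempt(thing):
        # Hypothetical exemption list, standing in for the ADA carve-out.
        return thing in {"wheelchair", "mobility scooter"}

    def may_enter_park(thing):
        # Short-circuit: for exempt things we never even ask the
        # (deliberately undefined) "is it a vehicle?" question.
        if is_ada_exempt(thing):
            return True
        return not is_vehicle(thing)  # NameError only if evaluation gets here

    print(may_enter_park("wheelchair"))  # True, and no error is ever raised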
None of ADA_EXEMPT, IS_VEHICLE, or DISALLOW_IN_PARK can be easily formally defined. And the mere mention of "wheelchair" adds an additional ADA-related logic exemption. What about bicycles? Strollers? Unicycles? Shopping carts? Skateboards?
And even if IS_VEHICLE was formally defined, that doesn't help, because the concept isn't reusable. It's perfectly normal for "No vehicles allowed in park" and "No vehicles allowed in playground" to have different definitions of what counts as a vehicle, based on what would seem reasonable to a jury
I don't know if I've misread some people here, but it's silly to insist that the law be a formal system. It's impossible. Common Law uses judicial precedent to fill in ambiguities as they turn into actual disputes. If you had to formally define everything, then a) it would run into the various Incompleteness Theorems in logic (like Goedel's) and the Principle of Explosion, so it would go hilariously wrong b) No law would ever get passed, as people would spend years trying and failing to recursively define every term.
Appropriately enough, Gödel had this very problem when getting US citizenship, where he tried to argue that the law had a logical problem:
"On December 5, 1947, Einstein and Morgenstern accompanied Gödel to his U.S. citizenship exam, where they acted as witnesses. Gödel had confided in them that he had discovered an inconsistency in the U.S. Constitution that could allow the U.S. to become a dictatorship; this has since been dubbed Gödel's Loophole. Einstein and Morgenstern were concerned that their friend's unpredictable behavior might jeopardize his application. The judge turned out to be Phillip Forman, who knew Einstein and had administered the oath at Einstein's own citizenship hearing. Everything went smoothly until Forman happened to ask Gödel if he thought a dictatorship like the Nazi regime could happen in the U.S. Gödel then started to explain his discovery to Forman. Forman understood what was going on, cut Gödel off, and moved the hearing on to other questions and a routine conclusion"
>Many legal things are evaluated lazily: the law may not specify exactly what the vehicle is, but if such need arises, there are tools, like precedents and analogy, to answer this question.
That's how common law and precedents work in the US system. Case A from 1924 said cars were vehicles, but bikes weren't. Case B from 1965 said e-bikes weren't vehicles. Case C said motorcycles were vehicles. And then the judge analogizes the facts and finds that an electric motorcycle is a vehicle so long as it's not an e-bike.
But the administrative law side of things works the opposite. They publish a regulation just saying "e-bikes above a certain weight qualify as vehicles under Law X."
An example in the UK yesterday. Climate protesters glued themselves to a petrol tanker and were charged with tampering with a motor vehicle. The protesters argued that the bit with the petrol was a trailer, not a motor vehicle. The judge agreed and acquitted them. https://www.bbc.com/news/uk-england-london-64403074
The human in me says "thank God". Because there are a myriad of valid situations where you cannot turn on your signal 300ft before a lane change, but you should always do it in a reasonable time :)
If I turn on a new street and there's a car parked 200ft down the lane or if a kid jumps on the road or if I become aware of an obstacle or a car cuts me off or I want to give somebody room at a merge etc etc... I may not be able to do it in 300ft but I should still try to do it in reasonable time.
There's no "winning". Overly precise is inhumane in some scenarios; overly vague is inhumane in others.
Perhaps we could both give a vague description, and also a precise condition which is to be considered a sufficient but not necessary condition for the vague condition to be true?
Such as “must signal within a reasonable time (signaling at least 300ft beforehand while not speeding is to be considered a sufficient condition for signaling within a reasonable time)”
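A toy sketch of that shape, under hypothetical names and a made-up threshold (nothing here tracks any real statute): the precise clause acts as a safe harbor inside the vague standard, sufficient but not necessary.

    def signaled_reasonably(distance_ft, was_speeding, judged_reasonable):
        # Safe harbor: signaling at least 300 ft ahead while not speeding
        # is automatically "reasonable" (sufficient, not necessary).
        if distance_ft >= 300 and not was_speeding:
            return True
        # Otherwise fall back to the vague standard, decided case by case.
        return judged_reasonable

The precise clause never makes anyone worse off; it only adds certainty on one side of the line.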
Downside: that could make laws even longer.
Hm, what if laws had, like, in a separate document, a list of a number of examples of scenarios along with how the law is to be interpreted in those scenarios? Though I guess that’s maybe kind of the sort of thing that precedent is for?
What’s the difference between “precedent” and “case law”? I had thought that when I said “Though I guess that’s maybe kind of the sort of thing that precedent is for?” that that covered things like citing “Roe v. Wade”.
You're more charitable than me: I assume there will be infinitely more times where the imprecision is used for probable cause for a stop, than there will be times where someone was going to pull you over because you properly responded to a road hazard
But I think there's a difference between "intent of change" and "abuse of change" / "threat surface of the change". Sometimes there's a clear, direct line between the two, but (and this is me being charitable:) I think a lot of the time there isn't. Which is to say, I don't think it's necessarily a contradiction that a) The law was changed to make things better/easier for people while b) In actual real world it can or will be abused a lot to make arbitrary trouble - the latter will depend a lot on place/politics/corruption/culture/societal norms/power balance/etc.
Reasonable time is determined by case law, and when it's a judge deciding, it's the judge's estimation of what a reasonable juror in that jurisdiction would think about the case. It's not as woozy as it seems, and is usually called an objective standard in the legal jargon. It's something that could conceivably be determined by a computer looking at all the factors that a judge would look at, and/or the relevant jury instructions that might frame the issue for a jury.
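If you wanted to caricature "a computer looking at the factors a judge would look at", it might be something like this deliberately toy sketch; the factors, weights, and cutoff are invented for illustration and are not any real jury instruction:

    def signal_time_was_reasonable(facts):
        # 'facts' is a dict of the kinds of things a fact-finder might weigh.
        factors_favoring_reasonable = [
            facts["distance_before_turn_ft"] >= 300,
            not facts["was_speeding"],
            facts["responding_to_hazard"],       # a hazard can excuse a late signal
            facts["traffic_density"] == "light",
        ]
        # Crude stand-in for the "objective standard": most factors must favor it.
        return sum(factors_favoring_reasonable) >= 3

    signal_time_was_reasonable({
        "distance_before_turn_ft": 120,
        "was_speeding": False,
        "responding_to_hazard": True,
        "traffic_density": "light",
    })  # True under this toy rule: 3 of 4 factors favor "reasonable"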
It doesn't have to be precise. The founder should wear his own glasses, go to court to defend himself, and use a fusion technique: have a lawyer and his AI both reach him through his glasses. If he loses, he says "we have a bit of work to do"; if he wins, he wins. Either way, great publicity.
My point was not about necessary ambiguity where precision is not attainable. It was about today's inability of the legal profession to write concise conditions within contracts or laws.
E.g. as someone else said in this thread, there is useful ambiguity in requirements like „within reasonable time“. But if you are enumerating a bunch of things and their relationships, ambiguity is often not what you want, but it is what you get without some clear syntax.
In my experience it’s not uncommon to stumble upon legal texts like „a, b and c or d then …“. But what does that mean? Is this supposed to be „(a && b && c) || d“ or „(a && b) && (c || d)“? That’s stuff that could easily be clarified at the time of writing by just using parentheses. Or maybe by using actual lists with one item per line instead of stupid CSV embedded in your sentences.
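For what it's worth, the two readings really are different formulas; a quick truth-table check (a minimal Python sketch) shows exactly where they disagree:

    from itertools import product

    def reading_1(a, b, c, d):
        return (a and b and c) or d

    def reading_2(a, b, c, d):
        return (a and b) and (c or d)

    # Print every assignment on which the two parses give different answers.
    for a, b, c, d in product([False, True], repeat=4):
        if reading_1(a, b, c, d) != reading_2(a, b, c, d):
            print(a, b, c, d)  # e.g. a=False, b=False, c=False, d=True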
This shows how little experience you have with the legal system. Everyone who doesn't know it expects the law to be precise; everyone who works in it knows how imprecise it is, and sometimes that is deliberate because of all the variables involved that might mitigate or aggravate the charge, assuming there even is a charge.
The difference between tax evasion and tax avoidance might be the smallest provable piece of evidence. A word, an email, an assumption, an omission, etc.
> The difference between tax evasion and tax avoidance might be the smallest provable piece of evidence. A word, an email, an assumption, an omission, etc.
This seems like evidence that I'm right, not that I'm wrong. The tiniest facts matter, and an AI that is prone to making up facts wholesale would totally screw up a case.
> One wonders why we have not developed something explicit like mathematical notations for legal stuff.
1. Laws are written or at least voted on by representatives, and they don't vote for things that they don't think they understand. Also, they're pretty regularly swapped out and often totally bonkers. Especially at the state level.
2. Things change. Look at how the rules around search and seizure are applied to digital data and metadata.
3. Most importantly, the imprecision is intentional. "Beyond all reasonable doubt" has no definition because it is up to the person rendering judgement. The courts decide the bounds of the law, and within those bounds people decide how to apply them.
It was, and the context was the well-known tendency of the Washington press to only cover the fight until a bill passes, and only then turn toward explaining the substance of what just passed.
(It makes sense from the press perspective, as the substance is changing constantly in big bills right up until it passes… that’s what the fight is all about.)
It's not about the press, it's about how until a House passes a bill that is sent to reconciliation, there literally isn't "a bill"; there is a constant flux of amendments.
those lists also include something like ".. but not limited to ..."
Many legal documents are purposely not pinning themselves down on specifics, because they don't want an agreement circumvented on technicalities, when it should be pretty clear to reasonable people what is intended in an agreement.
This is so fucking dumb. As if you need a lawyer to contest a parking ticket in the first place.
The last time I got a parking ticket I had photos and documentary evidence that I should not have been liable.
After I lodged my intent to contest the fine, the council sent me a letter saying how they win 97% of cases and I should just pay up now to avoid the risk.
I called bullshit and turned up on my court date. There were a bunch of cases heard before mine, 3 of which were parking violations.
In all 3 cases the defendants received a default judgement because the council didn’t even bother to send someone to fight the case.
My case got the same result.
Maybe I would hire a lawyer to sue the council for intimidation over the letter they sent, but I sure as hell wouldn’t use an AI lawyer for that!
The issue here (as with many disruptive tech companies), is the regulatory system. It is illegal in most states to give legal advice if you are not licensed to practice law. If DoNotPay isn't licensed to practice law in California, they can't do this. And unless they have a plan to either get licensed in many states, or somehow change the law, then their business model sucks. It sounds to me that they haven't actually solved the real problem with the $28 million in investment money they took. The particular AI tech will have very little bearing on the company's eventual success or failure.
The real problem isn't the complexity of the arguments in most cases (as your story shows). The real problem is the complexity of the regulatory system. Tesla has to deal with the dealership rules in several states. Fintech companies that handle real money have to deal with financial regulations or else they are smuggling money. Biotech startups have to follow the FDA rules or they are just drug dealers. Legal advice companies will have to deal with the rules too- and their opponents are particularly challenging.
One could also argue the real problem is the tech industry constantly ignoring regulations that were put in place for good reasons. Car dealerships are for sure a clear example of regulatory capture, but “legal advice from lawyers”, “medicine from doctors”, “insurance from companies that can prove they can pay out”, and “equities backed by actual assets” all exist for good reasons.
> One could also argue the real problem is the tech industry constantly ignoring regulations that were put in place for good reasons
This is it. The tech company wants the ability to sell a shoddy product to its customers, and the legal system said no.
And frankly, I don't know how anyone could honestly claim (without being ignorant or deluded) that feeding legal arguments into court, as the output of modern-day voice recognition piped into ChatGPT, isn't shoddy.
> ...but “legal advice from lawyers”, “medicine from doctors”, “insurance from companies that can prove they can pay out”, and “equities backed by actual assets” all exist for good reasons.
Exactly. The legal system is no joke, and if there weren't regulations about who can practice law, you'd have all kinds of fly-by-night people getting paid to do it while getting their clients thrown in jail.
>The legal system is no joke, and if there weren't regulations about who can practice law, you'd have all kinds of fly-by-night people getting paid to do it while getting their clients thrown in jail
That sort of highlights the problem. The legal system is supposed to be about ensuring fair and impartial justice. What the legal system is actually about is providing jobs for people in the legal system.
Lawyers make laws, directly or indirectly, and thus the legal system has become insanely complicated and nearly impossible to navigate without paying the lawyer toll. It's more about hiring your own bully to keep other bullies from bullying you than any airy-fairy "justice". The "never talk to cops" video comes to mind, where the lawyer gives a few examples of how a perfectly law-abiding person can run afoul of the law without meaning to.
I've often said that if you really want to make a lawyer squirm, suggest that we have socialized law care. Most modern countries have some version of socialized or single payer health care, so why not make it the same for legal services? After all, fair and equal justice under the law is definitely something most national constitutions guarantee in some way, but getting a hip replacement is not. Why should rich people get access to better legal service than regular people?
>> The legal system is no joke, and if there weren't regulations about who can practice law, you'd have all kinds of fly-by-night people getting paid to do it while getting their clients thrown in jail
> That sort of highlights the problem. The legal system is supposed to be about ensuring fair and impartial justice. What the legal system is actually about is providing jobs for people in the legal system.
> Lawyers make laws, directly or indirectly, and thus the legal system has become insanely complicated and nearly impossible to navigate without paying the lawyer toll.
And software has become insanely complicated and nearly impossible to navigate without paying the software engineer toll.
Life is complicated, and so is the law. Maybe it's just harder to ensure "fair and impartial justice" than you think? I'm not saying the system is perfect, but railing against lawyers and getting rid of legal licensing is not the way to get to a better one.
> I've often said that if you really want to make a lawyer squirm, suggest that we have socialized law care. Most modern countries have some version of socialized or single payer health care, so why not make it the same for legal services?
You might have said that, but I doubt it would actually make many real lawyers squirm any more than the idea of socialized software engineering would make developers squirm. And in any case, something like that already exists: the public defender's office.
>Maybe it's just harder to ensure "fair and impartial justice" than you think
I'll admit that's possible, but you have to also admit that the current legal system (at least in the US, I don't know about elsewhere) is, shall we say, over-engineered?
The software example you give cuts both ways. Yes, making even a simple Windows application can be very complicated. But how much of that is due to Windows itself? Can your application be replicated with a combination of existing Unix tools? Depends on the application, of course, but there is certainly a lot of cruft floating around the Windows API space.
And let's also not forget that (often) one of the main purposes of commercial software is to lock you in to that particular piece of software. Same same with the legal system and lawyers.
The jury system was supposed to cut through this sort of thing. Twelve regular folks could upend or ignore every law on the books if they thought the whole case was nonsense on stilts. A lot of work has gone into avoiding jury nullification for this reason.
> I'll admit that's possible, but you have to also admit that the current legal system (at least in the US, I don't know about elsewhere) is, shall we say, over-engineered?
I'm getting "nuke the legacy system without bothering to really understand what it does" vibes here.
> The jury system was supposed to cut through this sort of thing. Twelve regular folks could upend or ignore every law on the books if they thought the whole case was nonsense on stilts. A lot of work has gone into avoiding jury nullification for this reason.
Jury nullification is not an unalloyed good. It can (and has) gotten us to "he's innocent because he murdered a black man and the jury doesn't like blacks."
>I'm getting "nuke the legacy system without bothering to really understand what it does" vibes here
Not really. Are you suggesting that it isn't pretty difficult to navigate the legal system? Saying something is wonky and needs to be fixed does not automatically mean "Anarchy Now!"
My point was that I think I do understand what the system does, and what it does is (largely) provide lots of work for people in the legal system. You see this when buying a house. You end up writing a bunch of checks to companies and people and it's not clear exactly what actual necessary service they provide, but it's not like you can NOT do it. Their service is necessary because the real estate laws make it necessary.
In other industries we know this as regulatory capture. This is just regulatory capture of the regulatory system.
>Jury nullification is not an unalloyed good.
Nothing is an unalloyed good. To bolster your example, OJ got to walk as well. This is why there is an entire industry built up around just the jury selection process.
>> I'm getting "nuke the legacy system without bothering to really understand what it does" vibes here
> Not really. Are you suggesting that it isn't pretty difficult to navigate the legal system? Saying something is wonky and needs to be fixed does not automatically mean "Anarchy Now!"
No, I'm suggesting that complexity may often have good reason. Without specific reform proposals, what you're saying registers similarly to "coding in programming languages is hard, so simplify it by coding in natural language!"
> You see this when buying a house. You end up writing a bunch of checks to companies and people and it's not clear exactly what actual necessary service they provide, but it's not like you can NOT do it. Their service is necessary because the real estate laws make it necessary.
Being ignorant of the value of a service doesn't make that service unnecessary. And honestly, I bet you could "NOT do it" -- if you could pay cash for the property. IIRC, a lot of that is actually required by whoever you get your mortgage from, because they know the value of it.
I think requiring me to write a policy paper in HN comments is a bit onerous. In any event, I can't much help if my mild criticism is interpreted on your part as something deeply nefarious.
>Being ignorant of the value of a service doesn't make that service unnecessary.
The fact that a service exists does not make that service necessary. Or do you always buy the protection plan from Office Depot when you purchase a stapler? Anyway, I agree that the mortgage companies find great value in all of their various fees.
> I'll admit that's possible, but you have to also admit that the current legal system (at least in the US, I don't know about elsewhere) is, shall we say, over-engineered?
Huge Elon Musk rewrite the code from scratch vibes coming from you
> And software has become insanely complicated and nearly impossible to navigate without paying the software engineer toll.
Is this somehow bad? The most complicated software, by far, is proprietary software made by large companies, like Google, that are taking the penalty of increased complexity on their end in exchange for (somewhat) happy customers - and guess what? Their developers are paid boatloads of money, so they're getting a decent deal.
Meanwhile, many open-source software systems have managed to keep their complexity somewhat in check (at the expense of functionality).
Software is complex when developers make it complex, and users rarely have to care anyway.
> Life is complicated, and so is the law.
That doesn't follow. Life is complicated, and so is software? No, software is complicated in some cases because of business reasons, and in other cases because of poor design.
Furthermore, there's two massive differences between the two:
First, you're not legally required to use Gmail, but you are legally required to understand and follow the tax code - everyone is, unlike the vast majority of software, where you can pick and choose. "Ignorance of the law is no excuse" is factually true - therefore, the law has to be understandable to the vast majority of the population (not just the average - the law has to be understandable to those who failed out of high school and have an IQ below 80).
Second, code is an implementation detail that the vast majority of people don't have to interact with, but people do have to interact with the law directly, which means that a comparison between code and law is apples-to-oranges - the correct comparison is software interface to law, and virtually everyone you meet will tell you that it's easier to use Gmail than to understand the IRS tax code when reading it directly.
The evidence just keeps piling up that the legal system is overly complex.
We do have "socialized law care" for criminal cases in the USA. That's what a public defender is. If you cannot afford a lawyer one will be provided by the court. That is a constitutional right.
Of course they are overworked, have insane case loads, and the best attorneys are disincentivized from becoming public defenders. The system definitely needs an overhaul.
But the concept that justice should in theory be available even if you can't afford it is well established.
> I've often said that if you really want to make a lawyer squirm, suggest that we have socialized law care. Most modern countries have some version of socialized or single payer health care, so why not make it the same for legal services?
We do have socialized law in the US, in the form of public defenders.
> Why should rich people get access to better legal service than regular people?
Oh, is “better” what you’re talking about, not just access? This is different than what the first half of your paragraph implied. The answer, of course, is money. And rich people in all the “modern countries” you’re referring to always have access to “better” than what’s provided by all social services. Always. Unfortunate, but true, that money makes life unequal.
>Oh, is “better” what you’re talking about, not just access?
Public defenders are overworked and underpaid, and you know that. It's like having a RPN do your appendectomy. Fair and equal justice would have every lawyer be a public defender. It's not like if you go to the hospital you get to pick which doctor sews your finger back on after the bandsaw accident.
I'm not saying it's a good idea, but once you bring it into the conversation it makes both lawyers and socialized medicine advocates get a little uncomfortable.
As a lawyer, I agree. Much of the trope of 'the lawyers always win' has a lot of truth to it, believe it or not. And all of the incentives align in this direction, it keeps the legal profession fat and happy and beholden to monied interests while suppressing access to the non-wealthy. And the rich don't actually care that they're being constantly fleeced, because it's just a cost of doing business that is really pretty finite in comparison to profits that can be made. It's almost like it's a feature of 'the system' (combination of capitalism and common law) and not an unintended side effect.
>That sort of highlights the problem. The legal system is supposed to be about ensuring fair and impartial justice. What the legal system is actually about is providing jobs for people in the legal system.
umm, how are you going to disbar your AI attorney? I'm so tired of this narrative you are crafting. You put the cart before the horse, and then you pat yourself on the back for slapping the horse ass!
Really? In every country I've lived in, politicians write laws, judges set precedents, and lawyers only get to make arguments. True, the first two are often, if not always, former lawyers, but that seems as reasonable as how doctors get to determine best medical practice.
> True, the first two are often, if not always, former lawyers
You answered your own "Really?" question.
And doctors don't determine best medical practices. Lawyers also do that, albeit indirectly through malpractice lawsuits. Thus the "best medical practices" are all CYA maneuvers.
I think you are assuming the legal system is simple and basically a jobs program for lawyers. I think that is a very simplistic and unrealistic notion of the legal system. I also don’t think many lawyers are squirming about socialism or whatever.
Literally every regulation has pros and cons and those change over time with the makeup of the reality we live in. Something that was useful when passed may be hampering us now.
Plenty of regulations have been an obvious net negative for society when passed to anyone who crunched the numbers but have been passed anyway because of appeals to emotion, political optics and special interest lobbying.
Sure, but what's banned is surely not all medical or legal advice.
I can browse case law or US code thinking about my case - somehow this does not need a legal license. At the other end of the continuum, talking to a lawyer about my case obviously needs him to be licensed.
So now we're debating on which side of the cutoff using DoNotPay's robot must fall. The lawyers have made their mind ages ago that legal advice can only be dispensed by licensed humans.
> I can browse case law or US code thinking about my case - somehow this does not need a legal license.
Of course. With rare exceptions, court proceedings are public.
But being able to read court proceedings or judgments or anything at all doesn't mean that you know and understand the law. You know, the actual words that are written and codified that must be interpreted and adhered to with jurisprudence.
Not that lawyers actually do either. But at least they've been certified (by "the bar" association) to have some competence in the matter.
I don't think anyone is saying that using the bot is illegal. The issue is that DoNotPay is calling the bot a lawyer and therefore implying that it gives legal advice. Their website literally says "World's First Robot Lawyer." Someone who doesn't understand AI might wrongly think that their AI tools are qualified to represent them on their own.
I suspect that it would be much less of an issue if it was advertised as an "AI paralegal."
What if the DoNotPay bot does not give any advice, and just points out existing cases that it finds appropriate and their interpretation in its search results?
The funny thing is that doctors would be the canonical examples for most people. Yet, there is no license that stops a pediatrician from performing brain surgery. What stops the pediatrician from performing brain surgery is that no hospital would hire them as a brain surgeon, no insurance would insure their brain surgery and if something goes wrong they'd likely face a lawsuit they couldn't win. Why is the system able to judge the difference between a pediatrician and a brain surgeon but we need licensing to distinguish between doctor and non-doctor?
> there is no license that stops a pediatrician from performing brain surgery.
This isn’t the case everywhere. Where I am (New Zealand) each doctor has a scope of practice. You work within your scope. There may be conditions placed on a scope of practice too (eg supervision is required).
You can look up every doctor’s scope of practice and get a short summary of their training on the medical council website.
It's the credentialing. The pediatrician lacks the credentials of a neurosurgeon, just as non-doctors lack the credentials to work as a doctor. The hospital, insurance, and court would all be looking at the credentials.
Credentials are not the same as license. For example, software engineers do not need to pass any certification exams to practice software engineering, but companies still look at their education, years of experience, etc... Yes, it would make a recruitment process for doctors longer, as you would have to interview them, ask them questions to verify their medical knowledge, etc., but it is not impossible.
And yet, in your software example those companies overwhelmingly rely on degrees to credentialize candidates regardless of actual skill.
Licensing is merely a subset of the larger credentialing world. Even in your doctor example, the license is not the issue - board certification of a specialty would be the issue.
It does. Licensing has nothing to do with your credentials beyond requiring that you have them. The state bar doesn't care which law school you went to, your LSAT score, your GPA, etc.
My point? That credentials are separate from the license and aren't part of the same thing. That's why many professions don't have licensing. They serve completely different purposes.
I did look up the definitions and they don't support your argument
License: a permit from an authority to own or use something, do a particular thing, or carry on a trade
Credential: a qualification, achievement, personal quality, or aspect of a person's background, typically when used to indicate that they are suitable for something.
Sure, you can characterize the license as an achievement. But it's really just a license.
This is kind of an absurd example. The system is set up in such a way that such a person would be in enormous amounts of trouble and could even face criminal liability. They may not face a charge called “practicing medicine without a license”, but they would face negligence or other similarly severe charges. As well, the fact that this never, ever happens seems to indicate the current regulatory structure is enough for this.
It's called malpractice. What you described is prima facie malpractice and a key element of malpractice is that it is a licensed profession with a standard of conduct.
Cutting hair is one thing. But hairdressers also handle things like, for example, chemical relaxation of hair, which can be seriously dangerous in the wrong hands. I don't know where the answer lies for regulation. But it seems to be there for at least some reason.
Crazier case is places that braid hair and pose. In some states they are now required to get a license as a cosmetician (which takes longer on average than becoming a police officer in the US). The classes required for the license teach no skill relevant to the hair braiding. However, the hair braiding is in competition with the hair dressers who also control the licensing board.
> One could also argue the real problem is the tech industry constantly ignoring regulations that were put in place for good reasons
They were good reasons. By definition, disruptive technologies change the situation. Sometimes for the better, sometimes not. You have to leave room for innovation or you stagnate.
ChatGPT is not disruptive enough to be used in law, end of story. It's a very impressive language model, but like any language model it will hallucinate, inventing arguments that sound impressive on a surface level but bear no legal authority whatsoever. That's simply not acceptable in a courtroom.
Everyone’s permitted to represent themselves pro se, and a pro se litigant could obviously use ChatGPT. What one can’t do is offer ChatGPT as legal advice, and that still seems like a solid reason for regulation, given how terrible and inaccurate some ChatGPT output has been.
Does the US have McKenzie Friends? Seems like "No". You should get McKenzie Friends.
McKenzie Friends can't represent you in court, in most cases they're not allowed to address the court, but they can help you in all the other ways you'd expect, like quietly prompting you on what points to mention, keeping notes, ensuring you have the right paperwork. Friend stuff.
The US does not have them, and they are not legal. I agree that they could be useful. But in the US you have strictly two options- represent yourself to a court, or let a bar-certified lawyer represent you. Nobody else gets to help in court. Outside of court, legal assistants help lawyers with administrative stuff- doing legal research, organizing paperwork, etc. But the lawyer holds the sole responsibility to the court.
Unless DoNotPay has a strategy to change the law, they are in trouble. It seems that this case was a publicity stunt and not part of a larger strategy.
It's sort of a catch-22: law in the US is only a lucrative market to disrupt because the regulation and gatekeeping have made labor expensive. If lawyers didn't bill $300+/hr in the US, then an AI-powered startup to replace them wouldn't be cost effective. There's a joke I saw recently about a guy hiring a lawyer for an $800 traffic violation and getting a bill for $1200.
I can't speak to the price of a lawyer abroad, but a quick google seems to indicate US lawyer salaries on avg are up to ~2x as much as in some parts of Europe[1][2].
Only four states allow people to take the bar exam without earning a juris doctorate (J.D.) from law school. And three other states require some law school experience but do not require graduating with a J.D.
So in 43 states the answer is no. A chatbot never attended law school and doesn't have a J.D., so it can't take the exam. If it can't take the exam, it can't pass.
I suppose if you could get the chatbot permission to take the exam, a properly trained one could pass. But as I said in my post up a level, the issue isn't the AI chatbot. It's the rules.
Doesn't this just open the question of whether the chatbot can get a JD?
The other angle is whether the chatbot can be equivalent to a process which a proper person can rubber stamp. For instance, a professional engineer might run a pre-written structural engineering model against their building design and certify that the building was sound - and then stand up in court and say they had followed standard process.
It seems weirdest here that the court is treating the chatbot as a person. Lawyers use computer tools all the time for discovery, and then use that information to make arguments in court as a proper person.
You can represent yourself in court without being a lawyer, so isn't a person doing so just a proper person rubber stamping an electronic output?
It feels like this court decision, that an electronic tool is not a proper person, is some kind of case law that chat bots are people. I don't think they are.
The difference is the engineer is liable... how is an AI going to be liable. What is the point of holding an AI liable? If the company is going to be liable on behalf of the AI, what do you think is going to happen? They aren't going to provide the service...
Well, if an engineer builds a bridge and it falls down because the industry standard software they are using had a bug, I imagine the settlement would be paid by their insurance who would in turn sue the software vendor.
In your world X-Ray machine fries your leg and the manufacturer doesn't get sued. Of course the vendor gets sued.
This is why open source licences usually have some terms disclaiming responsibility. If you use them, it's your fault.
Now, if a hospital buys an XRay machine with that disclaimer, they are going to carry the payout. And if the machine doesn't have a disclaimer like that but the manufacturer has gone bust, the hospital is going to regret not doing normal procurement checks for vendor solvency.
But in this case - people self represent in court all the time based on bad information from youtube. I'm sure in future they'll type "write an argument for my case" into GPT before the trial and read it out. How is this different?
I'm uncomfortable because this feels like... the accused brings a law book to court and is told that "that book doesn't have a JD". The fact we are asking for software to have a human qualification is weird.
When you self represent you implicitly cannot sue for malpractice. The AI bot isn't self representation; it's representation in everything but name and liability, which they explicitly disclaim. You can characterize it however you want, but it's just facially the unlicensed practice of law. If someone wants to ask ChatGPT legal questions and dig their own grave, that's entirely different from a business that purports to offer legal advice but then disclaims any responsibility for it. Frankly, I don't know what's so confusing about that to you.
Firstly, that's not my claim. I merely said it passed the USMLE. Which it did.
Is it also inferior to clinicians? Yes, there's room to improve. But maybe next time read the whole paper before writing a comment.
> Clinicians were asked to rate answers provided to questions in the HealthSearchQA, Live QA and Medication question answering datasets. Clinicians were asked to identify whether the answer is aligned with the prevailing medical/scientific consensus; whether the answer was in opposition to consensus; or whether there is no medical/scientific consensus for how to answer that particular question (or whether it was not possible to answer this question).
And on this criteria, clinicians were rated as being aligned with consensus 92.9% of the time while the MedPalm model was aligned with consensus 92.6% of the time.
If the other 7-8% of the answers were so wrong the patient would've died, then yes. And that's the current obvious issue with these models, they present convincing hallucinations with conviction of correctness.
Medical practice is less about being right, and more about not being wrong. You can take more tests and ask for second opinions, but you can't undo administering a drug that kills the patient.
Entire institutions exist specifically to get rid of below average by test-based gatekeeping. You do not want your doctor or lawyer to be "below average" (worse than most people) in their jobs. Inferior test results mean exactly that, failing the test.
IMHO some regulation, and especially entry barriers, had good ideas behind them, but their cons overwhelm the pros nowadays (think barriers such as foreign doctors needing to re-study X years and re-practice Y years, or you simply cannot take certification exams if you do not come from Z school, or technically you may lose insurance claims on your house if you do some repairs but do not have certification K).
Since the world is already running like that, IT people should join the fun, otherwise we simply get screwed by other interest groups who are protected by those barriers. Since every one of those barriers simply increases the cost to society as a whole for the benefit of whoever is behind them, we should set up our own barriers. If you do not graduate with a CS/SE degree you cannot take certification exams, and if you don't pass those then you cannot do programming legally. We should further increase the cost to society to make everyone else realize the absurdity of those barriers.
I don't understand your statement. Because regulations have advanced various fields, like medicine, engineering, health, environment, to the point that we are seeing the fruit of that regulation, we need to get rid of the regulations? We don't have to worry about really unqualified lawyers, doctors, teachers, or engineers, etc. So what we need to do is go back to a time when we did?
Also, aren't there already numerous certifications you earn for various technologies that companies explicitly look for when hiring?
> It is illegal in most states to give legal advice if you are not licensed to practice law.
Presumably that's in the USA, where all sorts of things require a licence. But what's the definition of "legal advice", I wonder? Can an unlicensed person dodge the law just by saying "this is not legal advice" while advising someone how to draft a contract or what to say in court and charging for that advice?
If you want an example of advice that may or may not be "legal advice" think about how to fill in a tax return, how to apply for a government grant, how to apply for or challenge planning permission, how to deal with a difficult employer/employee/neighbour/tenant/landlord, how to apply for a patent, how to deal with various kinds of government inspector, ... That's all specialist stuff for which you might want professional advice but not necessarily from a "lawyer" (depending on what that word means in your part of the world).
The distinction you're looking for is "legal advice" vs. "legal information". The tricky thing is that when a lawyer gives you legal advice, they are taking legal responsibility for that advice to be good.
There's a guide for avoiding illegally giving legal advice for California court clerks [0] that might help clarify what information can be given without qualifying as advice.
> It is illegal in most states to give legal advice if you are not licensed to practice law.
You also can't have a human in the next room feeding you lines... even if that person is an attorney.
However, a party / attorney certainly can bring in notes / a casebook / etc. You can usually bring in a laptop, with search functions, pdfs, etc. An AI that quickly presents relevant information, documents, caselaw, etc. would 100% be allowed. However, if they're sworn in to testify, these would usually be taken away, because testimony is supposed to be from personal knowledge / memory.
> You also can't have a human in the next room feeding you lines... even if that person is an attorney.
I believe this varies state by state. In Delaware, litigants representing themselves can bring a cell phone to court, and could presumably use it to have lines fed to them (I don't know that anyone has done that, but I can't find any rule against it). In neighboring Maryland, you can't use electronic devices for communication with persons while in court (although the AI would not be a person, so it would be allowed).
I was snatched up in an entrapment scheme by the Bronx police a decade or so ago. They waited for a bad rainy day, knowing the subway flooded badly, then put police tape around the turnstiles and opened the emergency gates wide. As the hundreds of people followed each other down the tunnel, everyone's instinct was to follow everyone else walking past, assuming the flooding had broken the turnstiles. Fast forward 10 minutes to when the train shows up. The cops stopped the train, closed the exits, and rounded everyone up, issuing them $100 tickets.
I fought it. The courtroom had people, mainly minorities, in a line all around the block in wintertime. It was a money-generating racket, preying on the poorest citizens who could not afford a $100 ticket or to lose a day of work, let alone a lawyer.
I was the only one there with a letter written by my lawyer. When it was my turn and they saw I had a lawyer ready to fight they dismissed my case with no explanation. Shameless.
Everyone takes a plea deal and they just extract millions from us. The courts, lawyers, the police.
Do not take plea deals! Fight them. Grind the courts, force them to work for it. Hold them accountable. I can't WAIT for AI to make them obsolete.
I'm betting they pad the numbers by only counting the cases they send people to fight, and ignore these default-judgment cases... that's takin' the piss, amIrite?
This was actually for arguments in a speeding ticket case in California.
For those in the Bay Area with parking tickets... I've had hundreds of parking tickets in California. I would always lose the first round of dispute, because it is adjudicated by the city/county that gets the revenue. The second level of dispute is reviewed by an independent party who actually looks at what you state and makes an impartial decision, and the third level is reviewed by the courts. I disputed hundreds of tickets, and only once did I almost get to the third level.
Are you a lawyer who deals with traffic law, or is California just some sort of dystopia where you get hundreds of parking tickets as a normal way of life?
Up here in Ontario I've had 1 ticket in my life. I just... no, there is no way you're getting hundreds as an individual.
I had the custom plates NV and ended up in some sort of dystopia where I got hundreds of parking tickets for all different makes and models of cars until well after I returned the plates.
I originally wrote that on Craigslist's Rants and Raves at a point in the process when I was starting to get frustrated.
Just writing it out was therapeutic, it helped me see the absurd humor in the situation. Then someone responded with a link to an LA reporter who had written about similar situations.
I contacted him, he wrote a story about me, and all the bureaucratic problems just went away.
> This is so fucking dumb. As if you need a lawyer to contest a parking ticket in the first place.
If this is dumb, then remove the regulation / requirement for the lawyer. Don't "fix" this by generating and injecting bullshit into the system and requiring that judges and everyone else now sift through the generated dross.
> This is so fucking dumb. As if you need a lawyer to contest a parking ticket in the first place.
There are plenty of jurisdictions that have all sorts of onerous rules that tilt things in favor of the prosecution because they use traffic and parking enforcement as a revenue generator. These rules are cheered on by <looks around room and gestures> because politicians and high level bureaucrats aren't idiots and know how to frame things to sell them to any given audience.
One of the very few things giving me joy lately: People being afraid of AI, feeling threatened and insecure in their capabilities, because maybe those aren't so great after all...
What a scam! AI should be used to help educate people if they want to defend themselves in court (along with a few good books), not replace professionals.
This is purely exploitative behavior from anyone offering such a service and depressingly nihilistic behavior from anyone seeking these kinds of services even if it's just fighting traffic tickets for now.
I wish all professional services had similar watchdogs and protections from unlicensed/unauthorized work!
We're at the point where leveraging technology is treated as existential. But putting every aspect of life on autopilot is not only absurd, it's a cancer.
If we're going to see the secular decline of certain professional services, it should be at the hands of well-educated humans, not roll-of-the-dice AI. Wouldn't a well-educated public be a massive net good for society, rather than something that exploits the poor? What a small-minded and backwards world we live in today.