As AI gets better, people will trust it with more kinds of cases, and with increasingly complex ones. If people want to pay for a real licensed lawyer, they are still able to do so.
AI is just informed search, a dwarf sitting on the shoulders of human knowledge. There were medical "expert systems" in the 2000s, yet we still have doctors.
In my understanding, in most cases AI will be a glorified assistant, not an authoritative decision-maker. Otherwise it collides head-on with barriers and semis. I wouldn't trust such a system even with a parking ticket, let alone my life.
We're just at the top of a hype cycle now. AI can do new things, but not as well as we dream or hope.
Any kind of assistant can make mistakes. But a human assistant can be made to show their work and explain their reasoning, so you can check their output. If ChatGPT says "this thing is totally legal" or "don't worry about that rash", how am I to validate its "reasoning"? How do I know where it's drawing its inferences from?