1. To make those harmed whole. On this, you have a good point. The desire of AI firms, or those deploying AI, to be indemnified against the harms their use of AI causes is a problem, since they will harm people. But it isn't relevant to the question of whether LLMs are useful or whether they beat a human.
2. To incentivize the human to behave properly. This is moot with LLMs: they have no laziness and no competing incentives.
That’s not a positive at all; it’s the complete opposite. It’s not about laziness but about being able to somewhat accurately estimate and balance the risk/benefit ratio.
The fact that a wrong decision would carry significant costs for you and for other people should weigh heavily on decision making.