Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.
It's just that you can't advertise that, or you ruin the service.
And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).
Remember the story about Replit's LLM deleting a production database? All the stories were "AI goes rogue," "AI deletes database," etc.
If Amazon RDS had just wiped a production DB out of nowhere, for no reason, the story wouldn't be "Rogue hosted database service deletes DB," it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).
Say I am a company that builds agents, and I sell one to someone.
Then that someone loses money because the agent did something it wasn't supposed to: who's responsible?
Me, as the person who sold it? OpenAI, whose models I use underneath? Anthropic, which performs some of the work too? Is my customer responsible themselves?
These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
> These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
Why? You have a delivery and you entered into some guarantees as part of the contract. Whether you use an agent, or roll a dice - you are responsible for upholding the guarantees you entered into as part of the contract. If you want to offload that guarantee, then you need to state it in the contract. Basically, what the MIT Licenses do: "No guarantees, not even fitness for purpose". Whether someone is willing to pay for something where you enter no liability for anything is an open question.
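For reference, the paraphrase above ("No guarantees, not even fitness for purpose") corresponds to the actual warranty disclaimer in the MIT License, quoted here verbatim:

```text
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```

Note that it disclaims liability in both contract and tort, which is exactly the kind of explicit offloading being described.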
Technically that's what happens when you google something or ask ChatGPT, right? They make no explicit guarantee that anything provided back is true, correct, or even reasonable. You are responsible for it.
It's you. You contracted with someone to make them a product. Maybe you can go sue your subcontractors for providing bad components if you think you've got a case, but unless your contract specifies otherwise it's your fault if you use faulty components and deliver a faulty product.
If I make roller skates and I use a bearing that results in the wheels falling off at speed and someone gets hurt, they don't sue the ball bearing manufacturer. They sue me.
Agreeing with the others. It's you. Like my initial house example, if I make a contract with *you* to build the house, you provide me a house. If you don't, I sue you. If it's not your fault, you sue them. But that's not my problem. I'm not going to sue the person who planted the tree, harvested the tree, sawed the tree, etc etc if the house falls down. That's on you for choosing bad suppliers.
If you chose OpenAI to be the one running your model, that's your choice, not mine. If your contract with them has a clause that they pay you if they mess up, great for you. Otherwise, that's the risk you took choosing them.
In your first paragraph, you talk about general contractors and construction. In the construction industry, general contractors have access to commercial general liability insurance; CGL is required for most bids.
Maybe I'm not privy to the minutiae, but there are websites talking about insurance for software developers. Could be something. I've never seen anyone talk about it, though.
If I am a company that builds technical solutions, and I sell it to someone. Then, that someone loses money because the solution did something it wasn't supposed to: who's responsible?
Me as the person who sold it? The vendor of a core library I use? AWS who hosts it? Is my customer responsible themselves?
These are questions that classic contracts typically cover and the legal system is used to dealing with, because technical solutions have always had bugs and do unexpected things from time to time.
If your technical solution is inherently unreliable due to the nature of the problem it's solving (because it's an antivirus or firewall which tries its best to detect and stop malicious behavior but can't stop everything, because it's a DDoS protection service which can stop DDoS attacks up to a certain magnitude, because it's providing satellite Internet connectivity and your satellite network doesn't have perfect coverage, or because it uses a language model which by its nature can behave in unintended ways), then there will be language in the contract which clearly defines what you guarantee and what you do not guarantee.
Did you, the company who built and sold this SaaS product, offer and agree to provide the service your customers paid you for?
Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?
There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill," and that holds just the same when the product in question is an AI agent.
Well, that depends on what we are selling. Are you selling the service, black-box, to accomplish the outcome? Or are you selling a tool? If you sell a hammer, you aren't liable as the manufacturer if the purchaser murders someone with it. You might be liable if it falls apart on the backswing and maims someone, due to an unexpected defect, but only for a reasonable timeframe and under reasonable usage conditions.
I don't see how your analogy is relevant, even though I agree with it. Whether you sell hammers or rent them out as a hammer-providing service, there's no difference except, likely, the duration of liability.
The difference isn't renting or selling a hammer. The difference is providing a hammer (rent/sell) vs. providing a handyman who will use the hammer.
In the first case, the manufacturer is only liable for defects under normal use of the tool. So the manufacturer is NOT liable for misuse.
In the second case, the service provider IS liable for misuse of the tool. If they, say, break down a whole wall for some odd reason while making a repair, they would be liable.
In both cases there is a separation between user/manufacturer liability - but the question relevant to AI and SaaS is just that. Are you providing the tool, or delivering the service in question? In many cases, the fact the product provided is SaaS doesn't help - what you are getting is "tool as a service."
Yes they do; adding "plus AI" changes nothing about contract law. OAI is not giving you indemnification for crap, and you can't assign liability like that anyway.
I think people want to assign responsibility to the "agent" to wash their hands in various ways. I can't see it working, though.