> Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.
Here's a glimpse into our Kafka-esque AI-powered future: every corporate lawyer is now making sure any customer service request will be gated by a chatbot containing a disclaimer like "Warning: the information you receive may be incorrect and irrelevant." Getting correct and relevant information from a human will be impossible.
And maybe employment contracts have language that offloads liability to an employee if they go rogue and start giving away company resources. Chatbots aren't accountable in any way and we don't know yet if their creators ever will be either.
And disclaimers are used in lots of contexts too.
I still remember when Microsoft updated the 360 TOS to force arbitration the day after it was deemed legal in a completely separate case.
Rest assured there is an incoming flood of TOS updates.
A Ferris Wheel operator cannot make you sign a disclaimer that they're not responsible if it collapses and kills you. Or rather, they can, but it will not hold up in court.
Similarly, you can say in your manual, "We're not responsible for anything we say here" but you still are.
I don't know about chatbots, but I'd expect that judges will look for other precedents that are analogous.
Now, if you are talking about 'Full Self Driving' - then yea, there's a waiver and a point there.
The chatbot's error cost them what, $200? And it probably replaced a $100,000/year employee?
If you read them, there is often stuff like that; the most flagrant one I read said “everything above should be considered apocryphal”.
I don’t even need it to tell me anything. Links are all that is relevant. Google Analytics on the Web does something similar. You can ask questions in the search box and it takes you to a relevant page.
“Can I get refund on my flight 2 hours in advance?”
“Here is a link to refund policies w.r.t time before flight”
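A "links only" bot like that can be almost trivially simple. Here's a minimal sketch in Python; the keyword-to-page mapping and the URLs are hypothetical stand-ins, not any airline's actual pages:

```python
# Minimal sketch of a links-only support bot: instead of answering
# questions (and risking hallucinated promises), it routes the user to
# the relevant policy page. Keywords and URLs are illustrative only.

POLICY_PAGES = {
    ("refund", "cancel"): "https://example.com/policies/refunds",
    ("baggage", "luggage"): "https://example.com/policies/baggage",
    ("bereavement",): "https://example.com/policies/bereavement-fares",
}

FALLBACK = "https://example.com/policies"  # index page when nothing matches

def route(question: str) -> str:
    """Return the policy page whose keywords best match the question."""
    text = question.lower()
    best_link, best_hits = FALLBACK, 0
    for keywords, link in POLICY_PAGES.items():
        hits = sum(1 for kw in keywords if kw in text)
        if hits > best_hits:
            best_link, best_hits = link, hits
    return best_link

print(route("Can I get a refund on my flight 2 hours in advance?"))
# → https://example.com/policies/refunds
```

The point of the design is that the bot never states policy itself, so there's nothing for it to get wrong; the authoritative page does the talking.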
> Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffat should’ve known better.
[1] https://techhq.com/2024/02/air-canada-refund-for-customer-wh...
AC and other corporations would do well to put the brakes on this instead: identify ways to transfer risk (AI insurance, for example) or avoid risk (scrap the AI bot until the risk is demonstrably lowered).
Savvy advertisers would jump on this opportunity to show just how much AC cares about the customer and eat the loss quietly before it ever went to trial.
This is very reasonable-- AI or not, companies can't expect consumers to know which parts of their digital experience are accurate and which aren't.
> Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives
That includes EMPLOYEES. So they tried to argue that their employees can lie to your face to get you to buy a ticket under false pretense and then refuse to honor the promised terms? That's absolutely fucked.
I once booked a flight to meet my then-fiancee in Florida on vacation. Work travel came up unexpectedly, and I booked my work travel from ORD > SFO > TPA.
Before I made that booking, I called the airline specifically to ask them if skipping the ORD > TPA leg of my personal travel was going to cause me problems. The agent confirmed, twice, that it would not. This was a lie.
Buried in the booking terms is language meant to discourage gaming the system by booking travel where you skip certain legs. So if you skip a leg of your booking, the whole thing is invalidated. It's not suuuuper clear, I had to read it a few times, but I guess it kinda said that.
Anyways - my itinerary was invalidated by skipping the first flight, and I got lucky enough that someone canceled at the last minute and I could buy my own seat back on the now-full flight for 4x the original ticket price I paid (which was not refunded!).
I followed up to try and get to the bottom of it, but they were insistent they had no record of my call prior, and just fell back on "It's in the terms, and I do not know why you were told wrong information". Very painful lesson to learn.
I now try to make a habit of recording phone conversations with agents, if it's legal where I'm physically located at the time.
Pretty standard behavior for big companies. Airlines and telcos are the utter worst... you have agent A on the phone on Monday, who promises X to be done by Wednesday. Thursday, you call again, get agent B, who says he doesn't see anything, not even a call log, from you, but of course he apologizes and it will be done by Friday. (Experienced customers of telcos will know that the drama will unfold that way for months... until you're fed up and involve lawyers)
It's the degree of misinformation that's relevant.
I wrote back that unless they issued a refund, I would issue a chargeback. You don't get to present the customer with one thing and then do otherwise because you say so on a page the customer never read when ordering.
They eventually caved, but man, the nerve.
E.g., did they tell you the shipping date after you placed the order, or before? If it was afterward, then it can't have invalidated the contract... you agreed to it without knowing when it would ship. If they told you before, then was it before they knew your shipping address, or after? If it was beforehand, then again, it should've been clear that they wouldn't be able to guarantee it without knowing the address. If it was after they got the address but before you placed the order, then that makes for a strong case, since it was specific to your order and what you agreed to before placing it.
If the company doesn't agree to that, then they need to show the employee was trained on company policy and was disciplined (on first offense maybe just a warning, but this needs to be a clear step on the path to firing the employee) for failing to follow it. Even then they should stand by their employee if the thing said was reasonable (refund you $million may be unreasonable, but refund purchase price is reasonable)
It’s not the consumer's fault that the AI hallucinated a result (as they are known to do with high frequency).
Real legal comedy. Since this was in small claims court maybe it was an amateur on Air Canada's side?
Same with chatbots. Even better, because once it's "trained", you don't have to pay it.
There's a few instances of expecting digital entities to shoulder the entirety of legal liability here in the last few years; DAOs are another example of this in the crypto space.
"This story originally appeared on Ars Technica."
Give the clicks to the original article:
https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
It worked in the British Post Office Scandal: https://en.m.wikipedia.org/wiki/British_Post_Office_scandal
And AFAICT "the computer did it" wasn't the argument, it was "the computer did it so it must be correct because the experts said so".
With Air Canada, the question is whether or not a chat bot can be treated as a company representative that makes binding commitments.
With the British Post Office, the issue is whether or not a software system is inscrutable during legal proceedings.
The resolution is an amazingly clear piece of legal writing that explains the involved thought process of the decision and then awards the damages. I might end up using this pattern for writing out cause and effect.
After all, we're still not 100% sure how LLMs make their decisions in what they string together as output, so the company's not _technically_ lying.
Chatbot: <waffle>
Me: Please put me through to a person that can articulate $COMPANY's legal position. This conversation can serve no more purpose.
The human on the other end rescheduled and gave me a bereavement rate. She told me it was less money, but didn't mention the reason. I didn't put that together until later. She just helped me out because she had compassion.
I am too cynical to think that an AI controlled by a corporation will do this.
We live in an interesting world. In the US, a corporation is legally a person, and a chatbot is not a person[0]. I'm looking forward to the first Supreme Court case involving a corporation consisting of chatbots.
[0] I'm handwaving in this lead-in to the fantasy here, so, dear reader, please give me a break for oversimplifying and ignoring technicalities.
Or in this case they need to take the AI out of service immediately until they can get a corrected version that does not do such a thing. I will accept that the AI can be tricked into doing such a thing and remain in service, but only if they can show the tricks are something an honest human wouldn't attempt. (I don't know what that would look like, but I'll allow the idea for someone else to propose in enough detail that we can debate whether honest people would ever do that.)
Now, in the medium or long term, I expect there to be AIs that will be able to do this sort of thing just fine. As I like to say I expect future AIs will not "be" LLMs but merely use LLMs as one of their component parts, and the design as a whole will in fact be able to accurately and reliably relay corporate policies as a result. But the stock market is not currently priced based on "AIs will be pretty awesome in 2029", they're priced on "AIs are going to be pretty awesome in July".
LLMs are a huge step forward, but they really aren't suitable for a lot of uses people are trying to put them to in the near term. They don't really "know" things, they're really, really good at guessing them. Now, I don't mean this in the somewhat tedious "what is knowing anyhow" sense, I mean that they really don't have any sort of "facts" in them, just really, really good language skills. I fully expect that people are working on this and the problem will be solved in some manner and we will be able to say that there is an AI design that "knows" things. For instance, see this: https://deepmind.google/discover/blog/alphageometry-an-olymp... That's in the direction of what I'm talking about; this system does not just babble things that "look" or "sound" like geometry proofs, it "knows" it is doing geometry proofs. This is not quite ready to be fed a corporate policy document, but it is in that direction. But that's got some work to be done yet.
(And again, I'm really not interested in another rehash of what "knows" really means. In this specific case I'm speaking of the vector from "a language model" and "a language model + something else like a symbolic engine" as described in that post, where I'm simply defining the latter as "knowing" more about geometry than the former.)
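The "LLM as a component" architecture described above can be caricatured in a few lines: a generator proposes, and a deterministic store of policy ground truth decides what actually reaches the customer. Everything here is a hypothetical stand-in (a dict plays the role of the symbolic/verification layer, and a stub plays the LLM):

```python
# Sketch: the LLM only proposes; a policy store the company controls
# verifies before anything is relayed. All names/values are illustrative.

POLICY = {"bereavement_refund_days": 90}  # deterministic ground truth

def llm_propose(question: str) -> dict:
    """Stand-in for an LLM: emits a structured claim, possibly wrong."""
    return {"field": "bereavement_refund_days", "value": 30}  # hallucinated

def answer(question: str) -> str:
    claim = llm_propose(question)
    truth = POLICY.get(claim["field"])
    if truth is None:
        # Unknown field: refuse rather than relay the model's guess.
        return "I can't answer that; please contact an agent."
    # Relay the verified policy value, never the model's number.
    return f"Per policy, {claim['field']} is {truth}."
```

In this split, the language model supplies the language skills and the policy store supplies the "knowing", which is roughly the vector I mean above.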
In other words, this was possibly the first argument made in a court that AIs are sentient and not automated chattel.
Also:
> Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions,"
What does this mean?
That the chatbot was provided by a third party hence they are responsible for the content provided?
Or that, literally, a chat bot can be considered a legal entity?
edit: arguing that the chatbot is a separate legal entity is a wild claim. It would imply to me that air canada could sue the ai company for damages if it makes bad promises; not that air canada is excused from paying the customer.
I bet they already have policies in place for this - while how they apply to AI may be different, they shouldn't let this slide.
The common example in textbooks is someone continuing to do business as an employee after having been fired. They can still make valid deals with other entities due to apparent authority if they're not clearly made out to be separated.
That might be wishful thinking. The tribunal would take into account damages, and whether it was reasonable to believe you were entitled to free service.
In this case, the chatbot promised a ~$800 discount, and the tribunal awarded ~$800. But I doubt they'd make the same decision again, or deem the lifetime service enforceable/un-cancellable.
It may feel weird, but it's utterly insane to delegate customer interactions to an agent that has nobody's interest in mind, not even their own, whom you cannot trust to abide by policy.
Similarly, if the bot negotiated any sort of special deal, I think it would be very, very difficult to argue that it lacked apparent authority to make deals or that it's not a fair consideration.
I'm working on something to make this easier - reach out if I can be helpful (email in bio).
I think the only reason this should go through is if it didn't have a proper disclaimer at the beginning of the conversation.
I don't know what the solution looks like. Maybe some combination of courts only upholding "reasonable" claims by AI. And then insurance to cover the gaps?
In this case this could have just been a $650 'bug bounty' had Air Canada issued a quick refund. A reasonable QA expense to find out that your AI agent is misleading your customers.
I don’t think failing to take adequate precautions is what's preventing AI tools from being used. I think this was plain corporate incompetence and greed. They started using a system without properly testing it and don’t want to pay for the consequences.
What if Boeing said, “Oops, we forgot to put in the bolts that keep the door in place, but we shouldn’t be held accountable for our actions”? The fact that they used a tool for it shouldn’t change the outcome unless we are going to create indemnity for big corporations.
This sounds like an ideal outcome!