When a customer suffers a financial loss or other harm as a result of misinformation negligently communicated by a company, we'd hope that the business would accept responsibility and offer the customer appropriate compensation.
But what if a company tries to shift blame elsewhere? Fortunately, consumer protection laws and other common law remedies have restricted some of the avenues used in an effort to escape liability. Unfortunately, one company recently tried a novel argument in court that could have opened up a new one.
"It was the employee's error, not ours."
- Perhaps, but as an employer you have vicarious liability for an employee's actions in the course of their work.
- If a company that is sued by a customer for negligent misrepresentation attempts to sue an AI chatbot manufacturer, could the customer expand the claim to include this third party in the absence of any contractual privity between them?
- How could a court determine whether an AI chatbot had been properly programmed or trained?
- How will disclaimers, terms of service, waivers, and consumer protection laws evolve as AI becomes prevalent?
"It was a computer malfunction."
- Nevertheless, common law has generally found that a company is responsible for any acts or omissions (including misrepresentations) from a computer system it uses.
"The negligent party is a separate legal entity."
- Hmmm, that can be more of a grey area. Tell me more. Who is the negligent party?
"A chatbot on our website."
In this blog post, I examine a widely-publicized small claims court case between a customer and Air Canada that arose from misinformation provided by a chatbot on the airline's website.
While the stakes were relatively small in monetary terms, this type of argument may be used again. I suggest that we closely monitor corporate attempts to evade liability when using artificial intelligence technology to interact with or serve their clients.
Moffatt v. Air Canada.
In late 2022, Jake Moffatt's grandmother passed away. Before booking flights to attend the funeral, he asked a chatbot on Air Canada's website about the airline's bereavement fares.
He was informed he could purchase tickets at a reduced rate, or at least receive a partial reimbursement of the full cost of a ticket already purchased if he submitted his claim within 90 days of the travel date. A link to a website outlining the airline's bereavement policy was included in the chatbot's reply. That policy noted: "Please be aware that our Bereavement policy does not allow refunds for travel that has already happened."
Which source should he believe? The information provided in the chat window on the airline's website, or the written policy that was linked by the chatbot to that same website?
Jake relied on the former over the latter. When he later submitted his claim for a partial refund, the airline refused it, pointing to the written policy.
In its defence, Air Canada argued that it could not be held liable for information provided by its chatbot - suggesting, in the Tribunal's words, that the chatbot was "a separate legal entity that is responsible for its own actions."
Using this rationale, and drawing an adverse inference from Air Canada's failure to explain why its webpage should be considered more trustworthy than its chatbot, the Tribunal found the airline liable for negligent misrepresentation and awarded Mr. Moffatt damages.
It is exceedingly rare for a small claims court case to draw international attention. However, many news agencies - both foreign and domestic - picked up the story because they incorrectly assumed that the chatbot was powered by generative artificial intelligence.
While the facts of this case may not have been exactly what reporters were looking for as they chased stories involving the deleterious effects of artificial intelligence, it does offer the legal community an opportunity to reflect on liability in the brave new world of artificial intelligence.
Liability and Efforts to Limit Liability.
Companies that employ people or technology in the course of their business bear a certain amount of responsibility for their acts or omissions. Good training practices, quality assurance, and appropriate supervision of employees and machinery should greatly reduce a company's risk of being found vicariously liable for negligence causing harm.
Of course, businesses are customers themselves. When they purchase technology produced by outside sources for use in their businesses, they should expect the technology to work as intended. If these products are defective in design or manufacture, or if the manufacturer has not provided adequate user instruction or warnings of foreseeable potential hazards, the company that produced or sold the technology could be liable for certain damages caused to a business using the product.
There are, of course, ways that a company might attempt to limit its liability through contractual language. But even in those cases, courts would likely examine whether the language used in such contracts or terms of service is comprehensible, clearly-defined, and compliant with all applicable laws.
In the Moffatt case, Air Canada did not point to any such contractual language that would have shielded it from liability for its chatbot's misinformation.
If there was a manufacturing defect in the chatbot's programming, or if the firm selling the chatbot technology failed in its own duty to instruct the purchaser on its proper use, presumably the airline could sue the manufacturer and/or seller for any foreseeable losses sustained. In the Moffatt case, however, the airline made no such claim against a third party.
There was, however, a suggestion that the chatbot was essentially its own legal entity. Such an assertion should give all of us pause. In an age when the science fiction behind Max Headroom is quickly becoming fact, we need to think carefully about the consequences of absolving AI makers and users from responsibility for the actions of their creation (or their tools).
A Product With a Mind of Its Own.
Early generations of chatbots and automated attendants on phone lines were adept at answering simple questions or directing customer inquiries to the appropriate department for further discussion with humans. Inquiries that required more complex or nuanced responses fared less well: while common questions could be anticipated, responses to deviations from the script would often falter.
With artificial intelligence technology now advancing at a rapid pace, chatbots and their like will soon not only be able to answer questions with far more precision; they will be able to learn from their own experiences interacting with people and adjust their output accordingly.
As readers of this blog will know, I am a strong proponent of the use of technological advances to help people. We should not instinctively fear or reject artificial intelligence. Technology is not inherently good or bad - it's a tool that we can use to do certain things. Of course, the trade-off when using technology to do good is bearing responsibility for it, taking steps to mitigate any foreseeable adverse consequences, and repairing any harm done.
To consider early generation chatbots (or any AI-equipped version planned or currently in operation) to be separate legal entities completely responsible for their own actions would be to absolve oneself from any part in their creation, instructional programming or directives. The Tribunal's incredulous reaction to this submission in the Moffatt case - describing it as "a remarkable submission" - suggests that courts are unlikely to accept it.
Imagine if we were to ask an artificial intelligence product to provide a literary analogy for this argument through a parody of a popular work of science fiction. A literary masterpiece it would not be; nor is its inspiration a legal argument we should ever accept as meritorious.
The Bottom Line: Ensure Accuracy and Take Precautions.
As society begins to realize the full potential (and potential problems) of this technological breakthrough, the legal profession will be looking for answers to some fascinating questions relating to liability and the direction of case law. For example:
- Will courts continue to view AI technology as equivalent to technology without generative capability, or will vicarious liability similar to that in an employment context eventually apply?
When new technology is employed, courts generally assign the burden of risk to the company or entity delivering the technology, not to the consumer as an end-user. Whether that burden will shift or evolve into a "buyer beware" model as consumers become more familiar with this technology and its risks remains to be seen. In my view, it is in a consumer's best interests to have a simple and straightforward answer to the question of who should be liable for negligence involving artificial intelligence technology: the company using it.
Until we know the ultimate direction the courts will take, the Moffatt decision offers companies a clear warning: take reasonable care to ensure that every channel through which you communicate with customers - human or automated - provides accurate information.
When a company is found negligent in carrying out its specific duty to clients, this can be quite costly. An ounce of prevention is always better for a company's bottom line (and reputation) than a pound of cure.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Mr
Gluckstein Personal Injury Lawyers
P.O. Box 53
M5G 2C2
Tel: 416.408.4252
Fax: 416.408.4235
E-mail: info@gluckstein.com
URL: www.gluckstein.com
© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com