In the ever-evolving landscape of travel, artificial intelligence (AI) has become an indispensable companion. From booking flights to answering queries, chatbots have revolutionized customer interactions.

But what happens when these digital helpers take a detour into misinformation territory? The recent case of Air Canada's chatbot provides valuable insights for businesses navigating the AI frontier.

The Air Canada Debacle

Picture this: Jake Moffatt, a grieving grandson, seeks to book a last-minute flight for his grandmother's funeral. Air Canada's chatbot confidently assures him that he can secure a bereavement fare after booking a full-fare flight.

Moffatt trusts the chatbot, only to discover later that the promised retroactive discount isn't actually available. The airline's response? The chatbot is a "separate legal entity" responsible for its own actions.

AI Accountability

The British Columbia Civil Resolution Tribunal didn't buy Air Canada's argument. Instead, they ruled in favor of Moffatt, ordering the airline to pay him damages and fees.

The tribunal's decision sends a clear message: Companies cannot hide behind their chatbots. If you're handing over part of your business to AI, you're accountable for its deeds.

Gabor Lukacs, president of the Air Passenger Rights consumer advocacy group, calls this case a landmark. It sets a precedent for airlines and travel companies relying on AI. Yes, companies are liable for what their tech says and does. The principle is straightforward: If you unleash AI, you shoulder the consequences.

AI Hallucinations and the Risks

Air Canada isn't alone in its AI adventures. In 2018, a WestJet chatbot inexplicably directed a passenger to a suicide prevention hotline. Such blunders, known as "AI hallucinations," highlight the risks of overreliance on AI. As more travel companies embrace AI, they must tread carefully.

Lessons for Businesses

  • Transparency Matters: When AI interacts with customers, transparency is nonnegotiable. Airlines, and indeed any business, must ensure that chatbots provide accurate information and don't lead customers astray. A chatbot's actions reflect on the company as a whole.
  • Human Oversight: While AI streamlines processes, human oversight remains crucial. Regular audits and quality checks prevent chatbots from veering off course (a simple illustration follows this list). After all, even the smartest algorithms can have a glitchy day.
  • Legal Responsibility: The tribunal's ruling underscores legal responsibility. Businesses must understand that AI isn't a scapegoat. If your chatbot misfires, you're on the hook.
  • Customer Trust: Trust is fragile. A misguided chatbot can erode it swiftly. Prioritize accuracy, empathy and reliability in your AI systems.
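To make the audit point concrete, here is a minimal sketch of what a routine quality check on chatbot replies could look like. Everything in it is illustrative: the get_bot_reply client, the AuditCase structure and the bereavement-fare test case are hypothetical placeholders, not Air Canada's system or any vendor's API. The idea is simply to compare live replies against policy wording a human has approved, and to flag anything that drifts.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class AuditCase:
        # A question a customer might ask, plus wording the approved policy
        # answer must (or must never) contain. Content is purely illustrative.
        question: str
        required_phrases: List[str] = field(default_factory=list)
        forbidden_phrases: List[str] = field(default_factory=list)

    POLICY_CASES = [
        AuditCase(
            question="Can I claim a bereavement fare after I have already booked?",
            required_phrases=["before travel"],
            forbidden_phrases=["retroactively", "after your flight"],
        ),
    ]

    def audit_reply(reply: str, case: AuditCase) -> List[str]:
        """Return human-readable problems found in a single chatbot reply."""
        text = reply.lower()
        problems = [f"missing required wording: {p!r}"
                    for p in case.required_phrases if p.lower() not in text]
        problems += [f"contains disallowed claim: {p!r}"
                     for p in case.forbidden_phrases if p.lower() in text]
        return problems

    def run_audit(get_bot_reply: Callable[[str], str]) -> None:
        """Run every test case and flag suspect replies for human review."""
        for case in POLICY_CASES:
            reply = get_bot_reply(case.question)  # hypothetical chatbot client
            for problem in audit_reply(reply, case):
                print(f"FLAG for review - {case.question}: {problem}")

In practice, checks like these would run on a schedule against the live bot, with flagged replies routed to a human reviewer rather than silently corrected, which is exactly the kind of oversight the tribunal's ruling rewards.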

The Chatbot Revolution Continues

Beyond airlines, other travel giants are also diving into AI. Expedia's ChatGPT plug-in assists with trip planning, even as AI hallucinations keep the industry on its toes. As businesses embrace AI, they must remember: Chatbots aren't just lines of code; they're ambassadors of the brand.

So, if you are thinking of implementing a chatbot to interact with customers, remember Moffatt's saga. Behind the digital façade lies a world of responsibility.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Matthew Richardson
Brown Rudnick LLP
One Financial Center
Boston, MA 02111
UNITED STATES
Tel: 617 856 8200
Fax: 617 856 8201
E-mail: jbennett@brownrudnick.com
URL: www.brownrudnick.com

© Mondaq Ltd, 2024 - http://www.mondaq.com