From financial robo-advisors to virtual health assistants, enterprises across every industry are leveraging virtual assistants to create outstanding customer experiences and help employees maximize their time. As artificial intelligence technology continues to advance, virtual assistants will handle more of the mundane and repetitive tasks, freeing people to devote their time and energy to more productive and creative endeavors.
But like any technology, conversational AI can pose a significant risk when it's developed and deployed improperly or irresponsibly, especially when it's used to help people navigate information related to employment, finances, physical health, and mental well-being. For enterprises and society to realize the full potential of conversational AI, we believe bots need to be designed to act responsibly and earn user trust.
Last year, to help businesses meet this challenge, we shared 10 guidelines for building responsible conversational AI. Today, we'd like to illustrate how we've applied these guidelines in our own organization and share new resources that can help developers in any industry do the same.
Responsible bot guidelines
In November 2018, Lili Cheng, corporate vice president of Microsoft AI and Research, announced guidelines designed to help organizations develop bots that build trust in their services and their brands. We created these bot guidelines based on our own experiences, our research on responsible AI, and feedback from our customers and partners. The guidelines are just that: guidelines. They represent the things we found useful to think about from the very beginning of the design process. They encourage companies and organizations to think about how their bot will interact with people and how to mitigate potential risks. Ultimately, the guidelines are all about trust, because if people don't trust the technology, they aren't going to use it.
Designing bots with these guidelines in mind
The bot guidelines have already started to play a central role in our own internal development processes. For example, our marketing team leveraged the guidelines while creating an AI-based lead qualification assistant that emails potential customers to determine their interest in Microsoft products and solutions. The assistant uses natural language processing to interact with customers, ensuring they receive the information they need or are directed to the Microsoft employee who can best help them. Below, we highlight how our marketing team approached three of the guidelines.
Since the assistant would be customer-facing, the marketing team recognized the importance of thinking through every aspect of how the bot would work. Before building the lead qualification assistant, they created a vision and scope document that outlined the bot's expected tasks, technical considerations, expected benefits, and end goals in terms of business performance. By outlining these details early in the design process, the team was able to focus on developing and refining only necessary capabilities and deploy the bot sooner. Creating this document also helped them identify and design for edge cases that the bot was likely to encounter and establish a set of effective reliability metrics.
While considering these edge cases, the marketing team identified a couple of scenarios in which a handoff to a person would be required. First, if the assistant can't determine the customer's intent (for example, because the response is too complex or lengthy), the assistant will flag the conversation for a person. The person can then direct the assistant to the next best course of action or respond directly to the customer. The person can also use key phrases from the conversation to train the assistant to respond to similar situations in the future.
Second, the customer may ask something the assistant isn't programmed to handle. For example, a student may request information about our products and solutions but not be interested in making a purchase. The assistant would flag the conversation instead of forwarding it to sales. A person can then reply through the assistant to help the student learn more.
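To make the two handoff scenarios concrete, here is a minimal sketch in Python of how this kind of escalation logic might look. The intent names, confidence threshold, and `next_action` function are all hypothetical illustrations, not the marketing team's actual implementation: a low-confidence intent detection or a recognized non-sales request is flagged for a person rather than forwarded to sales.

```python
# Hypothetical sketch of human-handoff logic for a lead qualification bot.
# Names and threshold are illustrative assumptions, not a real Microsoft API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff for trusting the detected intent


@dataclass
class IntentResult:
    intent: str        # e.g. "purchase_interest", "student_inquiry"
    confidence: float  # classifier score in [0, 1]


def next_action(result: IntentResult) -> str:
    """Decide whether the assistant acts on a lead or hands off to a person."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        # Scenario 1: intent unclear (e.g. a long or complex reply).
        # A person reviews the conversation and directs the next step.
        return "flag_for_human_review"
    if result.intent == "purchase_interest":
        # Clear sales intent: route the lead to the right salesperson.
        return "forward_to_sales"
    # Scenario 2: a recognized but non-sales request (e.g. a student
    # asking for information). A person replies through the assistant.
    return "flag_for_human_reply"
```

A side benefit of isolating the decision in one function is that the flagged conversations become labeled examples a person can later use to retrain the intent model, as described above.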
To help ensure the bot is performing as designed, the marketing team reviews a set of reliability metrics (such as the accuracy of determining intent or conversation bounce rate) through a regularly updated dashboard. As the team updates and improves the bot, it can closely analyze the impact of each change on the bot's reliability and make adjustments as necessary.
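A reliability dashboard like the one described above boils down to a few aggregate metrics computed over logged conversations. The sketch below is a hypothetical illustration of two of the metrics mentioned, intent-detection accuracy and conversation bounce rate; the record fields (`detected_intent`, `reviewed_intent`, `turns`) are assumed names, not the team's actual schema.

```python
# Hypothetical reliability metrics over logged conversations.
# Field names are illustrative assumptions.

def intent_accuracy(conversations):
    """Fraction of human-reviewed conversations where the detected intent
    matched the intent the reviewer assigned. Returns None if no
    conversation has been reviewed yet."""
    labeled = [c for c in conversations if c.get("reviewed_intent") is not None]
    if not labeled:
        return None
    correct = sum(1 for c in labeled
                  if c["detected_intent"] == c["reviewed_intent"])
    return correct / len(labeled)


def bounce_rate(conversations):
    """Fraction of conversations the customer abandoned after at most one
    exchange. Returns None for an empty log."""
    if not conversations:
        return None
    bounced = sum(1 for c in conversations if c["turns"] <= 1)
    return bounced / len(conversations)
```

Tracking these numbers before and after each bot update is what lets the team attribute a change in reliability to a specific change in the bot.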
Helping developers put the guidelines into practice
We have taken lessons learned from experiences like this one and important work from our industry-leading researchers to create actionable and comprehensive learning resources for developers.
As part of our free, online AI School, our Conversational AI learning path enables developers to start building sophisticated conversational AI agents using services such as natural language understanding and speech translation. We have recently added another module, Responsible Conversational AI, to this learning path. It covers how developers can design deeply intelligent bots while ensuring they are built in a responsible and trustworthy manner. In this learning path, developers can explore topics such as bot reliability, accessibility, security, and consequential use cases, and learn how to mitigate concerns that often arise with conversational AI. We have also created a Conversational AI lab in which a sample bot guides developers through a responsible conversational AI experience and explains its behavior at each point of the experience.
We encourage you to share the AI lab and the Responsible Conversational AI learning module with technical decision-makers in your organization.
You can also go to our new AI Business School to learn more about how Microsoft has integrated AI throughout our business and how your organization can do the same.