Microsoft : Building responsible and trustworthy conversational AI

05/17/2019 | 12:08pm EDT

From financial robo-advisors to virtual health assistants, enterprises across every industry are leveraging virtual assistants to create outstanding customer experiences and help employees maximize their time. As artificial intelligence technology continues to advance, virtual assistants will take on an increasing share of mundane, repetitive tasks, freeing people to devote their time and energy to more productive and creative endeavors.

But like any technology, conversational AI can pose a significant risk when it's developed and deployed improperly or irresponsibly, especially when it's used to help people navigate information related to employment, finances, physical health, and mental well-being. For enterprises and society to realize the full potential of conversational AI, we believe bots need to be designed to act responsibly and earn user trust.

Last year, to help businesses meet this challenge, we shared 10 guidelines for building responsible conversational AI. Today, we'd like to illustrate how we've applied these guidelines in our own organization and share new resources that can help developers in any industry do the same.

Responsible bot guidelines

In November 2018, Lili Cheng, corporate vice president of Microsoft AI and Research, announced guidelines designed to help organizations develop bots that build trust in their services and their brands. We created these bot guidelines based on our own experiences, our research on responsible AI, and feedback from our customers and partners. The guidelines are just that - guidelines. They represent the things we found useful to think about from the very beginning of the design process. They encourage companies and organizations to think about how their bot will interact with people and how to mitigate potential risks. Ultimately, the guidelines are all about trust, because if people don't trust the technology, they aren't going to use it.

Designing bots with these guidelines in mind

The bot guidelines have already started to play a central role in our own internal development processes. For example, our marketing team leveraged the guidelines while creating an AI-based lead qualification assistant that emails potential customers to determine their interest in Microsoft products and solutions. The assistant uses natural language processing to interact with customers, ensuring they receive the information they need or are directed to the Microsoft employee who can best help them. To provide a useful example, we've highlighted the ways in which our marketing team has approached three of the guidelines below.

  • Articulate the purpose of your bot and take special care if your bot will support consequential use cases.

Since the assistant would be customer-facing, the marketing team recognized the importance of completely thinking through every aspect of how the bot would work. Before building the lead qualification assistant, they created a vision and scope document that outlined the bot's expected tasks, technical considerations, expected benefits and end goals in terms of business performance. By outlining these details early in the design process, the team was able to focus on developing and refining only necessary capabilities and deploy the bot sooner. Creating this document also helped them identify and design for edge cases that the bot was likely to encounter and establish a set of effective reliability metrics.

  • Ensure a seamless hand-off to a person where the person-bot exchange leads to interactions that exceed the bot's competence.

While considering these edge cases, the marketing team identified a couple of scenarios in which a handoff to a person would be required. First, if the assistant can't determine the customer's intent (for example, because the response is too complex or lengthy), it flags the conversation for a person. The person can then direct the assistant to the next best course of action or respond directly to the customer. The person can also use key phrases from the conversation to train the assistant to respond to similar situations in the future.

Second, the customer may ask something for which the assistant has no pre-programmed response. For example, a student may request information about our products and solutions but not be interested in making a purchase. The assistant flags the conversation instead of forwarding it to sales, and a person can then reply through the assistant to help the student learn more.
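The two escalation scenarios described above amount to a simple routing rule: if the assistant's confidence in the detected intent is too low, or the intent has no scripted response, the conversation goes to a person. The following is a minimal, hypothetical sketch of that rule in Python; the intent names, confidence threshold, and classifier output are illustrative assumptions, not details of Microsoft's actual assistant.

```python
# Hypothetical sketch of the two human-handoff rules described above.
# The intent labels and confidence threshold are illustrative only.

SCRIPTED_INTENTS = {"pricing", "product_info", "demo_request"}
CONFIDENCE_THRESHOLD = 0.7

def route(intent: str, confidence: float) -> str:
    """Decide whether the bot answers or a person takes over."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Scenario 1: intent unclear (e.g. a long, complex reply).
        return "flag_for_human"
    if intent not in SCRIPTED_INTENTS:
        # Scenario 2: no pre-programmed response (e.g. a student inquiry).
        return "flag_for_human"
    return "bot_reply"

print(route("pricing", 0.95))          # clear, scripted intent -> bot replies
print(route("pricing", 0.40))          # low confidence -> flagged for a person
print(route("student_question", 0.9))  # unscripted intent -> flagged for a person
```

In practice the confidence value would come from a natural language understanding service, and flagged conversations would land in a human agent's queue rather than simply returning a label.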

  • Ensure your bot is reliable.

To help ensure the bot is performing as designed, the marketing team reviews a set of reliability metrics (such as the accuracy of determining intent or conversation bounce rate) through a regularly updated dashboard. As the team updates and improves the bot, it can closely analyze the impact of each change on the bot's reliability and make adjustments as necessary.
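Metrics like the two named above can be computed as simple aggregates over conversation logs. The sketch below is a hedged illustration under assumed data shapes; the metric definitions and field names are our assumptions, not the marketing team's actual implementation.

```python
# Illustrative reliability metrics for a bot dashboard.
# "Intent accuracy": share of human-labeled conversations where the
# predicted intent matched the human label. "Bounce rate": share of
# conversations abandoned after at most one turn.

def intent_accuracy(conversations):
    labeled = [c for c in conversations if c.get("human_label")]
    if not labeled:
        return 0.0
    correct = sum(c["predicted_intent"] == c["human_label"] for c in labeled)
    return correct / len(labeled)

def bounce_rate(conversations):
    if not conversations:
        return 0.0
    bounced = sum(c["turns"] <= 1 for c in conversations)
    return bounced / len(conversations)

convs = [
    {"predicted_intent": "pricing", "human_label": "pricing", "turns": 4},
    {"predicted_intent": "pricing", "human_label": "support", "turns": 1},
    {"predicted_intent": "demo", "human_label": None, "turns": 3},
]
print(intent_accuracy(convs))  # 0.5
print(bounce_rate(convs))      # ~0.33
```

Recomputing these aggregates after each bot update, as the team describes, makes the impact of a change visible as a before/after delta on the dashboard.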

Helping developers put the guidelines into practice

We have taken lessons learned from experiences like this one and important work from our industry-leading researchers to create actionable and comprehensive learning resources for developers.

As part of our free, online AI School, our Conversational AI learning path enables developers to start building sophisticated conversational AI agents using services such as natural language understanding or speech translation. We have recently added another module, Responsible Conversational AI, to this learning path. It covers how developers can design deeply intelligent bots and also ensure they are built in a responsible and trustworthy manner. In this learning path, developers can explore topics such as bot reliability, accessibility, security and consequential use cases and learn how to mitigate concerns that often arise with conversational AI. We have also created a Conversational AI lab in which a sample bot guides developers through a responsible conversational AI experience and explains its behavior at each point of the experience.

Learn more

We encourage you to share the AI lab and the Responsible Conversational AI learning module with technical decision-makers in your organization.

You can also go to our new AI Business School to learn more about how Microsoft has integrated AI throughout our business and how your organization can do the same.


Disclaimer

Microsoft Corporation published this content on 17 May 2019 and is solely responsible for the information contained herein. Distributed by Public, unedited and unaltered, on 17 May 2019 16:07:04 UTC
