What makes a solution too smart?

In conversations about developing AI, this question always seems to scratch its way up to the surface. And with good reason - from self-driving systems to loan application processes, a majority of Americans distrust the use of artificial intelligence in places where it is frequently deployed. Concerns range from a simple lack of information about how AI makes decisions to questions about who is accountable for flaws in its algorithms.

But at the same time, this technology has the incredible and unique potential to solve problems both big and small in real time. This potential is why global spending on AI systems is only expected to increase in the years ahead, from $85.3 billion in 2021 to over $204 billion in 2025.

With the market moving forward, and consumers needing reassurance, we believe that ethical AI principles must serve as the compass in any product development journey.

Trust must guide AI development

AI's value rests on trust - particularly for a business with the goal of living on the leading edge of smart technology. At Salesforce, the product pushing the algorithmic frontier is Einstein, a tool that uses advanced modeling and collected data to deliver targeted, personalized, and actionable results in CRM for all business users.

If that sounds fairly complex, it is. And with complexity comes the potential for confusion and misuse - even when no ill intent is involved.

There's no getting around the reality that AI comes with risk. For Einstein's designers and marketing experts, acknowledging that risk is the first, vital step towards addressing it.

A human-centered approach

When we collectively think about AI, it's easy to imagine the technology as a sort of monolithic landscape of code - inputs and outputs with a murky sea of uncertainty between them. In reality, every AI system is the product of people, and it embodies the diversity, complexity, and assumptions of its creators. The choices designers make about how to set up, manage, and explain their solutions are as human as any decision about a product or service.

To address the potential for unintentional bias, Salesforce prioritizes built-in product guardrails that help users identify high-risk processes, assess the potential for harm, and mitigate risks strategically while still meeting their business needs.

What do those guardrails actually look like? The idea behind them is simple: if you want people to trust your AI, give them the information they need to know it's trustworthy. For example:

  • AI model cards, which act like a nutrition label for models - the algorithms that govern AI decision making.
  • Tools to evaluate data quality, ensuring that algorithms learn from the most representative and relevant information available.
  • Bias flags, which alert users when a field, like ZIP code, may introduce unwanted bias into their data (a simple illustrative sketch follows this list).
  • Data collection guidance, which helps users make informed decisions about what information they obtain, how they obtain it, and what they do with it afterward.
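
To make the bias flag idea concrete, here is a minimal sketch in Python of how such a check might work. It is illustrative only, not Salesforce's implementation: the PROXY_FIELDS list, the flag_bias_risks function, and the simple name-matching heuristic are all assumptions made for this example. A production system would rely on a curated taxonomy and statistical tests rather than field names alone.

    # Hypothetical sketch of a bias flag: scan incoming field names against
    # a list of attributes known to act as proxies for protected characteristics.
    PROXY_FIELDS = {
        "zip_code": "Geography can act as a proxy for race and income.",
        "age": "Age is a protected attribute in many jurisdictions.",
        "gender": "Gender is a protected attribute in many jurisdictions.",
        "first_name": "Names can correlate with ethnicity and gender.",
    }

    def flag_bias_risks(field_names):
        """Return (field, reason) pairs for fields that may introduce bias."""
        warnings = []
        for name in field_names:
            # Normalize "ZIP Code" -> "zip_code" before the lookup.
            key = name.strip().lower().replace(" ", "_")
            if key in PROXY_FIELDS:
                warnings.append((name, PROXY_FIELDS[key]))
        return warnings

    # Example: reviewing a lead-scoring dataset before training a model.
    for field, reason in flag_bias_risks(["Email", "ZIP Code", "Age"]):
        print(f"Potential bias flag on '{field}': {reason}")

Running the example flags the ZIP Code and Age fields, prompting the user to pause and evaluate those columns before training on them - exactly the kind of thoughtful friction described below.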

Prioritizing trust over technicalities

Salesforce's guardrails introduce a certain amount of friction into the user experience. When a piece of information seems useful, it might take time to understand why it's being flagged as a potential source of bias, and even more time to learn the best way to deal with it. Salesforce provides a number of resources that help users navigate these kinds of flags and achieve their strategic goals while being mindful of the people whose data they depend on.

But the value at the heart of those guardrails is trust. Inserting thoughtful pauses and tools that help users - and their customers - feel comfortable with AI's role in personalizing and optimizing experiences is an essential bridge to a world where this technology earns the confidence of most consumers.

That principle of trust goes beyond AI, too. At Salesforce, it permeates our human-centric philosophy about tech. We believe there are never too many tools to empower people to make positive change with purpose-built solutions.

Dig deeper
  • For more on Salesforce's approach to AI guardrails, download A Marketer's Guide to the Trusted Use of Einstein.
  • Read how Salesforce taps both ethics and inclusion to drive innovation in Tech Needs a Seatbelt: How Ethics and Inclusion Drive Innovation at Salesforce.
  • Learn more about Salesforce's Trusted AI Principles.

