By Evan Ramzipoor, Workflow contributor

Financial institutions are quickly adopting AI to expand access to less wealthy, nontraditional customers. However, with this technology come questions about whether AI is inadvertently introducing biases into financial decision-making.

In response, AI researchers have designed guidelines, recommendations, checklists, and other frameworks to ensure the use of AI in finance is fair to customers. Yet that effort has revealed that defining fairness is exceedingly hard. Instead, organizations have focused more on reducing potential harms caused by AI and less on eliminating bias entirely.

The business potential is vast. In the United States, about 1 in 4 Americans is underbanked and can't apply for traditional loans. In Mexico, two-thirds of adults don't have a bank account. Across the African continent, most people don't have a bank account or credit score.

These communities are often referred to as "underbanked" because they can't access services like mortgages and credit cards that richer consumers take for granted. Such people "lack the traditional identification, collateral, or credit history, or all three, needed to access financial services," says Margarete Biallas of the International Finance Corporation, a member organization of the World Bank.

Extending financial access with AI

In the early 2000s, Biallas, an economist and digital finance practice lead, helped garment manufacturers in Cambodia set up mobile-payment options. Prior to the IFC's involvement, the garment workers, who are mostly women, received their wages as cash in envelopes at the factory, creating the potential for robbery and violence against them, Biallas adds. To resolve this issue, the IFC worked with manufacturers and Melbourne-based ANZ Bank to set up digital-payment options so workers could be paid on their mobile devices.

The organization then rolled out an AI-enabled credit-scoring system that allows workers to apply for loans despite their lack of traditional credit scores. The AI system analyzes data that isn't usually included in a credit score, such as total income, work history, and how frequently a borrower spends money on non-essential items, like jewelry or electronics. By collecting data from mobile phones and satellites, banks can verify the identity and creditworthiness of individuals and businesses. AI can use satellite data to establish an employment history by showing that someone was working at a specific farm or factory, which can be corroborated with data from a cell phone they carried at the time, Biallas explained in a 2020 IFC report.
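To make the idea concrete, here is a minimal sketch of how a credit scorer might be trained on nontraditional signals. The feature names, the synthetic data, and the logistic-regression model are illustrative assumptions only, not the IFC's or any bank's actual system.

    # Illustrative sketch only: a toy credit scorer trained on nontraditional signals.
    # The feature names (monthly_income, months_employed, discretionary_spend_ratio,
    # mobile_topup_frequency), the synthetic data, and the logistic-regression model
    # are assumptions for illustration, not the IFC's or any bank's actual system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5_000

    # Synthetic stand-ins for the kinds of alternative data described above.
    X = np.column_stack([
        rng.lognormal(mean=5.0, sigma=0.5, size=n),  # monthly_income
        rng.integers(0, 120, size=n),                # months_employed
        rng.uniform(0, 0.6, size=n),                 # discretionary_spend_ratio
        rng.poisson(4, size=n),                      # mobile_topup_frequency
    ])

    # Synthetic repayment labels loosely tied to income and job tenure.
    logit = 0.002 * X[:, 0] + 0.02 * X[:, 1] - 3.0 * X[:, 2] - 2.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Score a hypothetical applicant who has no traditional credit history.
    applicant = np.array([[180.0, 36, 0.15, 6]])
    print("Estimated repayment probability:", model.predict_proba(applicant)[0, 1])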

The IFC is not alone in these efforts. In Egypt, where two-thirds of adults don't have a bank account, Cairo-based Commercial International Bank developed predictive analytics software that uses nontraditional data (home address, employment status, and run-ins with the law) to gauge a borrower's ability to repay loans. In 2017, the State Bank of India developed its own AI-powered platform that enables underbanked people to get approved for a loan nearly instantly.

A question of AI fairness

For all the benefits of AI, experts are concerned about the potential bias and fairness issues that arise when AI-driven technology makes financial decisions. Because humans select the data used to train AI and machine learning models, any conscious or unconscious bias in that training data can make its way into the algorithms. For example, past hiring for IT jobs skewed toward male applicants, so an AI system trained on that history may disproportionately favor men over women for future openings.

Problems can arise even when algorithms are intentionally made blind to race or gender. In 2019, Apple partnered with Goldman Sachs to launch the Apple credit card, but Apple was forced to investigate its algorithms after reports that the Goldman Sachs credit-approval system discriminated against female applicants, even though applicants were not identified by sex. Similarly, Amazon made headlines in 2016 when outside researchers showed that its algorithms were systematically excluding Black neighborhoods from same-day delivery service, even though the system was designed to deliberately ignore race.
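A small, hypothetical sketch helps show why simply omitting a protected attribute doesn't make a model fair: a feature that is merely correlated with gender can reproduce the bias baked into historical approval decisions. The data, feature names, and biased labels below are invented for illustration and are not a reconstruction of any real company's system.

    # Illustrative sketch only: why omitting a protected attribute does not by itself
    # make a model fair. The data, the "proxy" feature, and the biased historical
    # labels are invented; this is not a reconstruction of any real company's system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000

    gender = rng.integers(0, 2, size=n)          # 0 = male, 1 = female (never shown to the model)
    proxy = gender + rng.normal(0, 0.3, size=n)  # a feature strongly correlated with gender
    income = rng.normal(60, 15, size=n)

    # Historical approvals encode a biased process that disfavored women.
    approved = ((income > 55) & ~((gender == 1) & (rng.random(n) < 0.4))).astype(int)

    # The model is "blind" to gender: it sees only income and the correlated proxy.
    X = np.column_stack([income, proxy])
    model = LogisticRegression(max_iter=1000).fit(X, approved)
    pred = model.predict(X)

    for g, name in [(0, "men"), (1, "women")]:
        print(f"Predicted approval rate for {name}: {pred[gender == g].mean():.2f}")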

Likewise, researchers have shown that ML models meant to help historically disadvantaged applicants by broadening credit-scoring criteria often wind up perpetuating discrimination. For example, AI-driven credit-scoring is contributing to a $17 billion credit gap between men and women, according to a study from Women's World Banking.

Addressing concerns

One of the earliest attempts to deal with the issue of AI fairness in finance came in 2018 from Singapore, where the country's central bank, the Monetary Authority of Singapore, convened international industry partners to address these concerns. Working with an array of global analysts and banks, they developed the FEAT Fairness Assessment Methodology to help financial services providers create fairer, more ethical AI use cases, according to Grace Abuhamad, the research lead for ServiceNow's AI Trust and Governance Lab. "Singapore helped kick-start a global conversation about trustworthiness in finance," she says.

FEAT treats fairness as a contested concept for which no universally accepted definition exists. According to the FEAT framework, financial institutions can't mitigate bias by pretending race and gender don't exist. Instead, they should come up with their own definition of fairness, define which groups might be affected by any specific financial decision, and articulate how those groups might be harmed. Institutions should also use independent auditors to periodically assess the fairness of AI-powered business models.
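As a rough illustration of what such a self-defined fairness check might look like in practice, the sketch below computes group-level approval rates and a simple parity ratio. The metric, the group labels, and the sample decisions are assumptions for illustration; FEAT itself does not prescribe any single measure.

    # Illustrative sketch only: auditing group-level approval rates against a
    # self-defined fairness measure (demographic parity). The metric and the
    # sample decisions are assumptions; FEAT does not prescribe a single measure.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: iterable of (group_label, approved_bool) pairs."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    def parity_ratio(rates):
        """Ratio of the lowest to the highest group approval rate (1.0 means parity)."""
        return min(rates.values()) / max(rates.values())

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = approval_rates(decisions)
    print(rates, "parity ratio:", round(parity_ratio(rates), 2))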

Other organizations have created governance frameworks for AI. In 2019, the European Union published its Ethics Guidelines for Trustworthy AI. The guidelines acknowledge that AI can have a negative impact on children, people with disabilities, and other historically disadvantaged groups, and they emphasize continuous auditing and oversight.

In 2018, Microsoft launched a research and advocacy program focused on fairness in AI. Like Singapore's FEAT, Microsoft's "AI Fairness Checklist" asserts that no single definition of fairness exists and that the goal of any AI-powered system should be to minimize harm. In 2021, the U.S. Federal Trade Commission put out its own statement calling for "truth, fairness, and equity" in the use of AI in financial decision-making. The statement urged financial institutions to tell the truth about their data, aim for transparency, and ensure their models "do more good than harm"; otherwise, the FTC may challenge a model's use as an unfair practice.

Concerns over bias and fairness of AI systems must be addressed, says ServiceNow's Abuhamad, but not by focusing on universal notions of fairness. "Instead," she says, "be fully transparent about what your vision of fairness is and how you've evaluated your algorithms based on that vision."
