In December 2023, the Federal Trade Commission (FTC) announced a settlement with Rite Aid over the company's use of facial recognition technology (FRT) as part of its in-store surveillance for theft deterrence. In this groundbreaking settlement, the FTC took its first enforcement action against a company for "algorithmic unfairness" - i.e., using artificial intelligence (AI) in an allegedly discriminatory manner - and sent a strong warning to companies that use algorithms for decision making to carefully evaluate their algorithmic determinations. In fact, FTC Commissioner Alvaro Bedoya stated that the Rite Aid order "is a baseline for what a comprehensive algorithmic fairness program should look like."

In addition to the warning, the settlement provides a roadmap for companies to assess and mitigate potential bias in their use of AI and other automated decision-making systems. It also continues the FTC's recent trend of requiring algorithmic and data disgorgement - and provides a preview of the other types of obligations the FTC may seek to impose in these situations.

The consent order imposes stringent obligations on Rite Aid, including:

  • Prohibition on FRT: Rite Aid is banned from using FRT for five years.
  • Algorithmic and data disgorgement: Rite Aid must delete, and direct third parties to delete, any images or photos collected through Rite Aid's FRT, as well as any data, models, algorithms, or other products developed using those images and photos.
  • AI bias monitoring program and risk assessment: Rite Aid must implement a comprehensive monitoring program that identifies and addresses risks associated with what the FTC views as algorithmic bias and the related harms that the FTC believes may disproportionately affect consumers, including based on their race, ethnicity, gender, sex, age, or disability.
  • Vendor monitoring: Rite Aid must conduct periodic assessments for algorithmic bias of its vendors that handle personal information.
  • Addressing consumer complaints about algorithmic determinations: Rite Aid must investigate and respond to consumer complaints about actions it takes against consumers based on algorithmic determinations.
  • Customer notifications: Rite Aid must inform consumers when they are enrolled in an FRT system, how they may contest their enrollment in that system, when it takes an action against them based on an algorithmic determination that could harm them, and how they may contest those actions.
This is not the first time the FTC has expressed its view that AI tools can be inaccurate, biased or discriminatory by design - a view the agency has highlighted several times in its blog posts in recent years. In addition, with the proliferation of AI and other automated systems used to streamline corporate decision-making, other regulators have been focusing on these technologies - and on how to mitigate the potential that they perpetuate unlawful bias or automate unlawful discrimination. For example, New York City Local Law 144 mandates bias audits and notice requirements for employers or employment agencies that use automated employment decision-making tools. Similarly, the California Consumer Privacy Act draft regulations and the Colorado Privacy Act both require assessments that include bias audits for automated decision-making technology. And most recently, a group of US senators asked the Department of Justice to investigate whether its funding of FRT may violate the Civil Rights Act, given their concerns that such technology may reinforce racial bias in the criminal justice system.

In light of this FTC settlement and regulators' focus on this area, companies that use automated decision-making technologies should ensure that they have a framework in place to monitor, detect and mitigate the effects of algorithmic bias. In creating such a framework, companies should consider:

    • Creating an inventory of all algorithms currently in use.
    • Screening for bias by, for example, comparing the ideal target the algorithm is intended to predict against the target it actually uses (one simple screening check is sketched after this list).
    • Retraining any biased algorithms or, alternatively, discontinuing or suspending their use.
    • Conducting third-party assessments of information security programs.
    • Training employees appropriately on the risks of algorithmic bias.
    • Monitoring and contractually requiring service providers that handle personal information to maintain safeguards to address algorithmic bias.
With the increased risk of regulatory scrutiny, companies should act now to proactively mitigate the risk of enforcement.
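To make the screening step more concrete, the following is a minimal, hypothetical sketch (in Python) of one common form of bias screening: comparing the rate at which an automated system flags individuals across demographic groups, loosely in the spirit of the impact-ratio audits required under New York City Local Law 144. The group labels, threshold, and sample data below are illustrative assumptions only and are not drawn from the FTC order.

    # Hypothetical illustration only: compare per-group "flag" rates produced by an
    # automated system and escalate for review if the disparity between groups is large.
    from collections import defaultdict

    def flag_rates(records):
        """records: iterable of (group, flagged) pairs; returns per-group flag rates."""
        totals, flags = defaultdict(int), defaultdict(int)
        for group, flagged in records:
            totals[group] += 1
            flags[group] += int(flagged)
        return {g: flags[g] / totals[g] for g in totals}

    def disparity_ratio(rates):
        """Ratio of the highest group flag rate to the lowest (1.0 means parity)."""
        lowest = min(rates.values())
        return max(rates.values()) / lowest if lowest else float("inf")

    if __name__ == "__main__":
        # Illustrative data: (demographic group, whether the system flagged the person).
        sample = [
            ("group_a", False), ("group_a", False), ("group_a", False), ("group_a", True),
            ("group_b", False), ("group_b", True), ("group_b", True), ("group_b", True),
        ]
        rates = flag_rates(sample)
        print("Flag rates by group:", rates)
        # An arbitrary 1.25x threshold; a confirmed disparity would prompt retraining,
        # suspension, or further investigation, as the considerations above suggest.
        if disparity_ratio(rates) > 1.25:
            print("Disparity exceeds threshold - escalate for review.")

In practice, the appropriate metrics, protected groups, and thresholds depend on the decision at issue and the applicable legal standards, and should be set with both legal and technical input.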

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Ms Lei Shen
Cooley LLP
55 Hudson Yards
New York, NY 10001-2163
UNITED STATES
E-mail: zthoughtleadership@cooley.com
URL: www.cooley.com

© Mondaq Ltd, 2024 - Tel. +44 (0)20 8544 8300 - http://www.mondaq.com, source Business Briefing