• Proactive Detection: Artificial intelligence (AI) has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than user reports. This helps us find harmful content and prevent it from being seen by hundreds or thousands of people.
  • Automation: AI has also helped scale the work of our content reviewers. Our AI systems automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy, so our reviewers can focus on decisions that require more expertise to understand the context and nuances of a particular situation. Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same content multiple times (a sketch of this flow appears after this list). These systems have become even more important during the COVID-19 pandemic, with a largely remote content review workforce.
  • Prioritization: Instead of simply reviewing reported content in chronological order, our AI prioritizes the most critical content for review, whether it was reported to us or detected by our proactive systems. This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation (also sketched after this list). Where our systems are near-certain that content breaks our rules, they may remove it automatically; where there is less certainty, they prioritize the content for our teams to review.
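
The automation step described above can be pictured as a small amount of glue logic: collapse identical reports into a single decision, act automatically only when an upstream classifier is highly confident, and hand everything else to human reviewers. The following Python sketch is illustrative only; the threshold, the hashing approach and every name in it are assumptions, not Facebook's actual systems.

```python
# Minimal sketch of automated report handling (all names and the
# threshold are hypothetical, not Facebook's real pipeline).
import hashlib

AUTO_ACTION_THRESHOLD = 0.99   # assumed "highly likely to be violating" cutoff

seen_report_hashes: set[str] = set()   # identical reports collapse here

def fingerprint(content: str) -> str:
    """Hash content so identical reports map to a single decision."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def handle_report(content: str, violation_score: float) -> str:
    """Resolve a report automatically when possible.

    violation_score is assumed to come from an upstream classifier,
    where 0.0 means benign and 1.0 means certainly violating.
    """
    key = fingerprint(content)
    if key in seen_report_hashes:
        return "duplicate: reuse the earlier decision"
    seen_report_hashes.add(key)

    if violation_score >= AUTO_ACTION_THRESHOLD:
        return "auto-actioned"            # near-certain violation
    return "queued for human review"      # context and nuance needed
```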
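
The prioritization described in the Prioritization bullet can be sketched as a scoring function over virality, severity of harm and likelihood of violation feeding a priority queue. The simple product used to combine the factors, and all of the names, are assumptions made for illustration; the real ranking system is not public.

```python
# Hypothetical review-queue prioritization: higher combined score = reviewed sooner.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float                        # heapq pops the smallest value first
    content_id: str = field(compare=False)

def priority_score(virality: float, severity: float, likelihood: float) -> float:
    """Combine the three factors (each assumed normalized to 0..1)."""
    return virality * severity * likelihood

review_queue: list[ReviewItem] = []

def enqueue(content_id: str, virality: float, severity: float, likelihood: float) -> None:
    # Negate the score so the most harmful content surfaces first from the min-heap.
    score = priority_score(virality, severity, likelihood)
    heapq.heappush(review_queue, ReviewItem(priority=-score, content_id=content_id))

def next_for_review() -> str:
    """Return the most critical piece of content awaiting review."""
    return heapq.heappop(review_queue).content_id

# Example: a viral, severe, likely-violating post jumps ahead of a milder one.
enqueue("post_a", virality=0.2, severity=0.3, likelihood=0.9)
enqueue("post_b", virality=0.9, severity=0.8, likelihood=0.95)
print(next_for_review())   # -> "post_b"
```

In practice any monotone combination of the factors would serve; the point is only that reported and proactively detected content share one queue ordered by expected harm rather than by arrival time.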
