In the midst of a global pandemic, upcoming elections and increasing racial tensions, we're seeing a shift in the way people are using Instagram. More than ever, people are turning to the platform to raise awareness for the racial, civic and social causes they care about. It's a big part of why we committed in June to review the ways Instagram could be underserving certain groups of people. We have a responsibility to look at what we build and how we build, so that people's experiences with our products better mirror the actions and aspirations of our community.

Below is an update on areas where we've made progress this summer. This is by no means comprehensive, and we have a lot more to do, but I'm going to share regular updates so our community knows that this work is important and ongoing.

New Equity Team

To ensure this work is fully supported, we've created a dedicated product group - the Instagram Equity team - that will focus on better understanding and addressing bias in our product development and people's experiences on Instagram. The Equity team will focus on creating fair and equitable products. This includes working with our Responsible AI team to ensure algorithmic fairness. In addition, they'll create new features that respond to the needs of underserved communities. Separate from this new product group, we're also hiring a new Director of Diversity and Inclusion for Instagram who will help advance Instagram's goal of finding, keeping and growing more diverse talent.

Harassment and Hate

We've developed and updated a number of our company policies to support communities worldwide. We updated our policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people. We also strengthened enforcement against people who make serious rape threats, and we'll now disable any account that makes these threats as soon as we become aware of them, rather than just removing the content. In addition, we'll ensure involuntary public figures - people who may not have sought attention and who we've seen are often members of marginalized communities - are protected from harassment and bullying just as they were before finding themselves in the public eye.

We've continued to prioritize the removal of content that violates our policy against hate groups. This includes removing 23 different banned organizations, over half of which supported white supremacy. In addition, we recently announced updates to take action on organizations tied to violence, such as QAnon.

We've also made some changes for creators and businesses. For example, people with Business and Creator accounts can now manage who can send them direct messages. And we've begun expanding comment warnings to include comments in Live, so people will be asked to reconsider comments that might be offensive before they're posted.

Verification

We spent the past two months reviewing Instagram's verification practices and have started making changes to ensure a fairer process. An account must meet certain criteria before we verify it, including a degree of notability. We measure notability through press articles about the person applying for verification. We've now expanded the list of press sources we consider in the process to include more Black, LGBTQ+ and Latinx media.

While follower count was never a requirement to get verified through the in-app form (which anyone can apply for), we did have systems in place that prioritized accounts with large followings to help us work through the tens of thousands of requests we receive every day. We've since removed follower count from the automated part of the process.

Distribution

In response to ongoing concerns around perceived censorship on Instagram, we recently published the guidelines we use to determine the types of content that can appear in places like Explore. Our hope is that people will better understand why some types of content aren't included in recommendations across Instagram and Facebook, and therefore may not be distributed as widely. In developing these guidelines, we consulted over 50 leading experts specializing in recommendation systems, social computing, freedom of expression, safety, and civil and digital rights.
