A French Facebook user last year criticized the lack of a health strategy in France and questioned what society had to lose by allowing doctors to prescribe a "harmless drug" such as hydroxychloroquine in an emergency, according to the Oversight Board.
The FDA has warned against the use of the drug in relation to the coronavirus.
Facebook took down the post, saying it contributed to the risk of "imminent physical harm," a violation of its "violence and incitement" Community Standard.
In its ruling, the Oversight Board disagreed, saying "a patchwork of policies found on different parts of Facebook's website make it difficult for users to understand what content is prohibited." The board advised Facebook to "explain what factors, including evidence-based criteria, the platform will use in selecting the least intrusive option when enforcing its Community Standards to protect public health."
As part of its agreement when it established the board, Facebook must restore a piece of content after an Oversight Board decision, but it isn't required to follow the board's other recommendations. In the French case, Facebook said in a February blog post that it is committed to implementing all of the Oversight Board's suggestions.
The Oversight Board also disagreed when Facebook took down a post in which a user wrongly attributed a quote to the Nazi propaganda minister, Joseph Goebbels; Facebook had said the post violated its Community Standard on "dangerous individuals and organizations."
Facebook told the Oversight Board that Goebbels is on an internal list of dangerous individuals. Facebook should make that list public, the board said. Facebook is assessing the feasibility of making the list public, the company said in its February post.
When artist Sunny Chapman of Hancock, N.Y., saw a well-known, widely circulating selfie of a Muslim woman in front of an anti-Muslim protest posted in a Facebook group last month, she wanted to join the conversation. In a reply to a comment disparaging Muslims, she wrote that she had traveled in Morocco by herself and "felt much safer there than I do here in the USA with all these crazy white men going around shooting people."
Facebook took down her comment, and informed her she had been restricted from posting or commenting for 30 days because the post violated the Community Standards on hate speech.
Ms. Chapman says she was confused because another comment posted after hers, voicing a similar perspective, had been left up. That post read: "Most acts of violence in this country are committed by white men. Usually christian, often white supremacists, and almost always white men," according to screenshots of the posts viewed by the Journal.
Ms. Chapman had earlier received a 30-day ban for calling racist two other users who were degrading Vice President Kamala Harris. She reported the other users' comments, but they weren't taken down.
"What I'm learning about Facebook is not to talk on Facebook," Ms. Chapman says.
Facebook reinstated Ms. Chapman's account after the Journal shared her example.
Recently, Facebook expanded the Oversight Board's scope to include decisions on user requests to remove content.
In recent years, Facebook has been relying more heavily on its artificial intelligence to flag problem content, according to people familiar with the company. In May 2020, the company touted its use of AI to take down content related to the coronavirus pandemic.
Facebook took down 6.3 million pieces of content under the "bullying and harassment" category during the fourth quarter of 2020, up from 3.5 million in the third quarter, in part because of "increasing our automation abilities," the company said in its quarterly Community Standards Enforcement report.
Users appealed about 443,000 pieces of content in the category, and Facebook restored about a third of them, the company said. Other categories saw fewer removals than in the previous quarter; removal totals can be affected by many outside factors, such as viral posts that inflate the numbers.
Facebook increasingly polices content in ways that aren't disclosed to users, in hopes of avoiding disputes over its decisions, according to current and former employees. The algorithms bury questionable posts, showing them to fewer users, quietly restricting the reach of those suspected of misbehavior rather than taking down the content or locking them out of the platform entirely.
Facebook has acknowledged the practice in some cases. To protect state elections in India, it said in a March blog post that it would "significantly reduce the distribution of content that our proactive detection technology identifies as likely hate speech or violence and incitement."
The use of automation for moderation at scale is "too blunt and insufficiently particularized," says Mr. Sylvain of Fordham. "AI and automated decision-making right now are just not good enough to do the hard work of sorting really culturally specific posts or ads."
Users say they have had content taken down from months or years earlier with no explanation, in what one user called "Facebook's robot gone wrong."
The Facebook spokeswoman says one reason for this could be a "banking system" technology employed internally that keeps track of problematic content and removes it when it is posted again.
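In broad strokes, such a "banking system" amounts to keeping fingerprints of previously removed content and matching new uploads against them. The following is a minimal illustrative sketch of that idea only, using a simple hash set; the class and method names are assumptions for illustration, not Facebook's actual implementation, which would use more sophisticated matching.

```python
# Illustrative sketch of content "banking": hashes of removed content
# are stored, and exact re-uploads are flagged for automatic removal.
# Names here (ContentBank, ban, is_banked) are hypothetical.
import hashlib


class ContentBank:
    def __init__(self):
        self._banked = set()  # fingerprints of previously removed content

    def _fingerprint(self, content: bytes) -> str:
        # Hash the raw bytes so only the digest needs to be stored.
        return hashlib.sha256(content).hexdigest()

    def ban(self, content: bytes) -> None:
        """Record removed content so future re-posts can be caught."""
        self._banked.add(self._fingerprint(content))

    def is_banked(self, content: bytes) -> bool:
        """True if this exact content was removed before."""
        return self._fingerprint(content) in self._banked


bank = ContentBank()
bank.ban(b"a policy-violating post")
print(bank.is_banked(b"a policy-violating post"))  # True
print(bank.is_banked(b"an unrelated post"))        # False
```

A real system would match near-duplicates (resized images, re-encoded video) rather than exact bytes, which is also why such matching can misfire on old or contextually different posts, as the users quoted above describe.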
The Oversight Board has advised Facebook to let users know when automated enforcement is being used to moderate content, and let them appeal those decisions to a human being, in certain cases. Facebook said in its blog post that it is assessing the feasibility of those recommendations.
Since the start of the coronavirus pandemic last year, Facebook users haven't had an opportunity to appeal bans at all. Instead, they are given the option to "disagree" with a decision, without further review, although Facebook did look at some of those cases and restore content. Facebook says that is because it has lacked the human moderators to review cases. In a late April decision, the Oversight Board urged Facebook to "prioritize returning this capacity."
Tanya Buxton, a tattoo artist in Cheltenham, England, has tried to appeal multiple restrictions on her Facebook accounts showcasing areola tattoos -- tattoos that are made to look like nipples for women who have had mastectomies.
How much of a breast, or nipple, can be shown on Facebook has been a particularly fraught issue.
In one of its internal documents elaborating on the rules, Facebook tackles sensitive subjects ranging from what constitutes "near nudity" to what constitutes sexual arousal.
Facebook users should be allowed to show breastfeeding photos, the company wrote in a document to moderators, but warned: "Mistakes in this area are sensitive. Breastfeeding activists or 'lactivists' are vocal in the media because people harass them in public. Any removals of this content make us exposed to suggestions of censorship."
While the public Community Standards are vague about Ms. Buxton's tattoos, Facebook's internal guidelines address the issue and say they should be allowed. But, the company acknowledged in its guidelines to moderators, "It can be really hard to make an accurate decision here."
Ms. Buxton, who says she isn't aware of the internal guidelines, has appealed each time she has been banned.
Facebook says it mistakenly removed two pieces of content from Ms. Buxton's page that it restored after questions from the Journal.
Last year, after an appeal, Facebook sent her an automated note, saying that because of the coronavirus pandemic, "we have fewer people available to review content."
"As we can't review your post again, we can't change our decision," Facebook wrote.
--Jeff Horwitz contributed to this article.
Write to Kirsten Grind at firstname.lastname@example.org
(END) Dow Jones Newswires