Facebook, YouTube, Twitter Scramble to Remove Video of New Zealand Mosque Shooting -- Update

03/15/2019 | 07:22pm EDT

By Jon Emont, Georgia Wells and Mike Cherney

Scenes of Friday's New Zealand mosque massacre were streamed live on Facebook and posted on YouTube and Twitter, a gruesome example of how social-media platforms can be used to spread terror despite heavy spending by their owners to contain it.

New Zealand police said the footage of the attack on a pair of mosques, which left 49 dead, was "extremely distressing" and urged people not to circulate it. Yet the video was widely available online Friday as the tech platforms scrambled to pull down the offending posts only to have them reappear elsewhere.

The 17-minute video shows a gunman walking through a mosque and firing at worshipers who slump to the floor. At one point the man, whose face is visible in parts of the video, appears to gun down a victim at close range before reloading and continuing the rampage.

A Facebook Inc. spokeswoman said the company removed the video after New Zealand police flagged it, and deleted the Facebook and Instagram accounts belonging to the alleged shooter, Brenton Tarrant, who has been charged with murder.

Twitter Inc. said it had suspended Mr. Tarrant's account and was working to remove the video from the platform. A spokesman for YouTube, a unit of Alphabet Inc.'s Google, said it has removed thousands of videos related to the incident and that "shocking, violent and graphic content has no place on our platforms."

All three platforms have struggled to block, uncover and remove violent content despite a public outcry and political pressure. They have invested heavily in artificial-intelligence systems designed to detect violence, and have hired thousands of moderators to review content flagged by users.

But the sheer volume of material posted by the platforms' billions of users, along with the difficulty in evaluating which videos cross the line, has created a minefield for the companies.

In addition, even once the mainstream platforms take action, disturbing or offensive content often lives on in darker corners of the web. Late Friday, for example, the video of the shooting was widely available for streaming or download on sites including 4Chan and Gab, popular among right-wing extremists and free-speech absolutists.

"This latest atrocity only underscores the fact that there is no responsible way to offer a live-streaming social media service," said Mary Anne Franks, a law professor at the University of Miami and president of the Cyber Civil Rights Initiative, which advocates for legislation to address online abuse.

After the 2016 launch of video service Facebook Live, dozens of violent acts were broadcast in real time, including the 2017 murder of a Cleveland man. At the time, Facebook acknowledged that its process for reviewing content contained flaws, and it pledged to improve it.

On the flip side, Facebook Live in 2016 broadcast the Minnesota shooting of Philando Castile, who died after a confrontation with a police officer during a traffic stop, an example that many say shows the potential benefits live-streaming can provide.

"I think livestreaming is, on balance, good for the world -- the ability to livestream police violence, as in the case of Philando Castile, has been extremely powerful in holding authorities accountable," said Ethan Zuckerman, director for the Center for Civic Media at the Massachusetts Institute of Technology. "The issue is responding to reports of violent livestreams in time, and hardening platforms against redistribution of this content."

Jennifer Grygiel, assistant professor of communications at Syracuse University, suggested YouTube put a hold on videos that include pertinent keywords during a tragedy, and moderate these videos before posting them. "This is content that violates their community standards, so I'm not asking them to do anything beyond what they have said they would do," Prof. Grygiel said.

YouTube didn't respond to a request for comment about the idea of a delay.

Facebook says it has more than 15,000 contractors and employees reviewing content, part of a 30,000-person department working on safety and security issues at the company. The department includes engineers building technical tools to block graphic content, as well as employees dubbed "graphic violence specialists" who make decisions about whether violent images posted on the site have social or news value or whether, as in the case of beheadings, they are meant to terrorize and have no place on the site, Monika Bickert, head of global policy management at Facebook, said in an interview in February.

YouTube likewise makes exceptions for violent content that it deems to have documentary or news value.

After the New Zealand shooting, Facebook's content-policy team designated the incident as a terrorist attack, meaning that any praise or support of the event violates the company's rules. Facebook teams have also been deleting the accounts of people who impersonate the shooter or allege the incident didn't happen, a spokeswoman said.

After the live video was removed, Facebook set up a filter to detect and delete any similar videos, and is using artificial intelligence to find videos that aren't an exact match but also depict the shooting.

"We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again," the spokeswoman said. "We urge people to report all instances to us so our systems can block the video from being shared again."

She said Facebook notifies other sites when it detects links to the video hosted elsewhere so those platforms can delete them.

Artificial-intelligence experts said no technology is available that would allow for the foolproof detection of violence on streaming platforms. Even teaching machines to recognize a person brandishing a gun is difficult, as there are many different types of guns, and many different stances for holding them. Computers also struggle to distinguish real violence from fictional films.

"There's a perception that AI can do everything and detect everything, but it's a matter of how much room do you leave to produce false alerts," said Itsik Kattan, the CEO of Agent Video Intelligence, a video-analytics company specializing in AI applications for surveillance.

Taking down violent videos often doesn't stop their spread. Sidney Jones, director of the Institute for Policy Analysis of Conflict in Jakarta, says that by the time big tech companies remove violent videos, they have often been spread via email and messaging applications and remain accessible. Islamic State live-streamed terrorist attacks to gain followers and attention, she said, and now other violent actors are using social media in a similar way.

"It's the classic objective of terror, which is to sow the idea that you will be next," Ms. Jones said.

--Yoree Koh and Rob Copeland contributed to this article.

Write to Jon Emont at jonathan.emont@wsj.com, Georgia Wells at Georgia.Wells@wsj.com and Mike Cherney at mike.cherney@wsj.com
