By Alexandra Bruell
Facebook, Google and Twitter sought to reassure one of the world's largest ad buyers that they were striving to block violent content after video of the New Zealand terrorist attacks ricocheted around social media.
Facebook Inc. told media-agency network GroupM that it has "all hands on deck," said Kieley Taylor, global head of social at GroupM, part of WPP PLC.
Facebook said engineers were finding and removing a video by the gunman, who used Facebook Live to stream at least part of his deadly assault on two mosques, as well as altered copies posted by others, Ms. Taylor said.
Twitter Inc. said Friday it was working to remove the video. The company told GroupM that it manually reviews videos according to its brand-safety policies before they can be monetized through ads or sponsorships.
The video and a screed by the killer continued circulating, however, even as the companies said they were striving to take them down. The incident again raised the question of whether the largest social networks can control the content on their platforms.
"Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement," Mia Garlick, a spokeswoman for Facebook in New Zealand, said in a statement. "We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again."
She urged users to report copies of the video they encounter so Facebook's systems can block them from being shared again.
"Our hearts go out to the victims of this terrible tragedy," a spokesman at YouTube, part of Alphabet Inc.'s Google, said in a statement. "Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it."
YouTube has lost some advertisers at least temporarily in recent years over cases of ads appearing near videos with violent content, hate speech or, most recently, comments by pedophiles. Most have returned as YouTube took measures to provide a "brand-safe" environment.
Friday's scramble highlights the need to continually monitor social media, according to Ms. Taylor. GroupM wants Facebook to be more transparent about the steps it takes against inappropriate content on a continuing basis, and the group works closely with Facebook and other social giants to counter such content, she said.
"They're not going to be able to identify every piece of content that is not safe for the platform, but there's definitely an opportunity to better limit the sharing, especially around live content, knowing how graphic and violent it can be," she said.
Write to Alexandra Bruell at firstname.lastname@example.org