- A shooter livestreamed video of an attack on a mosque in New Zealand on Friday. It was one of two mosque attacks that day that killed at least 49 people.
- Facebook, Twitter and YouTube said they removed the original videos, but hours later people reported finding versions on the platforms.
- These companies have previously taken steps to prioritize authoritative information in the wake of breaking news events and prevent harmful content from being streamed live.
The shooter in at least one of the two mosque attacks in New Zealand on Friday used social media to stream his deadly rampage live.
Shortly after, tech giants scrambled to remove his accounts, but versions of the video remained on some sites hours after the shootings, which killed at least 49 people.
Facebook, Twitter and Google’s YouTube all said they removed the original video following the attack. But hours later, people still reported online that they were able to find versions of the video on the platforms.
Twitter removed the original video and suspended the account that posted it, but is still working to remove copies posted by other accounts. Twitter said that both the account and the video violated its policies.
“We are deeply saddened by the shootings in Christchurch today,” a Twitter spokesperson said in a statement. “Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this. We also cooperate with law enforcement to facilitate their investigations as required.”
Facebook also removed the stream and has been working to remove content praising the attack.
“Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” said Mia Garlick of Facebook’s New Zealand office. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand police as their response and investigation continues.”
Facebook has previously experienced abuse of its livestream function and has taken steps to detect problematic streams in real time. In 2017, the company added measures to detect live videos in which people express thoughts of suicide, including using artificial intelligence to streamline reporting and adding live chat with crisis support organizations. Those policies followed a series of suicides that were reportedly livestreamed on Facebook’s platform.
Several people tweeted that they were able to find re-postings of videos of the attack on YouTube more than 12 hours after it happened, even though YouTube said it took down the original video, which violated its policies. A straightforward search on YouTube will generally yield legitimate reports from news organizations, but graphic videos could still be easily found if a user filtered results by upload date.
What responsibility do we want these companies to have? On Reddit, one of the most popular sites on the Internet, people have been narrating the video on a forum called "watchpeopledie." After more than an hour, this was posted: pic.twitter.com/C8nmt7CZgh
— Drew Harwell (@drewharwell) March 15, 2019
These companies all like to tout their ability to fingerprint videos and images to remove them quickly from the web. Are they doing that tonight? Are they actively looking for hashtags promoting the video and their variations?
— Sheera Frenkel (@sheeraf) March 15, 2019
Tried again, more videos. pic.twitter.com/45eXO7Wlrj
— Jason Abbruzzese (@JasonAbbruzzese) March 15, 2019
YouTube has taken steps to ensure legitimate news reports are prioritized when searching for a trending event, rather than other videos that have the potential for spreading misinformation. In July, YouTube said in a blog post that its Top News section would highlight videos from news organizations and it would link to news articles immediately in the wake of a breaking news event.
Those moves can prevent videos from bubbling up at the top of search results or appearing in YouTube’s trending section, but that doesn’t necessarily stop them from being uploaded to the site.
A YouTube spokesperson said in a statement, “Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities.”
The video also appeared in a Reddit forum dedicated to violent videos, where users discussed and commented on the images. The forum is gated by a warning about disturbing content, which visitors must acknowledge before viewing the page. Reddit removed the video and similar links at the request of New Zealand police, according to the Redditor who first posted the video. But users who found the video elsewhere online claimed to have downloaded copies and were offering to share the files in direct messages.
“We are actively monitoring the situation in Christchurch, New Zealand,” a Reddit spokesperson said. “Any content containing links to the video stream are being removed in accordance with our site-wide policy.”