After the Christchurch, New Zealand, mosque shooting in 2019, Facebook was widely criticized for allowing the shooter to livestream his killings for 17 minutes uninterrupted. Saturday’s racially motivated made-for-the-internet mass shooting in Buffalo, New York, went differently.
This time, the shooter shared his appalling acts on Twitch, a livestreaming video app popular with gamers, where the stream was shut down much more quickly: less than two minutes after the violence began, according to the company. When Twitch cut off the stream, it reportedly had just 22 views.
That didn’t stop people from spreading screen recordings of the Twitch livestream — and the shooter’s writings — all over the internet, where they racked up millions of views, some of which came via links shared widely on Facebook and Twitter.
“It’s a tragedy because you only need one copy of the video for this thing to live forever online and endlessly multiply,” said Emerson Brooking, a resident senior fellow at the Atlantic Council think tank who studies social media.
The episode shows that, while major social media platforms like Facebook and Twitter have gotten better since Christchurch at slowing the spread of gruesome depictions of mass violence, they still can’t stop it entirely. Twitch was able to quickly cut off the shooter’s real-time video feed because it’s an app designed for sharing a specific kind of content: first-person live gaming videos. Facebook, Twitter, and YouTube have a much wider pool of users posting a much broader range of content, which is shared via algorithms designed to promote virality. For Facebook and Twitter to stop the spread of every trace of this video, they would have to fundamentally alter how information moves through their apps.
The unfettered spread of murder videos on the internet is an important problem to solve. These videos deprive victims of their dignity in their final moments and compound their families’ grief. They also reward the fame-seeking behavior of would-be mass murderers, who plan horrific violence designed to go viral and promote their hateful ideologies.
Over the years, major social media platforms have gotten much better at slowing and restraining the spread of these types of videos. But they haven’t been able to fully stop it, and likely never will.
So far, these companies’ efforts have focused on better identifying violent videos and then blocking users from sharing that same video, or edited versions of it. In the case of the Buffalo shooting, YouTube said it has taken down at least 400 different versions of the shooter’s video that people have tried to upload since Saturday afternoon. Facebook is similarly blocking people from uploading different versions of the video, but wouldn’t disclose how many. Twitter also said it is removing instances of the video.
These companies also help each other identify and block or take down this type of content by comparing notes. They now share “hashes,” or digital fingerprints of an image or video, through the Global Internet Forum to Counter Terrorism, or GIFCT, an industry consortium founded in 2017. Exchanging hashes lets each platform find and take down copies of a violent video that another platform has already flagged. It’s similar to how platforms like YouTube scan uploads for videos that violate copyright.
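To make the mechanics concrete, here’s a minimal sketch in Python of what exact hash-matching looks like. The set of shared fingerprints and the function names here are hypothetical stand-ins, and production systems are far more sophisticated than a plain file digest:

```python
import hashlib

# Hypothetical stand-in for the fingerprints GIFCT members exchange.
SHARED_HASHES = {
    "3a1f0c9e...",  # truncated illustrative digest, not a real entry
}

def sha256_of_file(path: str) -> str:
    """Compute a SHA-256 digest of a file's raw bytes, in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block_upload(path: str) -> bool:
    """Reject an upload whose digest matches a shared fingerprint."""
    return sha256_of_file(path) in SHARED_HASHES
```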
After the Christchurch shooting in 2019, GIFCT created a new all-hands-on-deck alert system, called a “content incident protocol,” to start sharing hashes in an emergency like a mass shooting. For the Buffalo shooting, a content incident protocol was activated at 4:52 pm ET Saturday, about two and a half hours after the shooting began. And as people who wanted to spread the video altered the clips to foil the hash-trackers, say by adding banners or zooming in on parts of the clips, companies in the consortium tried to respond by creating new hashes that could flag the altered videos.
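An exact digest like the one sketched above changes completely if even a single pixel changes, which is why re-encoded or edited clips slip past it. Perceptual hashes are built to tolerate those edits. Here’s a minimal illustration of one classic approach, an “average hash” of a single video frame, using the Pillow imaging library. It’s meant to show the general idea, not the actual fingerprinting GIFCT members use:

```python
from PIL import Image  # pip install Pillow

def average_hash(frame_path: str) -> int:
    """64-bit perceptual 'average hash' of one video frame.

    The frame is shrunk to 8x8 grayscale and each pixel becomes one
    bit: 1 if brighter than the frame's mean, 0 otherwise. Small
    edits (a banner, a slight zoom, re-encoding) flip only a few bits.
    """
    img = Image.open(frame_path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | int(pixel > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count how many bits differ between two hashes."""
    return bin(a ^ b).count("1")

# Frames whose hashes sit within a small Hamming distance are treated
# as near-duplicates; the threshold below is illustrative only.
MATCH_THRESHOLD = 10
```

In practice, platforms use purpose-built video fingerprints rather than single-frame hashes, but the core trade-off is the same: the more robust a hash is to edits, the more careful a platform has to be about false matches.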
But hashing videos only goes so far. One of the key ways the Buffalo shooter’s video spread on mainstream social media was not through people posting the video directly, but through links to copies hosted on other websites.
In one example, a link to the shooter’s video hosted on Streamable, a lesser-known video site, was shared hundreds of times on Facebook and Twitter in the hours after the shooting. That link gained over 43,000 interactions, including likes and shares, on Facebook, and it was viewed more than 3 million times before Streamable removed it, according to the New York Times.
A spokesperson for Streamable’s parent company, Hopin, did not answer Recode’s repeated questions about why the platform didn’t take down the shooter’s video sooner. The company did send a statement saying that these types of videos violate its community guidelines and terms of service, and that it works “diligently to remove them expeditiously as well as terminate accounts of those who upload them.” Streamable is not a member of GIFCT.
In a widely circulated screenshot, a user showed that they had reported a post with the Streamable link and an image from the shooting to Facebook soon after it was posted, only to get a response from Facebook that said the post didn’t violate its rules. A spokesperson for Meta confirmed to Recode that posts with the Streamable link did indeed violate its policies. Meta said that the reply to the user who reported the link was made in error, and the company is looking into why.
Ultimately, because of how all of these platforms are designed, this is a game of whack-a-mole. Facebook, Twitter, and YouTube have billions of users, and among those billions there will always be some who find and exploit loopholes in these systems. Several social media researchers have suggested the major platforms could do more by monitoring fringe websites like 4chan and 8chan, where links to the video were originating, in order to identify and block those links early. Researchers have also called for the platforms to invest more in their systems for handling user reports.
Meanwhile, some lawmakers have blamed social media companies for allowing the video to go up in the first place.
“[T]here’s a feeding frenzy on social media platforms where hate festers more hate, that has to stop,” New York Gov. Kathy Hochul said at a news conference on Sunday. “These outlets must be more vigilant in monitoring social media content, and certainly the fact that this could be livestreamed on social media platforms and not taken down within a second says to me that there is a responsibility out there.”
Catching and blocking content that quickly hasn’t yet proved feasible. Again, it took Twitch less than two minutes to take down the livestream, one of the fastest response times we’ve seen so far from a social media platform that lets people post in real time. But those two minutes were more than enough time for links to the video to go viral on larger platforms like Facebook and Twitter. The question, then, is less about how quickly these videos can be taken down and more about whether there’s a way to prevent the afterlife they gain on major social media networks.
That’s where the fundamental design of these platforms butts up against reality. They are machines designed for mass engagement and ripe for exploitation. If and when that will change depends on whether these companies are willing to throw a wrench in that machine. So far, that doesn’t look likely.
Peter Kafka contributed reporting to this article.