
Online Violent Content After Five Years of the Christchurch Call

AEIdeas

March 8, 2024

As the five-year anniversary of the March 15 live-streamed massacre of 51 people (with 50 more injured) by a lone shooter at two Christchurch, New Zealand, mosques approaches, it is apposite to consider the activities and effectiveness of the Christchurch Call, the institution established in the wake of the attack under the auspices of then–New Zealand Prime Minister Jacinda Ardern (now special New Zealand envoy to the Christchurch Call) and French President Emmanuel Macron to “eliminate terrorist and violent extremist content online.”

The Christchurch Call is a “community of over 130 governments, online service providers, and civil society organisations acting together to eliminate terrorist and violent extremist content online.” Its supporters agreed to 25 commitments aimed at delivering content “transparently and in a way that respects and promotes human rights and a free, open, secure internet.” These commitments range from developing and applying laws (for governments) to specific technical measures (for platform operators) and supporting and undertaking research “to better understand, prevent and counter terrorist and violent extremist content online, including both the offline and online impacts of this activity.” More detailed explanations of the commitments are in a past blog post here; my assessment of the Call’s three-year performance is here.

Much water has flowed under the bridge since the inception of the Christchurch Call in 2019, as illustrated by its extension into the fields of algorithm research and artificial intelligence. The most recent Leaders Summit report reveals a creep in scope to include “protecting and promoting human rights online and a free, open, secure global internet as a force for good and as a digital platform for innovation and social progress.” The report also pays specific attention to online content relating to the conflict in Gaza and Israel, considering and incorporating, inter alia, “approaches for de-escalating tension and preventing on- and offline hate and violence, including strategic communications and positive interventions.”

But as I noted in 2019, the Call is largely concerned with aspirational goals, at least from the government perspective. What has it achieved “on the ground”?

The report prepared for the new New Zealand Prime Minister last year contains much evidence of multi-stakeholder meetings having occurred. It also claims “substantial positive progress in eliminating terrorist and violent extremist content (TVEC) from social media platforms, leading to improved crisis response systems and fostering a global effort supported by dedicated structures to stay ahead of terrorist and online extremist threats.” The group is also working with its stakeholder communities to “manage the risks and realise the positive potential of artificial intelligence.” 

Yet for the most part, the real gains in content-monitoring practices and the tools developed to manage this task appear to have come largely from the platforms themselves. For example, a tool promoted by the Christchurch Call to help smaller online platforms protect their users was actually developed by Google. Likewise, a similarly promoted Transparency Initiatives Portal was launched by a firm that does not appear among the Call’s Network Members, Partners, or Supporters. Would these have been developed and released without the existence of the Christchurch Call? It seems highly likely.

Moreover, while individual platforms may have improved their content-detection algorithms, it is not clear that this has achieved the aspiration of reducing terrorism in the real world or delivered better management of the availability of shocking content online. Just this past month, a Gaza protestor objecting to US support for Israel posted a link on Facebook to a live-streamed Twitch video of his self-immolation in front of the Israeli Embassy in Washington, DC. The content of the video is truly shocking, yet it (and derivatives of it) remains freely available online, albeit with warnings that the content is graphic. Reportedly, so too is the Facebook post. Ironically, this attack was arguably a copycat of another self-immolation in December 2023, carried out for the same purpose (though it is not known whether that event was live streamed). One of the main reasons to remove the Christchurch live stream and associated content was supposedly to reduce the likelihood of copycat incidents by vulnerable viewers.

If the objectives of the Christchurch Call are taken at face value, then it appears remarkably unsuccessful in preventing the sharing of violent content relating to its new priority area. On the other hand, this may substantiate claims of bias in content moderation, something that the Call set itself up to address.

See also: Hate Speech Revisited in the Home of the Christchurch Call | W(h)ither the Christchurch Call? | Should Internet Platforms Be Classified as Common Carriers? | Can the FCC’s Open Internet Order Really Increase Consumer Safety?