If there was ever any doubt that US politics shapes the content social media delivers worldwide, it was erased on January 7 when Meta announced it was scrapping its fact-checking on Facebook, Instagram, and Threads in favor of a community notes system similar to that used on the platform X. In a video accompanying the announcement, Meta CEO Mark Zuckerberg stated that the strategic volte-face came because the recent election felt “like a cultural tipping point towards once again prioritizing speech.” Moreover, Zuckerberg noted it was “time to get back to our roots around free expression,” as the firm’s past strong advocacy for and practice of content moderation had led to “just too many mistakes and too much censorship.” Meta will also ease restrictions on discussions of contentious topics such as immigration and gender identity.
Some have viewed Meta’s decision as a cynical political move to please President Trump and his allies. However, that presupposes Meta’s previous support of active fact-checking and content moderation was also a cynical political choice to please the previous administration. An alternative view is that Meta is returning to its origins as a facilitator of interactive communications between members of communities. By handing responsibility for checking the veracity and provenance of content back to its users, Meta has signaled that its platforms should reflect society’s views in all their diversity and inevitable controversy, rather than seeking to shape society by governing communications in accordance with specific politically sanctioned preferences.
Meta’s decision should be seen as a triumph for democracy, which thrives when a plurality of views can be discussed and debated freely. Free speech and democracy have stood hand in hand in the US since the founding. However, others have warned that the move presages a tsunami of new online misinformation spreading at an ever faster rate. These same commentators also point to the economic consequences for the 90 accredited fact-checking organizations around the world that receive funding from Meta.
The very concept of misinformation is loaded with potential for bias and capture by specific interest groups—not least those with political objectives. Fact-checkers report on the “accuracy of statements by public figures, major institutions, and other widely circulated claims of interest to society.” While they can be good at verifying simple, straightforward facts (e.g., did person X actually say Y?), they are not experts in much of the complex, and often scientific and technical, subject matter drawn to their attention. Neither are they able to distinguish how a specific piece of information may be considered fact or misinformation differently depending on the identity of the persons creating or consuming it. What one person perceives as an incontrovertible truth can be equally validly perceived by another to be blatant misinformation. Such information becomes disinformation if, in the naysayer’s view, it was shared with the specific intention to deceive. Fact-checkers may be able to detect and warn where such conflicts exist, but they are unable to shed any light on the motives of those doing the sharing.
Take, for example, a simple statement relevant in New Zealand: The Treaty of Waitangi is the country’s founding document. Legally speaking, this is incorrect, as the treaty has no formal constitutional status and was not ratified by the New Zealand Parliament. Technically, it is just a contract. However, it paved the way for cooperation between the indigenous and settler peoples from which the colony and later state of New Zealand emerged. The statement is therefore true in one sense and false in another. Unsurprisingly, in a political context in which the question of the state’s honoring of promises made to indigenous people in the Treaty has come into the spotlight, both views have been hijacked for political purposes, leading to naysayers being accused of spreading if not disinformation, then at least hateful views fueling conflict. For instance, a prominent former political leader questioning the treaty’s status was deplatformed by a university because of his views.
Truth, therefore, like beauty, lies largely in the eyes—or mind—of the beholder. As there is no single “truth” for most complex issues, it is not clear what benefit fact-checking actually delivers, except to identify that conflict exists. In the spirit of free speech, allowing holders of different views to debate them without censorship and to take responsibility themselves for identifying the arguments for or against them—that is, community monitoring—appears to be the only viable course of action for social media.
As for the effectiveness of community monitoring, it is hard to argue against it in the face of the success of initiatives such as Wikipedia.