
Take A Deeper Look.
Following X’s (formerly Twitter) turn to crowdsourced moderation, Facebook (Meta Platforms, Inc.) faces backlash for discontinuing its fact-checking initiative amid concerns about impartial content moderation.
Today, we’ll talk about how fact-checking is giving way to crowdsourced moderation, and what that shift means for brands amid free speech debates and fake news.
Once upon a time in the digital age, the internet became a bustling marketplace of ideas, opinions, life interests, and even conspiracies, like whether aliens or mermaids exist or whether a certain general hid gold in the Philippine tunnels. Like eager vendors, people pitched them all in carefully crafted posts, vying for attention.
Soon after, some peddlers slipped counterfeit wares into the mix. Misinformation crept in, fake news became popular trinkets, and conspiracies were sold as facts. It became tricky to separate the real deal from those out to make a fast buck at society’s expense. But would you walk away duped?
In response, online communities adopted content moderation, an inspection of the goods. What began as efforts to filter explicit material and spam soon expanded to include more complex issues like political misinformation and election interference. However, as highlighted in our previous article, questions loom over whether certain protective measures, be they bans, regulations, or moderation, are truly about protecting the public or exerting control over narratives. If so, will freedom of speech outweigh fact-checking, or is coexistence in the cards?
Yet the perversion of freedom of speech may win out, at a steep cost to societal and brand trust. While prioritizing freedom of speech is essential, easing content moderation under the guise of protecting free speech ignores the crucial role fact-checking plays in safeguarding institutions, including brands, against politicization and disinformation.
Why was content moderation, like fact-checking, established despite free speech concerns?
The mid-2010s social media boom (Facebook, Twitter, etc.) unfortunately bred disinformation. CNN reported that disinformation originated from fake news factories (e.g., in Veles, Macedonia) and Russian bots. These entities later exploited the 2016 US election to spread misinformation, as exemplified by Cameron Harris’s fabricated story about pre-filled ballots for Hillary Clinton. Similarly, several candidates and government officials in the Philippines enlisted “troll armies” to defame their political opponents while politically “gaslighting” many Filipinos into supporting them.
Ironically, these disinformation peddlers hide behind the banner of freedom of speech to discredit their critics and blur the definition of free expression.
Everyone’s got their own take on free speech, even though it’s a cornerstone of democracy. This divergence in understanding creates tension: one party considers a piece of speech acceptable while another perceives it as harmful. Take the ongoing culture war in the United States as an example. Conservatives are big on freedom of speech, believing government-compelled speech hinders individual liberty; that is, no government or ideology should clamp their tongues (as ironic as that sounds). Liberals, meanwhile, believe in free speech but also back rules to stop hate speech, fake news, and discrimination.
Consider the debate over the use of preferred pronouns: to shield transgender people from discrimination, some liberals push for mandatory pronoun use, like New York City’s policy requiring employers to use employees’ self-identified pronouns (Local Law No. 3). Conservatives, who also believe in religious freedom, are fighting back against these rules, often crossing into harmful territory through incendiary rhetoric, defamation, or disinformation.
These differing views on “free speech” show how it can be used to shield bad opinions and even harmful acts. People who twist free speech hurt others and break down honest conversation; without rules, it becomes a tool to lie, control, and manipulate narratives.
To battle disinformation and correct inaccuracies, social media platforms implemented various content moderation strategies, particularly fact-checking by third parties or independent organizations.
Facebook teamed up with third-party fact-checkers certified by the International Fact-Checking Network (IFCN) to rate the accuracy of viral posts and flag misleading ones. Twitter launched Birdwatch, later renamed Community Notes, a system for adding context to potentially misleading tweets. YouTube employed automated systems to flag content, which human reviewers would remove if it violated the platform’s policies. Reddit, meanwhile, used decentralized moderation, letting communities manage content with platform support under the policies of each subreddit’s admins or moderators.
While critics argue that content moderation may encroach on free speech, these platforms defend the practice as essential to maintaining informed public dialogue and curbing the spread of harmful falsehoods.
The aspiration to safeguard truth and facts in the content we consume underscores disinformation’s deliberate effort to influence public opinion for political gain. People are therefore likely to perceive a platform’s content moderation through a partisan lens: disinformation spread under the banner of free speech and the corrective efforts that follow both operate within the same political landscape.
Why is fact-checking in content moderation being replaced?
This same political landscape has ushered in the downfall of fact-checking in content moderation. Conservative groups have criticized traditional fact-checking efforts for allegedly favoring certain political viewpoints and disproportionately targeting right-leaning content. Their animosity stems from the belief that third-party fact-checkers’ flagging produces false positives, infringing on their freedom of speech through content removal or deboosting.
Facing these criticisms, many social media platforms are reevaluating their content moderation strategies. January 2025 saw Meta’s third-party fact-checking program come to an end. Meta’s founder, Mark Zuckerberg, claims that a cultural shift in the United States pushed him to prioritize freedom of speech over third-party fact-checking:
“The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech… So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms.”
Mark Zuckerberg, feeling the heat from political and cultural changes, is trying to smooth things over with conservatives and the recently elected president, who had once threatened to imprison him. Meta’s founder not only moved his content moderation teams to Texas and donated to the inaugural fund, but also proposed replacing fact-checking with a crowdsourced approach mirroring X’s Community Notes system.
YouTube has followed X’s content moderation approach too. Eligible contributors are invited to add notes via email or Creator Studio; approved notes appear on information panels, with approval determined by bridging-based algorithms.
Reddit co-founder Alexis Ohanian calls Mark Zuckerberg and Elon Musk’s ditching of third-party fact-checkers “very pragmatic,” and Reddit might adopt something like Community Notes soon. At the Web Summit, he said Reddit will keep its strict rules (like banning revenge porn) while still letting moderators run their subreddits. Reddit users could team up or use artificial intelligence to fact-check posts and vote on added context, making it a community-run moderation system. Moreover, users would get to customize their own algorithm, choosing what content they see and how it’s moderated based on their tolerance levels.
What makes crowdsourced content moderation appealing to these social media founders?
Social media platforms are increasingly adopting crowdsourced content moderation systems like Community Notes for several reasons. Such systems let users work together to add helpful details to unclear posts, ensuring accuracy through collaboration. Users, acting as contributors, may attach clarifying or corrective notes to posts they believe are misleading; each note then goes through a rating system. When a contributor’s note is rated more helpful and accurate than others by raters holding varied perspectives, it is taken to represent a cross-viewpoint consensus and is made publicly visible.
Unlike the opaque fact-checking of third parties, these discussions and their data are openly available, fostering greater transparency. Moreover, Meta will be lifting restrictions on political content about immigration and the like, though the platform will remain severe on content involving child pornography, revenge pornography, drugs, and similar cases.
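To make that rating mechanic concrete, here is a minimal Python sketch of how a bridging-based check might decide whether a note ships. This is an illustration only, not X’s actual algorithm: the group labels, the `min_ratings` and `threshold` values, and the status strings are our own hypothetical assumptions, and X’s production system infers viewpoint clusters from each rater’s history via matrix factorization rather than using explicit groups.

```python
from collections import defaultdict

# Toy sketch of "bridging-based" note scoring: a note is surfaced only when
# raters from otherwise-disagreeing groups ALL find it helpful.
# Hypothetical simplification: the real system has no explicit group labels.

def note_status(ratings, min_ratings=5, threshold=0.7):
    """ratings: list of (rater_group, is_helpful) pairs for one note."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)

    total = sum(len(votes) for votes in by_group.values())
    if total < min_ratings or len(by_group) < 2:
        return "NEEDS MORE RATINGS"  # not enough cross-group signal yet

    # Every group must clear the helpfulness bar, so one-sided mass-rating
    # by a single group cannot push a note through on its own.
    if all(sum(votes) / len(votes) >= threshold for votes in by_group.values()):
        return "HELPFUL (shown publicly on the post)"
    return "NOT HELPFUL (stays hidden)"

# A note both clusters rate helpful gets published...
print(note_status([("left", True), ("left", True),
                   ("right", True), ("right", True), ("right", True)]))
# ...while a note only one cluster loves does not.
print(note_status([("left", True)] * 4 + [("right", False)]))
```

The design point worth noticing is the all-groups requirement: a note that only one side loves never ships, which is why bridging is harder to brigade than simple upvoting, though, as the flaws below show, coordinated raters can still game it in practice.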
As social media platforms shift toward crowdsourcing, much has been made of its benefits for brands and marketers, such as:
Increased Transparency
The community helps verify content, creating accountability and trust for brands. People like brands that are honest, accountable, and willing to listen to feedback.
Increased Community Engagement
If users help moderate, the community gets more involved with brands. Brands engaging with Community Notes can build loyalty and community by giving users a sense of benevolence, if not faux-ownership (e.g., Nestlé took down its bear mascot after users pointed out its resemblance to the Pedobear meme).
Cost-Effective Moderation
Working with fact-checkers and building those teams is expensive and hard to scale, especially for smaller companies. It’s cheaper for platforms and advertisers when users share the work through crowdsourcing.
Fewer Censorship Risks
With Community Notes, users are in control, making moderation feel less like big-brother censorship. It’s also safer and easier for brands to run campaigns: no more arbitrary content takedowns messing with their ads (unless the ads violate the platform’s policies).
Conversation Monitoring
Crowdsourced moderation lets brands keep tabs on chatter about their products or services. Reading the notes on posts about their brand helps companies spot new trends, correct false information, and engage better with customers.
The appeal of crowdsourced content moderation systems lies in their potential to create a more transparent, balanced, and cost-effective approach to managing content, while also promoting user engagement and protecting freedom of expression.
However appealing, this approach may actually pose a big problem for brands in the long run.

What does crowdsourced content moderation mean for marketing?
Since social media platforms will rely on users to moderate, the marketing landscape faces new uncertainties, raising questions about brand safety, messaging control, and the true cost of free speech. When marketing your brand, stay vigilant about several flaws of the adopted Community Notes-style system:
Coordinated Disinformation
Community Notes can be rigged, distorting which notes get approved. Coordinated groups can bias the system by mass-rating one politically charged note above the others, defeating neutral fact-checking (e.g., pro-Israel trolls reportedly mobbed X’s Community Notes to praise Israel while defaming Palestine).
Inconsistent Coverage
Community Notes needs active, informed contributors to be effective. In topics with fewer active or expert contributors, misinformation slips through unchallenged. For example, the Center for Countering Digital Hate (CCDH) reported that 74% of misleading US election posts lacked accurate Community Notes corrections, suggesting an absence of expert intervention.
Delayed Correction
With third-party fact-checkers removed and corrections dependent on whoever happens to contribute, slow cross-political agreement delays corrective responses. Delays allow misinformation to spread, reducing crowdsourcing’s effectiveness. One example is the delay Community Notes faced during the 2023 Israel-Hamas war: regarding a fake White House press release about the St. Porphyrius Orthodox Church in Gaza, an NBC study showed most posts lacked notes, with only 8% carrying published notes and 26% carrying unpublished ones, while 68% of those who viewed the posts failed to receive the approved notes in time.
Poor Note Visibility
Many Community Notes never reach enough agreement to go public. That leaves tons of fake posts uncorrected, so disinformation keeps circulating.
While the Community Notes-style system is designed to uphold freedom of expression, certain groups can misuse that freedom, a risk heightened by the absence of expert oversight and by corrective notes being trapped indefinitely in the consensus process; just imagine your government passing a law and how excruciatingly long that actually takes.
Ending fact-checking by experts can only mean harm to society and users, and it reveals several drawbacks for brand marketing, such as:
Increased Disinformation
Ending traditional fact-checking could unleash a flood of false stories, potentially harming the brands connected to them.
Implicated Disinformation
Some brands follow trends or piggyback on popular posts. If that post turns out to be false, the misled brand and its followers have spread disinformation too, and the brand blows its credibility with its followers.
Inconsistent Content Moderation
Because moderation depends on varied users with their own inherent biases, outcomes will be inconsistent: brands may get unfairly flagged, or harmful content near them may be missed, hurting user trust and making the brand’s messaging harder to keep consistent.
Advertising Risks
No fact-checking means more harmful content, which scares advertisers and makes ad safety a problem. Without professionals managing your ads, they could end up next to disinformation that violates the platform’s policies, eventually hurting your brand.
Brand Imposters
Deceptive posts by imposters can gain traction, leading to reputational harm for the affected brands (e.g., X’s Blue verification controversy). Because agreeing on a corrective note takes so long, the fake news has already spread by the time it appears, and people may have come to distrust what the official brand says.
In light of these challenges, brands must proactively adapt their strategies to safeguard their reputation and maintain consumer trust.
What’s next for brand marketing during this crowdsourcing era?
The digital world is riskier for brands now than ever. Disinformation, misleading narratives, and even flawed community-driven fact-checking tools like Community Notes can quickly damage a brand’s reputation. Community Notes is supposed to be transparent, but notes can get twisted or filled with wrong information, causing unexpected public confusion. If your brand wants to protect its image, don’t rely on the community alone to control your messaging.
Ultimately, it seems freedom of speech is winning over third-party fact-checking in today’s social media landscape. But it doesn’t have to be a win-lose situation. They can co-exist and work together. Think of it like this: free speech opens the floor, but third-party fact-checking keeps the conversation honest and grounded. Brands that understand how to blend both won’t just survive—they’ll lead the conversation.
That’s where we, Proper Digital, come in! Trust us as your keen-eyed guide through this chaotic marketplace. We’ll ensure your brand doesn’t get lost in the noise or fall victim to the peddlers abusing freedom of speech. We help brands walk that delicate line where freedom and facts meet, ensuring your voice remains clear, credible, and trusted.
We’re always on guard, separating truth from lies, and making sure your message is palatable, strong, and ahead of fake news. Because in this media marketplace, the question isn’t if disinformation is lurking. It’s whether you’ve got the right partner to make sure you don’t walk away fooled. Reach out to us today to keep your brand protected and positioned for success.