In a significant revelation just weeks before the U.S. presidential election, TikTok approved advertisements containing election-related disinformation, despite its longstanding ban on political ads, according to a report published Thursday by Global Witness, a nonprofit watchdog group. The report raises concerns about the platform’s ability to adequately enforce its own policies against election misinformation, particularly at a time when accurate information is crucial to the democratic process.
Global Witness Investigation
Global Witness, which monitors technology and environmental issues, conducted an investigation to test how well major social media companies are handling the spread of election misinformation. The group submitted a series of fake political ads to platforms like TikTok, Facebook, and YouTube, with content that included outright falsehoods about U.S. voting procedures and election security. The goal was to determine whether the platforms could effectively detect and block misleading information before it reached users.
The findings from the investigation, which mirrored a similar effort by the group two years ago, indicated some improvement in content moderation across platforms. However, TikTok’s performance raised red flags. Of the eight fake ads submitted to TikTok, four were approved by the platform’s moderation system, despite containing disinformation.
TikTok’s Policy and Failures
TikTok has maintained a ban on political advertising since 2019, part of its effort to keep its platform free from political influence and disinformation. Nevertheless, the Global Witness report revealed cracks in TikTok’s enforcement of this policy. The ads submitted by Global Witness contained dangerous and false claims about the U.S. election, including suggestions that Americans could vote online and misinformation designed to suppress voter turnout, such as a false claim that voters must pass an English test before casting a ballot. Some ads even promoted violence or threatened election workers.
Although these ads never actually appeared on TikTok — Global Witness withdrew them before they went live — the fact that they were approved in the first stage of moderation is alarming. “Four ads were incorrectly approved during the first stage of moderation, but did not run on our platform,” said TikTok spokesperson Ben Rathe. “We do not allow political advertising and will continue to enforce this policy on an ongoing basis.”
Rathe’s statement suggests TikTok is aware of the gaps in its system and intends to improve enforcement. However, the approval of such disinformation ads ahead of a critical election raises questions about whether the platform’s moderation processes are robust enough, particularly when confronted with more subtle or sophisticated forms of misinformation.
Facebook and YouTube Perform Better
Other social media platforms also underwent the Global Witness test, with varying results. Facebook, owned by Meta Platforms Inc., demonstrated a stronger ability to detect misleading political ads, approving only one of the eight disinformation ads submitted by Global Witness. In a statement, Meta downplayed the findings, stating, “While this report is extremely limited in scope and as a result not reflective of how we enforce our policies at scale, we nonetheless are continually evaluating and improving our enforcement efforts.”
YouTube, owned by Google, performed the best among the tested platforms. While YouTube initially approved four ads, none of them were published because the platform requested additional identification from the Global Witness testers before allowing the ads to go live. When the testers did not provide the required information, YouTube paused their account, preventing the disinformation from reaching the platform’s users. Global Witness acknowledged YouTube’s response but noted that it was unclear whether the ads would have been approved if the testers had provided the necessary identification.
Google did not immediately respond to requests for comment on the report.
The Bigger Picture
This investigation highlights the ongoing challenges social media platforms face in balancing free speech with the need to prevent the spread of false and harmful information, particularly during high-stakes events like elections. Although companies often enforce stricter policies for paid advertisements than for organic content shared by users, this investigation suggests that there are still significant weaknesses in their ad approval processes.
The fake ads submitted by Global Witness were not subtle or ambiguous in their intent. They included blatant falsehoods about voting procedures and were designed to mislead, suppress votes, or incite violence. The fact that even some of these ads were approved suggests that moderation systems are still struggling to keep pace with the tactics used by those who spread election disinformation.
As the U.S. presidential election approaches, the findings from Global Witness underscore the importance of continued vigilance by social media platforms in moderating content and enforcing their policies on political advertising. In an era where online platforms play an outsized role in shaping public discourse, the stakes could not be higher. For platforms like TikTok, Facebook, and YouTube, ensuring that disinformation is kept off their services is not just about upholding their policies but about protecting the integrity of democratic processes.