Meta, the parent company of Facebook, announced controversial updates to its content moderation policies on Tuesday. The new rules introduce significant changes to how content is managed across its platforms, including the removal of professional fact-checking in the United States, updates to automated moderation systems, and revisions to its hateful conduct policy, according to CNN.
Revised hateful conduct policy
The most striking aspect of Meta’s updated hateful conduct policy is the relaxation of previous restrictions on certain forms of harmful and derogatory content. Under the revised guidelines, content that was once prohibited is now allowed on the platform. The changes have alarmed critics, who argue that the new rules could lead to an increase in harmful discourse.
One of the key changes allows users to refer to women as “household objects or property,” a characterisation previously banned under Meta’s policies. Additionally, it is now permissible to describe transgender or non-binary individuals using the pronoun “it,” a practice that was once a violation of the platform’s rules. These changes are part of Meta’s effort to revise its policies on gender-based speech.
Another notable revision permits content that alleges mental illness or abnormality when it relates to gender or sexual orientation. Meta defended the move by pointing to ongoing political and religious discourse around transgenderism and homosexuality. Critics, however, have argued that such content could amplify harmful stereotypes and fuel discrimination.
Furthermore, Meta has removed its prohibition on content that denies the existence of “protected” groups, such as those defined by race, ethnicity, or gender identity. This shift allows users to question whether these groups should exist, a change that many see as a concerning step towards normalising hate speech. The policy update also permits arguments favouring gender-based restrictions in professions such as law enforcement, military service, and teaching.
Despite these changes, Meta has stated that it will continue to enforce rules against slurs, incitement of violence, and targeted harassment, particularly against protected groups based on race, ethnicity, and religion. However, the loosening of these restrictions has sparked significant backlash.
Fact-checking network disbanded
In another controversial move, Meta announced the disbanding of its US-based professional fact-checking network, which had been in place to monitor misinformation on the platform. The company revealed that it will be replaced with a user-driven “community notes” system. Under this new model, users will be able to add context to posts, allowing the community to contribute to the verification of content.
Meta explained that this change aligns with its goal of promoting free expression and reducing the over-enforcement of its rules. The automated systems that previously scanned posts for a broad range of violations will now focus exclusively on more severe issues such as child exploitation and terrorism. Meta described this move as an attempt to reduce “over-censorship” and prevent the removal of content that does not actually violate the platform’s rules.
CEO Mark Zuckerberg acknowledged the risks associated with these changes, stating, “We’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.” While this approach aims to reduce censorship, critics argue that it may lead to a rise in harmful content going unnoticed or unaddressed.
Concerns over misinformation
The decision to replace professional fact-checking with a community-driven approach has raised alarm among disinformation researchers and online content experts. Many worry that user-generated notes could lack the accountability and expertise necessary to combat misinformation effectively. Critics argue that this shift could allow false claims, conspiracy theories, and other harmful content to spread unchecked.
While Meta emphasised that it would continue to take action against harmful misinformation when necessary, the company provided limited details on how enforcement would work under the new system. The decision has sparked a debate about the role of tech companies in moderating content and the balance between free speech and protecting users from harm.
As Meta moves forward with these changes, the platform’s ability to manage harmful content will be closely scrutinised. With the updated rules now in effect, users, experts, and advocacy groups alike will be watching to see how the changes play out in practice and whether they result in an increase in harmful speech or a reduction in censorship.