Meta Addresses Content Moderation Concerns
Meta apologised on Wednesday for graphic and violent content that had been recommended on Instagram Reels, acknowledging an error that caused users to encounter disturbing videos in their feeds.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson said.
Users around the world reported a surge of short-form videos depicting horrific content, including violence and killings; many carried a “sensitive content” label yet still appeared frequently in recommendations.
Meta, the parent company of Facebook, Instagram, and Threads, says it proactively removes “particularly violent or graphic” content and applies warning labels to other material. The company also restricts users under 18 from accessing such content.
In January, Meta replaced third-party fact-checkers on its US platforms with a community notes model for flagging content. Joel Kaplan, the company’s chief global affairs officer, said Meta would “simplify” its content policies and remove restrictions on topics such as immigration and gender that the company considers out of step with mainstream discourse.
Since 2016, Meta has faced repeated controversies over its content moderation practices, including criticism that its platforms have facilitated illegal drug sales. Last year, founder Mark Zuckerberg testified at a congressional hearing on online safety measures for children.
Globally, Meta’s insufficient content moderation and its reliance on external civil society groups to report misinformation have been linked to escalating violence in countries including Myanmar, Iraq, and Ethiopia.
Zuckerberg’s shift on content moderation mirrors moves made by Elon Musk on X, the social media platform Musk acquired in 2022.