Meta Under Fire for Approving Islamophobic Ads during India’s Election

May 22, 2024 | By Sharp Media

An investigation has revealed that Meta, the parent company of Facebook and Instagram, approved advertisements inciting violence against Muslims during India’s recent general election. The findings have sparked widespread outrage and raised concerns about Meta’s ability to enforce its own policies and prevent the spread of hate speech.

The investigation, conducted by Ekō and India Civil Watch International, tested Meta’s ad review process by submitting 22 inflammatory advertisements containing hate speech targeting Muslims, content that Meta’s stated community standards explicitly prohibit. Shockingly, 14 of these ads were approved within 24 hours, and three more were approved after minor modifications.

The approved advertisements used language intended to incite hatred against Muslims in India, propagated false claims about political leaders, and called for executions. Only five of the 22 ads were rejected, exposing a significant lapse in Meta’s content moderation process.

This situation has raised serious questions about Meta’s effectiveness in enforcing its policies, especially in the context of rising anti-Muslim sentiment in India. The company’s failure to prevent the approval and dissemination of these ads has drawn sharp criticism from various quarters.

Meta has faced similar criticism in the past regarding its handling of Islamophobic content. This latest incident has amplified calls for greater accountability from social media companies in controlling hate speech and disinformation on their platforms. Critics argue that Meta’s inability to effectively address these issues not only violates its own policies but also contributes to a climate of fear and hostility.

The approval of these advertisements during a critical time like a general election is particularly troubling. Elections are periods when disinformation and inflammatory rhetoric can have especially dangerous consequences, potentially inciting real-world violence and exacerbating societal divisions. Meta’s failure to intercept these ads highlights a significant vulnerability in its platform that can be exploited to spread harmful content.