WhatsApp AI Feature Under Fire For Generating Images Of Children With Guns In Palestine-Related Searches

In an alarming incident, WhatsApp, a messaging platform owned by Meta Platforms Inc. (META), has been found to generate inappropriate images of children brandishing firearms when users perform searches related to Palestine, as reported by The Guardian.

What Happened: The Guardian’s investigation found that WhatsApp’s artificial intelligence (AI) feature generated images of a gun or a boy with a gun in response to searches for “Palestinian,” “Palestine,” or “Muslim boy Palestinian.” 

The results were not consistent and varied from user to user, but the presence of such images was confirmed via screenshots and independent tests.

In stark contrast, searches for “Israeli boy” resulted in images of children engaged in benign activities such as playing soccer or reading, while “Israel army” showed illustrations of smiling, praying soldiers without any firearms.


According to an insider, Meta’s employees have raised concerns about this issue internally. 

The AI feature, designed to “create a sticker” and “turn ideas into stickers with AI,” has come under fire amid accusations of bias and censorship leveled against Meta by Instagram and Facebook users.

These users, who support Palestinians, allege unfair moderation policies. 

Meta has denied these claims. 

Kevin McAlister, a Meta spokesperson, acknowledged the issue, saying, “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems,” adding, “We’ll continue to improve these features as they evolve and more people share their feedback.”

Meanwhile, Australian Senator Mehreen Faruqi has called for an investigation into Meta over the “racist and Islamophobic imagery” generated by its AI. 

Why It Matters: This incident adds to the ongoing tension surrounding the Israel-Hamas conflict. Earlier, it was reported that more than 80% of American voters fear that the U.S. could be dragged into the ongoing conflict. 

The WhatsApp AI feature’s contentious results may further fuel these concerns. 

Moreover, Meta is not the first tech giant to face backlash over its handling of the conflict.

Previously, it was reported that Apple paused its internal Slack channels for Muslim and Jewish employees following the removal of posts related to the Israel-Hamas conflict.

These developments underscore the ongoing challenges tech companies face in ensuring fair and unbiased algorithms, especially in the context of global conflicts.

Photo by Christian Wiediger on Unsplash




Engineered by Benzinga Neuro, Edited by Ananya Gairola


The GPT-4-based Benzinga Neuro content generation system leverages the extensive Benzinga Ecosystem, including native data and APIs, to create comprehensive and timely stories.


Market News and Data brought to you by Benzinga APIs