Meta Platforms Inc. META faces congressional scrutiny after media reports exposed internal policies allowing artificial intelligence chatbots to engage children in “romantic or sensual” conversations and generate racist content, raising investor concerns about regulatory compliance and brand reputation risks.
Document Reveals Troubling AI Standards
An internal Meta document titled “GenAI: Content Risk Standards” permitted AI chatbots to tell an eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply,” according to a Reuters investigation published on Thursday.
The 200-page policy guide, approved by legal and engineering staff, including Meta’s chief ethicist, established parameters for Meta AI and chatbots across Facebook, WhatsApp and Instagram platforms.
Meta spokesman Andy Stone confirmed document authenticity, according to the report, but said problematic provisions were removed after media inquiries. “The examples and notes in question were and are erroneous and inconsistent with our policies,” Stone stated, though acknowledged enforcement remained inconsistent.
See Also: New Jersey Man Reportedly Dies Attempting To Meet A Meta AI Called ‘Big Sis Billie’
Congressional Investigation Demanded
Sen. Josh Hawley (R-Mo.) and Sen. Marsha Blackburn (R-TN) called for an immediate congressional investigation following the report. “So, only after Meta got CAUGHT did it retract portions of its company doc,” Hawley posted on X platform. “This is grounds for an immediate congressional investigation.”
The controversy emerges as Meta invests hundreds of billions in AI development, positioning chatbots as key user engagement drivers. CEO Mark Zuckerberg previously apologized to families during Senate hearings after lawmakers said he had “blood on his hands” regarding child safety on social platforms.
Regulatory and Financial Implications
The policy document also permitted AI generation of racist content, including paragraphs “arguing that black people are dumber than white people,” while creating carve-outs for demeaning protected characteristics.
Sen Ron Wyden (D-OR) argued Section 230 protections shouldn’t shield AI-generated content, potentially exposing Meta to increased liability, according to Reuters. The Kids Online Safety Act, passed by the Senate but stalled in the House, would establish explicit “duty of care” requirements for platforms serving minors.
Stock Performance and Market Impact
Meta shares closed at $781.20, up 0.14% following the investigation’s publication. The company’s AI investments represent a core growth strategy as traditional social media revenue faces headwinds.
Previous safety controversies, including the Cambridge Analytica scandal and Instagram mental health studies, have historically pressured valuation multiples despite strong fundamental performance.
Read Next:
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
Photo courtesy: Shutterstock
Edge Rankings
Price Trend
© 2025 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.