Jenna Ortega revealed she deleted her Twitter (now known as X) account after receiving artificial intelligence-generated explicit images of herself as a child.
What Happened: Ortega, 21, shared her experience during a conversation on The New York Times’s “The Interview” podcast. She expressed her disdain for AI, stating she was sent AI-generated images of herself as a child on Twitter.
“I hate AI,” Ortega said when asked about the technology, which can create realistic images and videos, including deepfake explicit images. She described the experience as terrifying and corrupt.
Ortega, known for her roles in “Stuck in the Middle” and “Jane the Virgin,” said she had been advised to join Twitter to build her public image, but the first direct message she ever received, at age 12, was an unsolicited explicit photo.
She said she deleted the app “about two, three years ago” after an influx of disturbing images that left her feeling uncomfortable and disgusted.
“Did I like being 14 and making a Twitter account because I was supposed to and seeing dirty edited content of me as a child? No. It's terrifying. It's corrupt. It's wrong,” she said in the interview.
In response to such incidents, Rep. Joseph Morelle (D-N.Y.) proposed the Preventing Deepfakes of Intimate Images Act in 2023, aiming to criminalize the sharing of digitally altered explicit images. The bill is currently with the House Committee on the Judiciary.
Why It Matters: The rise of AI-generated explicit content has prompted significant actions from tech giants and policymakers. In April, Meta Platforms Inc. expanded its labeling policy for AI-generated content across Facebook, Instagram, and Threads, following recommendations from its Oversight Board. This policy now includes photos and audio, not just videos.
In May, OpenAI introduced a tool to detect images created by its DALL-E generator, along with watermarking techniques to better identify AI-generated content, and published Model Spec, a framework for guiding how its future AI models should behave.
By August, Google had implemented measures to combat explicit deepfake content in its search results. The tech giant introduced new online safety features to simplify the removal of explicit deepfakes and prevent them from ranking highly in search results.
However, not all AI advancements have been positive. In mid-August, Elon Musk’s Grok AI chatbot faced backlash for allowing users to generate offensive images, including inappropriate depictions of political figures and celebrities. This controversy highlighted the ongoing challenges in regulating AI-generated content.
Ortega is not the first to face such issues. Earlier this year, Taylor Swift was targeted with similar AI-generated explicit content on Twitter, leading to temporary search blocks for her name.
Last week, former President Donald Trump stirred controversy by sharing manipulated images on his Truth Social platform, falsely implying that pop star Taylor Swift and her fans were endorsing his presidential campaign.
Trump's post, captioned "I accept!", featured an AI-generated campaign poster of Swift wearing a patriotic top hat, with the message, "Taylor Wants You To Vote For Donald Trump."
This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.