Microsoft Corporation’s MSFT Bing AI Image Creator has come under scrutiny for generating images of the popular video game character Kirby in scenarios reminiscent of terrorism, prompting discussions about the limitations and consequences of artificial intelligence-generated content.
What Happened: Despite Microsoft’s efforts to implement content filters and restrictions, individuals have managed to manipulate the AI into generating images of Kirby, the lovable pink character from Nintendo, piloting airplanes towards skyscrapers in what appears to be a grim reinterpretation of the tragic events of September 11, 2001.
See Also: Tom Hanks Exposes AI Clone In Misleading Ad, Says ‘The Polar Express’ Was Early Warning Sign
The problem lies in the inherent nature of AI.
While it can autonomously generate images based on text inputs, it cannot comprehend context and intent. Simply put, even with bans on keywords like “9/11,” “Twin Towers,” and “terrorism,” users can craft alternative descriptions that bypass these filters, leading to unsettling and potentially offensive imagery, reported Kotaku.
It is important to note that while these AI-generated images do not actually depict 9/11, their uncanny resemblance to the attacks can be deeply offensive to many.
A Microsoft spokesperson responded to Kotaku, stating, “We have large teams working on the development of tools, techniques and safety systems that are aligned with our responsible AI principles. As with any new technology, some are trying to use it in ways that was not intended, which is why we are implementing a range of guardrails and filters to make Bing Image Creator a positive and helpful experience for users.”
Why It’s Important: The incident highlights the ongoing challenges posed by AI-generated content. Because AI cannot discern the broader implications and sensitivities surrounding certain subjects, the responsibility for using the technology appropriately falls on its users.
The incident has also fueled concerns about misinformation, particularly during elections.
While both Democratic and Republican campaign teams are experimenting with AI tools such as OpenAI’s ChatGPT for their digital operations, some observers worry the technology could be used to spread false information about matters like when and where to vote.
Read Next: Microsoft’s Bing Chat Lets Users Reap Benefits Of OpenAI’s DALL-E 3 For Free
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.