OpenAI Fears 'Users Might Form Social Relationships' With AI Due To ChatGPT's Human-Like Voice Mode

OpenAI has expressed concerns about the potential emotional reliance of users on its new ChatGPT voice mode, which closely mimics human speech patterns.

What Happened: OpenAI, the company behind ChatGPT, has raised concerns about the possibility of users forming emotional bonds with the AI, potentially leading to a reduced need for human interaction.

The Microsoft Corp.-backed company fears that this could affect healthy relationships and lead to an over-reliance on AI, given its potential for errors.

The report, released on Thursday, highlights a broader risk associated with AI. Tech companies are rapidly rolling out AI tools that could significantly impact various aspects of human life, without a comprehensive understanding of the implications.

“Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report said.

It added, “Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

The report highlights the risk of users trusting the AI more than they should, given its potential for errors. OpenAI plans to continue studying these interactions to ensure the technology is used safely.

Why It Matters: The rise of AI has been a topic of concern for various experts. A Pew Research survey found that 52% of Americans are more concerned than excited about the increased use of AI. This wariness coincides with an uptick in awareness about AI, with individuals who are most aware expressing more concern than excitement about AI.

AI’s potential negative effects have also been highlighted in the context of cybersecurity. Sakshi Mahendru, a cybersecurity expert, emphasized the need for AI-powered solutions to combat the evolving landscape of cyber threats.

Moreover, the phenomenon of AI “hallucination,” where AI generates nonsensical or irrelevant responses, remains a significant issue. Even Tim Cook, CEO of Apple Inc., admitted in a recent interview that preventing AI hallucinations is a challenge.

This story was generated using Benzinga Neuro and edited by Kaustubh Bagalkote
