Our Own Worst Enemy: AI A Lesser Threat To Humanity Than Humans Are

Over the weekend, Tesla Inc. (TSLA) CEO Elon Musk co-signed a letter with industry experts calling for regulation of artificial intelligence development and warning of its existential threat.

“However, Musk and the other AI debaters underestimate the biggest threat to humanity in the AI era: humans,” Loup Ventures analyst Doug Clinton wrote in a Monday note.

Clinton considers the human response to AI to be as dangerous as any hostile system.

Our Inner Demons

Even if AI fulfills its best-intended purposes and leads to widespread prosperity, it would establish a radically different world: rendering manual and some intellectual labor obsolete, divorcing human value from work and forcing a new understanding of the meaning of life.

“Humans will need to find new purpose outside of work, likely in the uniquely human capabilities of creativity, community and empathy, the things that robots cannot authentically provide,” Clinton wrote.

The disruptions could be cataclysmic. In fact, Clinton’s worst-case scenario results in fear of a degree similar to that driving recent riots: fear that fosters hate and rebellion against robots and their human allies, that leads people to unite behind anti-AI leaders and that would, in his words, “leave us with a world looking more like the Walking Dead than utopia.”

The possibilities call for preparation.

“We need to prepare humans for a post-work world in which different skills are valuable,” he wrote. “We need to consider how to distribute the benefits of AI to the broader population via a basic income. We need to transform how people think about their purpose.”

The Real Danger Of AI

While Clinton concedes a “non-zero chance” that AI could intentionally or inadvertently destroy humanity, he considers that outcome less likely than human-driven destruction.

For one, AI capable of malevolence remains a distant prospect. Such technology implies intent built on human-level intelligence, which is, by Clinton’s estimate, several decades away.

At the same time, human developers have successfully shut down harmless AI systems that strayed into unexpected behavior, setting a positive procedural precedent for future issues with more advanced AI.

“The warning bells on AI are valid given the severity of the potential negative outcomes (even if unlikely), and some form of AI regulation makes sense, but it must be paired with plans to make sure we address the human element of the technology as well,” Clinton said.

Related Links:

How The U.S., China And Russia Are Moving Toward Weaponizing Artificial Intelligence

What Is Machine Learning? Deep Learning? Here's Your AI Glossary
