The rise of deepfake videos of political leaders in Asia has sparked concerns about potential election interference, raising questions about the region's readiness to combat this form of misinformation.
What Happened: Deepfake videos of political leaders are becoming increasingly prevalent in Asia. These videos could significantly impact the upcoming global elections, with at least 60 countries and over four billion people set to vote for their leaders and representatives in 2024, CNBC reported.
According to a report by Sumsub in November, the global incidence of deepfakes increased tenfold from 2022 to 2023. In the Asia-Pacific (APAC) region, deepfake occurrences skyrocketed by 1,530% during the same timeframe.
The report cited several instances of deepfake videos being used to influence elections. In Indonesia, a deepfake video of the late President Suharto endorsing a political party went viral ahead of the Feb. 14 elections. Similar incidents were reported in Pakistan and the U.S., raising concerns about the potential impact of deepfakes on the democratic process.
Simon Chesterman, Senior Director of AI Governance at AI Singapore, warned that Asia is ill-prepared to address the threat of deepfakes in elections, citing a lack of regulation, technology, and education in the region.
“Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there's time to push it back in,” Chesterman said.
CrowdStrike, a cybersecurity company, highlighted in its 2024 Global Threat Report that with numerous elections scheduled this year, there is a high probability of nation-state actors, including those from China, Russia, and Iran, engaging in misinformation or disinformation campaigns to instigate disruption.
In February, 20 prominent technology firms, including Microsoft, Meta, Google, Amazon and IBM, along with artificial intelligence startup OpenAI and social media platforms such as Snap, TikTok, and X, pledged a collective effort to combat the deceptive use of AI during this year’s elections.
Why It Matters: The rise of deepfake technology has been a cause for concern across various sectors. In a recent incident, fraudsters used deepfake technology to steal $25 million in a sophisticated corporate scam. The criminals impersonated the company’s CFO and other staff members during a video call, highlighting the potential for deepfakes to be used for financial fraud.
Meanwhile, social media platforms have been grappling with the spread of deepfake content. In response to the circulation of explicit AI-generated images of Taylor Swift, Elon Musk’s social media platform, X, temporarily halted searches for the pop icon. The incident underscored the challenges tech companies face in curbing the spread of deepfake content.
Regulation of deepfake content has also been a contentious issue for social media platforms. In a recent case, Meta’s Oversight Board urged the company to revisit its policy on manipulated media, describing the rules as “incoherent and confusing to users.” The board recommended extending the policy to cover audio and video content, regardless of AI usage, to improve transparency around deepfake content.
Deepfake AI | Photo by Sander Sammy on Unsplash
Engineered by Benzinga Neuro, Edited by Kaustubh Bagalkote
The GPT-4-based Benzinga Neuro content generation system leverages the extensive Benzinga Ecosystem, including native data, APIs, and more, to create comprehensive and timely stories for you.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.