We live in a world where facts dissolve into fiction faster than ever. A new plague of deception, fueled not by biology but by algorithms, is spreading globally. Call it the "infodemic": a storm of AI-generated lies, hyper-realistic deepfakes and synthetic documents, many originating from Chinese tech labs. As these AI tools grow more sophisticated at deception, they corrode trust on multiple fronts, posing risks to democracy, art, our finances and even love. With elections looming in more than 50 nations in 2025, the stakes are existential.
Open-source AI's dark side
Platforms like Midjourney and OpenAI's tools currently have safeguards in place to prevent misuse, keeping content such as deepfakes out of circulation as much as possible. Open-source models, however, can be fine-tuned and weaponized by bad actors, empowering scammers to spread disinformation, deepfakes and fraud.
When AI steals the spotlight
AI scams are perhaps most publicized in the entertainment industry. Hollywood in particular now battles synthetic impostors on a regular basis. Public figures must contend with their likenesses being ripped off for someone else's gain, with deepfakes infiltrating platforms like X daily, whether to offer fake brand endorsements or to manufacture scandals. This ease of creation, coupled with the vast reach of social media, is a potent combination for spreading misinformation and scams.
But not everyone sees this as a loss. Take Grimes, who leases her voice to AI clones for royalties, a "deepfake-as-a-service" model. This raises a grim question: Do artists have to auction their identities to stay relevant?
Algorithmic love lies
Famous artists aside, none of us are safe, particularly when it comes to finding real love. Forget poorly photoshopped profiles: today's catfishing involves AI-generated personas convincing enough to impersonate celebrities.
With Chinese models like DeepSeek offering reasoning capabilities comparable to ChatGPT's, and open-source generators like Flux letting users produce deepfakes with remarkable accuracy, it is becoming harder for even the most digitally savvy individuals to see through a lie.
According to the FBI, last year 19,000 Americans fell victim to romance scams, losing $1.3 billion. Solutions like watermarking AI content exist, but without global enforcement, fraudsters thrive in the gaps.
How many more Brad Pitt deepfake scams do we need to live through before we do something about it? Platforms that detect AI-generated content across video, images and text already help millions of people verify what they see, saving money and, in cases like these, even marriages.
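To make that concrete: one lightweight verification signal is provenance metadata such as C2PA "Content Credentials," which some generators and cameras now embed. The sketch below, a naive byte-scan heuristic and not any platform's actual detector, checks whether an image file appears to carry such a manifest; the marker strings and file name are illustrative assumptions, and a missing manifest proves nothing on its own.

```python
# Minimal sketch: does an image file appear to embed a C2PA ("Content
# Credentials") provenance manifest? This naive byte scan is for
# illustration only; real verification parses the manifest and checks
# its cryptographic signatures with a proper C2PA SDK.
from pathlib import Path

# Marker bytes that commonly appear in embedded C2PA/JUMBF manifests
# (an assumed heuristic, not a specification-level check).
C2PA_MARKERS = (b"c2pa", b"contentauth")

def has_provenance_manifest(image_path: str) -> bool:
    """Return True if the file seems to carry an embedded provenance manifest."""
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    path = "profile_photo.jpg"  # hypothetical image received online
    if has_provenance_manifest(path):
        print(f"{path}: provenance data found; origin may be verifiable")
    else:
        print(f"{path}: no provenance data; authenticity is simply unknown")
```

The asymmetry is the point: provenance data can positively confirm an origin, while its absence only tells you to stay skeptical.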
Beijing's "document mills"
Another area of fraud detection that demands rapid innovation is know-your-customer (KYC) fraud. Well fine-tuned open-source AI models produce fake passports and contracts indistinguishable from real ones. These forgeries slip past banks' KYC checks, forcing financial institutions into an expensive arms race. Biometric-based fraud rose 40% in 2024 and shows no signs of slowing. HSBC now uses AI "detectives" to hunt forged documents, a Band-Aid on a bullet wound.
We need to start a discussion about how banks can create an actual filter protecting their customers from falling prey to the next KYC scam.
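As a starting point for that discussion, here is a minimal sketch of one cheap layer such a filter could include: flagging uploaded documents whose embedded metadata names a known generative or editing tool. The tool list, field choice and file name are illustrative assumptions; metadata is trivially stripped, so this is a first-pass signal to combine with liveness checks, template matching and issuer verification, not a defense by itself.

```python
# Sketch of one cheap KYC-filter layer: flag uploads whose EXIF metadata
# names a generative or editing tool. Metadata is easy to strip, so this
# is only a first-pass signal; the tool list below is an assumption.
from PIL import Image
from PIL.ExifTags import TAGS

SUSPECT_TOOLS = ("stable diffusion", "flux", "midjourney", "photoshop")

def metadata_red_flags(image_path: str) -> list[str]:
    """Return human-readable red flags found in a document image's EXIF data."""
    flags = []
    exif = Image.open(image_path).getexif()
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, str(tag_id))
        if tag_name == "Software" and any(
            tool in str(value).lower() for tool in SUSPECT_TOOLS
        ):
            flags.append(f"Software tag names a generation tool: {value}")
    if len(exif) == 0:
        flags.append("No EXIF metadata at all, common after generation or re-encoding")
    return flags

if __name__ == "__main__":
    for flag in metadata_red_flags("submitted_passport.jpg"):  # hypothetical upload
        print("FLAG:", flag)
```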
Synthetic news and political manipulation
Finally, even institutions that once signified public trust are no longer safe. Last year, AI was used to impersonate the BBC, publishing fabricated stories about politicians. The fallout wasn't just embarrassment; it was a warning. In 2025, as billions vote globally, such scams need to be taken more seriously. The next attack won't come from hackers, but from code written in Shenzhen or Shanghai.
This is already a reality in China. A disinformation campaign aimed at DPP presidential candidate Lai Ching-te, for example, flooded platforms with deepfake content: manipulated videos using synthesized versions of Lai's voice misleadingly showed him endorsing a coalition between the KMT and TPP and inaccurately depicted him discussing DPP scandals. While cross-platform sharing is a staple of influence operations, Beijing's ability to shape the information landscape, particularly on Chinese social media, grows with each byte of data it gathers. That data lets it craft disinformation with precision, targeting individuals based on their beliefs.
Fighting back
In the context of sustained US-China tensions, information warfare plays an increasingly vital role in shaping perceptions and influencing outcomes. That means internet users must scrutinize what they see online in much greater detail, building a new habit of scanning content for synthetic fingerprints and merging human skepticism with machine precision. At the other end of the spectrum, organizations need to step up how they protect users and customers against AI fraud: social platforms should authenticate users, and banks should investigate API integrations that add security layers to prevent KYC failures.
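To illustrate what such an integration might look like in practice, the sketch below shows a bank-side onboarding step that forwards an applicant's selfie to an external deepfake-detection service and routes borderline scores to human review. The endpoint, API key, response field and thresholds are all hypothetical placeholders, not any real vendor's API.

```python
# Hedged sketch of an extra KYC layer: score a submitted selfie with an
# external deepfake-detection service before approving onboarding.
# DETECTION_ENDPOINT, the auth scheme and the "authentic_score" response
# field are hypothetical placeholders, not a real vendor API.
import requests

DETECTION_ENDPOINT = "https://api.example-detector.com/v1/score"  # hypothetical
API_KEY = "YOUR_API_KEY"  # placeholder credential
APPROVE_THRESHOLD = 0.80  # illustrative cutoffs
REVIEW_THRESHOLD = 0.50

def kyc_selfie_decision(selfie_path: str) -> str:
    """Return 'approve', 'review' or 'reject' for a submitted selfie."""
    with open(selfie_path, "rb") as f:
        resp = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    score = resp.json()["authentic_score"]  # hypothetical response field
    if score >= APPROVE_THRESHOLD:
        return "approve"
    if score >= REVIEW_THRESHOLD:
        return "review"  # borderline: hand off to a human analyst
    return "reject"

if __name__ == "__main__":
    print(kyc_selfie_decision("applicant_selfie.jpg"))  # hypothetical input
```

The design choice worth copying is the middle band: automated rejection of obvious fakes, automated approval of clear passes, and human eyes on everything in between.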
While Western regulators have pondered changes at a snail's pace, making little progress over the past decade, Chinese AI firms export tools that turbocharge disinformation until it is nearly impossible to discern what's real. The time to act is now.
The infodemic feeds on indifference. Without action, we'll inhabit a world where authenticity is extinct, a place where every video, voice or contract could be a lie. The antidote? Licensing models that credit creators, detection tech that exposes fakes and ethical guardrails for AI platforms. By 2026, we'll learn whether we can tame this crisis or be consumed by it.