Rise of Deepfake Videos in Asia Raises Alarms Over Election Misinformation: Report


Deepfake videos of political leaders are proliferating across Asia, sparking concerns about potential election interference and raising questions about the region’s readiness to combat this form of misinformation.

According to a report by Sumsub in November, the global incidence of deepfakes increased tenfold from 2022 to 2023. In the Asia-Pacific (APAC) region, deepfake occurrences skyrocketed by 1,530% during the same timeframe. The report cited several instances of deepfake videos being used to influence elections. In Indonesia, a deepfake video of the late President Suharto endorsing a political party went viral ahead of the Feb. 14 elections. Similar incidents were reported in Pakistan and the U.S., raising concerns about the potential impact of deepfakes on the democratic process.

Simon Chesterman, Senior Director of AI Governance at AI Singapore, warned that Asia is ill-prepared to address the threat of deepfakes in elections, citing a lack of regulation, technology, and education in the region. “Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there’s time to push it back in,” Chesterman said.

CrowdStrike, a cybersecurity company, highlighted in its 2024 Global Threat Report that with numerous elections scheduled this year, there is a high probability of nation-state actors, including those from China, Russia, and Iran, engaging in misinformation or disinformation campaigns to instigate disruption.

In February, 20 prominent technology firms, including Microsoft, Meta, Google, Amazon, and IBM, alongside artificial intelligence startup OpenAI and social media platforms such as Snap, TikTok, and X, pledged a collective effort to address the deceptive use of AI during this year’s elections.

The rise of deepfake technology has been a cause for concern across various sectors. In a recent incident, fraudsters used deepfake technology to steal $25 million in a sophisticated corporate scam. The criminals impersonated the company’s CFO and other staff members during a video call, highlighting the potential for deepfakes to be used for financial fraud.

Meanwhile, social media platforms have been grappling with the spread of deepfake content. In response to the circulation of explicit AI-generated images of Taylor Swift, Elon Musk’s social media platform, X, temporarily halted searches for the pop icon. The incident underscored the challenges faced by tech companies in addressing the spread of deepfake content.

Regulation of deepfake content has also been a contentious issue for social media platforms. In a recent case, Meta’s Oversight Board urged the company to revisit its policy on manipulated media, describing the rules as “incoherent and confusing to users.” The board recommended extending the policy to cover audio and video content, regardless of AI usage, to improve transparency around deepfake content.

As deepfake technology continues to evolve, governments, technology companies, and the public must remain vigilant and proactive in combating the spread of misinformation through deepfake videos. The wave of elections scheduled for 2024 presents a critical test of the region’s ability to address this growing threat and safeguard the integrity of democratic processes.
