Combating Deepfake Disruption in Asia

In Asia, the risks posed by deepfakes are amplified by diverse linguistic landscapes, complex political environments, and a high penetration of mobile and digital services. According to Grand View Research, the global deepfake AI market was estimated at USD 562.8 million in 2023 and is projected to grow at a compound annual growth rate (CAGR) of 41.5% from 2024 to 2030.

In the region, more than 2.8 billion people are connected to the internet. This massive digital audience offers both a ripe target and a potential shield, depending on how countries and telecoms respond.

Telecommunication operators across Asia are no longer just internet providers; they’re digital gatekeepers. With their access to user data flow and infrastructure control, telcos are uniquely positioned to play a significant role in countering deepfakes.

In Singapore, Singtel has launched an AI cloud service to democratize artificial intelligence (AI) for enterprises and the public sector. As part of this effort, it signed a memorandum of understanding (MoU) with Hive, whose enterprise-grade models specialize in detecting deepfakes, generative AI (GenAI) content, and other harmful media. Leveraging NVIDIA chips and Singtel’s AI infrastructure, Hive provides clients with access to state-of-the-art detection tools suited for sensitive data environments.

At Mobile World Congress Shanghai 2024, HONOR unveiled a real-time deepfake detection system embedded in its smartphones that can identify manipulation during video calls. Meanwhile, Aletheia, a browser plug-in and endpoint software, detects deepfakes by analyzing pixels and audio frequencies with up to 90% accuracy. Singapore’s ST Engineering developed Einstein.AI, which flags facial and audio anomalies in media content to protect public trust, especially ahead of elections.
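None of these vendors publish their detection internals, but pixel-level analysis of the kind Aletheia describes often begins with simple statistics: genuinely captured frames carry sensor noise, while generated or blended regions can be unnaturally smooth. The toy sketch below, which is purely illustrative and not any vendor’s actual method, flags frames whose high-frequency residual energy is implausibly low:

```python
# Illustrative only: a toy residual-noise check, NOT any vendor's
# actual deepfake detector. Frames are 2-D lists of pixel intensities.
import random

def residual_energy(frame):
    """Mean squared difference between each pixel and its right/down
    neighbours -- a crude proxy for high-frequency sensor noise."""
    h, w = len(frame), len(frame[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += (frame[y][x] - frame[y][x + 1]) ** 2
                count += 1
            if y + 1 < h:
                total += (frame[y][x] - frame[y + 1][x]) ** 2
                count += 1
    return total / count

def looks_synthetic(frame, threshold=1.0):
    """Flag frames smoother than a real camera sensor would produce.
    The threshold is an arbitrary value chosen for this demo."""
    return residual_energy(frame) < threshold

# A noisy "camera" frame versus a perfectly flat "generated" patch:
random.seed(0)
noisy = [[128 + random.gauss(0, 3) for _ in range(8)] for _ in range(8)]
flat = [[128.0] * 8 for _ in range(8)]
print(looks_synthetic(noisy), looks_synthetic(flat))  # prints: False True
```

Production systems combine many such signals (compression artifacts, blink rates, audio spectra) inside trained neural networks; the point here is only that manipulated media leaves measurable statistical traces.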

Recognizing the increasing prevalence of online scams, cyberbullying, and misinformation in the digital space, CelcomDigi is taking a proactive approach to ensuring content authenticity. As part of its broader initiative to promote online safety, the company hosted two exclusive Online Safety and Anti-Scam Masterclasses to empower content creators, social media influencers, and radio presenters to become advocates for digital safety, according to CelcomDigi’s Head of Sustainability, Philip Ling.

The future of deepfake defense in Asia lies in collaboration. As the World Economic Forum points out, combating deepfakes requires a “whole-of-society” approach, involving governments, private companies, academia, and civil society.

Proposed regional strategies to combat deepfake disruptions in Asia include the introduction of the Expanded ASEAN Guide on AI Governance and Ethics – Generative AI, which illustrates its policy recommendations through four detailed use cases highlighting public and private institutions in the region that are implementing responsible AI practices. These include PhoGPT and VinAI in Vietnam, which focus on ethical generative AI development; Accenture’s Responsible AI Internal Programme, applied across ASEAN; Singapore’s Project Moonshot, led by the AI Verify Foundation to build trustworthy AI frameworks; and Thailand’s ThaiLLM, a collaborative effort by BDI, NSTDA, VISTEC, and other partners to develop large language models (LLMs) under ethical guidelines.

In 2024, the International Telecommunication Union’s (ITU) ‘AI for Good Global Summit’ brought together technology and media companies, artists, international organizations, standardization bodies, and academia to discuss the security risks and challenges of deepfakes and generative artificial intelligence (AI). With ITU experts predicting that 90% of online content will be AI-generated by 2025, the focus has shifted to developing technical standards for watermarking and verifying content authenticity. These efforts aim to distinguish between human-generated, AI-generated, and hybrid content, providing a reliable framework for content validation and helping combat misinformation in an increasingly synthetic digital environment.
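The core idea behind such provenance standards is cryptographic: a publisher attaches a verifiable signature to content at creation, and any downstream edit invalidates it. The sketch below illustrates that principle with a keyed hash; the key name and messages are hypothetical, and real standards (such as C2PA) use richer manifest formats and public-key signatures rather than a shared secret:

```python
# Illustrative sketch of provenance-style content authentication:
# a publisher signs content bytes, and any later edit breaks the tag.
# This mirrors the idea behind content-authenticity standards, not
# any standard's actual wire format.
import hmac
import hashlib

SECRET = b"publisher-signing-key"  # hypothetical key held by the publisher

def sign(content: bytes) -> str:
    """Produce an authentication tag binding the publisher to the content."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the tag in constant time; fails if content was altered."""
    return hmac.compare_digest(sign(content), tag)

original = b"Official statement, 12 May 2025."
tag = sign(original)

print(verify(original, tag))                         # prints: True
print(verify(b"Fake statement, 12 May 2025.", tag))  # prints: False
```

In a deployed system the verification step would run in browsers, newsroom tools, or fact-checking pipelines, letting a reader confirm that a clip really originated from the claimed source.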

In 2025, the Philippine government launched the Asia-Pacific Deepfake Task Force and rolled out an artificial intelligence-powered detection tool to combat disinformation and potential election fraud in light of the upcoming May elections. According to Cybercrime Investigation and Coordinating Center (CICC) Undersecretary, Alex Ramos, this initiative is part of a broader strategy to empower citizens against the escalating threat posed by deepfakes.

“This tool will be distributed to accredited institutions, including election watchdogs like the Parish Pastoral Council for Responsible Voting (PPCRV), universities, and fact-checking groups,” Ramos explained. “During community gatherings, if someone reports suspicious content, it can be analyzed quickly using this tool.”

Deepfakes are both insidious and intelligent: a technological feat with the potential to harm or help, depending on its application. Hence, APAC-based telcos, governments, and stakeholders are adopting a unified approach to address the challenges posed by synthetic media in the region.

In Asia, where digital growth is outpacing regulation, the challenge is formidable; however, through forward-looking legislation, public-private partnerships, and telecom-driven innovation, the region is forging a resilient path forward.
