Technology

Deepfakes and Disinformation: The AI Threat to Truth

Alanbatnews

The rise of sophisticated deepfake technology, fueled by artificial intelligence, poses a significant threat to truth and stability, particularly during crises and conflicts.

Deepfakes, AI-generated content that can convincingly mimic reality, are increasingly being weaponized to spread disinformation, manipulate public opinion, and even undermine military morale.

The potential for misuse is vast, ranging from creating fabricated news events to influencing elections and exacerbating social divisions.

One recent example involves AI-generated images falsely purporting to show the death of Iranian Supreme Leader Ali Khamenei, which were circulated on social media alongside a cyberattack on Iranian networks.

This incident, uncovered by Al Jazeera Net's Open Source team, highlights how deepfakes can be used in coordinated campaigns to destabilize and demoralize.

The Russia-Ukraine conflict has also seen widespread use of deepfakes, including fabricated videos of Ukrainian soldiers surrendering and a fabricated announcement by Russian President Vladimir Putin declaring a full military mobilization, according to NBC News.

The International Federation of Journalists (IFJ) reported that during the conflict between India and Pakistan, Indian media outlets broadcast deepfake videos showing Indian missiles intercepting and destroying Pakistani drones, filmed from angles impossible to capture in reality.

Furthermore, The Strategist revealed instances of deepfake videos featuring exiled Hong Kong activists expressing concerns about agreements between Britain and China that could lead to their extradition to Chinese authorities.

These examples illustrate the diverse range of purposes served by deepfakes, from political and military manipulation to supporting hidden agendas.

In response to this growing threat, a new field of cybersecurity called "misinformation security" is emerging, according to Splunk, a Cisco company. This specialization focuses on detecting, preventing, and responding to deepfake-generated misinformation to protect businesses, operations, and economies.

The World Economic Forum has identified disinformation as a major risk to economies, stock markets, and overall stability, underscoring the global concern surrounding deepfakes.

This emerging field relies on digital forensic tools to verify content authenticity and intersects with narrative intelligence, which involves continuous monitoring of online platforms and social media.

The danger of deepfakes lies in their ability to deceive the public, who tend to believe what they see and hear, especially given limited public awareness of how far AI has advanced and of how to counter it.

Governments are increasingly aware of these risks, prompting NATO to partner with Reality Defender, a company specializing in deepfake detection and mitigation.

This collaboration aims to train experts in identifying and handling fake content during times of war and general mobilization, indicating the significant impact deepfakes can have on major global powers.

Alongside Reality Defender and the rise of misinformation security, numerous companies and experts are focusing on developing tools and methods to detect fake content.

Many of these efforts involve searching for telltale signs of AI manipulation and examining metadata embedded within images and videos.
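To make the metadata idea concrete, here is a minimal sketch in Python of one such check: scanning a JPEG's segment markers for an embedded Exif block, which genuine camera files typically carry. Its absence is at best a weak signal (metadata is easily stripped or forged), and real verification tools go far beyond this; the byte streams below are toy examples constructed for illustration, not real image files.

```python
def find_jpeg_segments(data: bytes):
    """Yield (marker, payload) pairs for each segment in a JPEG byte stream."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                    # malformed stream or start of scan data
        marker = data[i + 1]
        if marker == 0xD9:           # EOI marker: end of image
            break
        # Segment length field covers itself (2 bytes) plus the payload.
        length = int.from_bytes(data[i + 2:i + 4], "big")
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def has_camera_exif(data: bytes) -> bool:
    """True if the JPEG has an APP1/Exif segment, typical of real camera output."""
    return any(marker == 0xE1 and payload.startswith(b"Exif\x00\x00")
               for marker, payload in find_jpeg_segments(data))

# Two toy byte streams: one with an Exif APP1 segment, one without.
exif_payload = b"Exif\x00\x00" + b"\x00" * 8
app1 = b"\xff\xe1" + (len(exif_payload) + 2).to_bytes(2, "big") + exif_payload
with_exif = b"\xff\xd8" + app1 + b"\xff\xd9"
without_exif = b"\xff\xd8\xff\xd9"

print(has_camera_exif(with_exif))     # True
print(has_camera_exif(without_exif))  # False
```

In practice, investigators combine such structural checks with provenance standards like C2PA and with visual-forensics models, since metadata alone proves little either way.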

News organizations are also intensifying their efforts to identify AI-generated deepfakes, using specialized teams and tools to verify content, similar to Al Jazeera Net's Open Source team.