Deepfake incidents are on the rise in 2024, projected to grow 60% or more this year and push global cases above 150,000. This surge makes AI-powered deepfake attacks the fastest-growing form of adversarial AI today. Deloitte predicts these attacks will cause over $40 billion in damages by 2027, with banking and financial services the primary targets.
AI-generated voice and video fabrications are blurring the line between authentic and fabricated content, eroding trust in institutions and governments. Deepfake tradecraft has become so prevalent among nation-state cyberwarfare organizations that it is now a mature attack tactic in the ongoing conflicts between cyberwar nations.
Srinivas Mukkamala, Chief Product Officer at Ivanti, highlights the evolution of AI, such as Generative AI and deepfakes, from mere misinformation to sophisticated tools of deception. AI advancements have made it increasingly difficult to differentiate between genuine and fabricated information.
According to a Gartner report, 62% of CEOs and senior business executives expect deepfakes to create operating costs and complications for their organizations over the next three years, and 5% consider them an existential threat. Gartner also predicts that by 2026, 30% of enterprises will no longer view face biometrics as a reliable identity verification and authentication solution because of AI-generated deepfake attacks.
Recent research conducted by Ivanti reveals that over half of office workers are unaware that advanced AI can impersonate anyone’s voice, raising concerns as these individuals participate in upcoming elections.
The U.S. Intelligence Community’s 2024 threat assessment indicates that Russia is leveraging AI to create deepfakes and deceive experts. The Department of Homeland Security has issued a guide on the increasing threats of deepfake identities.
OpenAI’s latest model, GPT-4o, is designed to detect and prevent these growing threats. As an autoregressive omni model, GPT-4o accepts input in the form of text, audio, image, and video. The model uses only pre-selected voices and employs an output classifier to detect deviations from the approved voices.
Key features of GPT-4o that enhance its ability to identify deepfakes include Generative Adversarial Network (GAN) detection, voice authentication, output classifiers, and multimodal cross-validation. The model can identify imperceptible discrepancies in the content generation process, including artifacts that GANs introduce but cannot fully conceal.
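To make the output-classifier idea concrete, here is a minimal, hypothetical sketch of voice authentication by embedding similarity: an output audio embedding is compared against embeddings of the approved voices and flagged if it matches none of them. This is an illustration of the general technique, not OpenAI's actual implementation; the embeddings here are random stand-ins, since real systems would derive them from a speaker-verification model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_approved_voice(output_emb: np.ndarray,
                      approved_embs: list[np.ndarray],
                      threshold: float = 0.85) -> bool:
    """Return True if the output embedding closely matches any approved voice.

    A production classifier would use learned speaker embeddings; the
    threshold here is an illustrative assumption, not a published value.
    """
    return any(cosine_similarity(output_emb, emb) >= threshold
               for emb in approved_embs)

rng = np.random.default_rng(0)
# Stand-in embeddings for three approved voices.
approved = [rng.standard_normal(256) for _ in range(3)]

# Output reusing an approved voice (small perturbation) should pass.
match = approved[0] + 0.05 * rng.standard_normal(256)
print(is_approved_voice(match, approved))

# An unrelated voice embedding should be flagged as a deviation.
imposter = rng.standard_normal(256)
print(is_approved_voice(imposter, approved))
```

The same gating pattern extends naturally to the multimodal cross-validation mentioned above: embeddings from audio, video, and text channels can each be checked for mutual consistency before output is released.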
Deepfake attacks on CEOs are becoming more prevalent, with sophisticated attackers targeting prominent figures across industries. Recent incidents have shown how deepfakes can deceive employees into authorizing significant financial transfers.
Trust and security are paramount in the AI era, and OpenAI’s design goals prioritizing deepfake detection point to where AI models are headed. Christophe Van de Weyer, CEO of Telesign, emphasizes the importance of prioritizing trust and security to combat digital fraud and safeguard personal and institutional data.
As businesses and governments increasingly rely on AI, models like GPT-4o become essential for securing systems and protecting digital interactions. Maintaining skepticism and critically evaluating information for authenticity remain crucial defenses against deepfakes.