AI systems have made significant advancements in spotting deepfake images, surpassing human capabilities in many cases. However, when it comes to detecting deepfake videos, humans still hold an advantage over machines. A recent study conducted by psychologist Natalie Ebner and her team highlights the need for collaboration between humans and AI to effectively combat the rising threat of digital forgeries.
Deepfakes, which are AI-generated images, audio, and videos that manipulate reality, have been used for malicious purposes such as financial fraud, election interference, and reputation damage. As these digital forgeries become increasingly sophisticated, both humans and AI models struggle to distinguish between real and fake content.
In a series of experiments involving more than 2,200 participants and two machine-learning algorithms, researchers assessed how well each could detect deepfake images and videos. The machines outperformed humans at identifying fake images, with one algorithm achieving 97% accuracy, but humans excelled at detecting deepfake videos, surpassing the algorithms’ performance.
The study asked participants to rate faces and videos on a scale of perceived authenticity. Surprisingly, humans discerned deepfake videos with 63% accuracy, outperforming the algorithms, which struggled to identify the manipulated content.
Ebner and her team are now delving deeper into understanding the decision-making processes of both humans and AI systems. By investigating the factors that contribute to the success of machine algorithms in certain scenarios, the researchers aim to uncover insights that can enhance collaborative efforts between humans and AI in combating deepfakes.
The findings underscore the importance of leveraging the strengths of both humans and machines in addressing the challenges posed by deepfakes. As digital forgeries proliferate, an approach that combines human intuition with AI’s analytical capabilities will be crucial in safeguarding against the harmful effects of manipulated content.
Overall, the study emphasizes the need for ongoing research and collaboration to develop effective strategies for identifying and countering deepfakes. By gaining a deeper understanding of how humans and machines perceive and analyze digital content, we can better prepare for a future where deepfakes pose a significant threat to societal trust and security.

