Bias-free? An empirical study on ethnicity, gender, and age fairness in deepfake detection
Source
ACM Computing Surveys
ISSN
0360-0300
Date Issued
2026-01-01
Author(s)
Panda, Aditi
Ghosh, Tanusree
Choudhary, Tushar
Naskar, Ruchira
Abstract
In this study, we evaluate potential demographic bias in state-of-the-art deepfake image detection models across three key attributes: age, ethnicity, and gender. Unlike prior works that retrain detectors or analyse forensic manipulations, we systematically assess multiple pretrained checkpoints of leading deepfake detectors, each trained on different datasets, to ensure an unbiased evaluation framework. Our experiments employ synthetic images generated by recent diffusion and autoregressive models, alongside real images from balanced datasets, to measure subgroup-specific detection performance. Results reveal no systematic bias across demographic categories—variations in accuracy and precision remain within small statistical margins across all detectors and checkpoints. We further provide a taxonomy of image generative models, highlighting their evolution from pixel-space to latent-space diffusion architectures, to contextualize the diversity of synthetic data used in our evaluation. Overall, our findings suggest that modern deepfake image detectors, when tested in a cross-demographic setting using pretrained checkpoints, exhibit robust and fair performance across age, ethnicity, and gender.
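The subgroup-specific evaluation described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' code: the function name, toy labels, and group identifiers are all hypothetical, and it assumes detector predictions and demographic annotations are already available per image.

```python
# Hypothetical sketch of subgroup-specific accuracy, the kind of metric
# used when auditing a pretrained deepfake detector for demographic bias.
from collections import defaultdict

def subgroup_accuracy(labels, preds, groups):
    """Accuracy per demographic subgroup (e.g. an age, ethnicity, or gender label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, y_hat, g in zip(labels, preds, groups):
        total[g] += 1
        if y == y_hat:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: 1 = fake, 0 = real; "A"/"B" are illustrative subgroup labels.
labels = [1, 0, 1, 0, 1, 0]
preds  = [1, 0, 1, 1, 1, 0]
groups = ["A", "A", "B", "B", "A", "B"]
print(subgroup_accuracy(labels, preds, groups))
```

Comparing such per-subgroup scores (and the analogous per-subgroup precision) against the overall score is what allows a study like this one to conclude whether detection performance varies systematically across demographic categories.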
Subjects
Deepfake Detection
Deepfake Images
Deepfake Survey
Deepfake Detector Evaluation
Generative AI
Latent Diffusion Models
