Complete archive of AI fraud intelligence from the SAF National AI Threat Center
Corporate executives are falling victim to deepfake video calls that convincingly impersonate CEOs and senior management, resulting in fraudulent wire transfers.
Criminals are using AI voice cloning technology to impersonate family members in distress, demanding immediate money transfers for fake emergencies.
Advanced AI techniques are being used to spoof fingerprint and facial recognition systems, compromising mobile banking and authentication security.
Cybercriminals are using AI to generate highly personalized phishing emails targeting elderly individuals, exploiting their specific concerns and communication patterns.
Law enforcement has uncovered sophisticated crime rings using AI to generate synthetic identities for large-scale fraud, combining real and fake information to bypass verification systems.
Sophisticated AI-driven interactive voice response systems are impersonating major banks, tricking customers into revealing account credentials and one-time passwords.
Criminal networks are using AI-generated photographs of non-existent people to create convincing fake profiles on dating apps and social media, facilitating romance scams.
Scammers are placing fake QR codes on parking meters, gas pumps, and other public payment systems to steal financial information and payment data.
Deep dives into AI scam tactics, statistics, and emerging threats
Share your experience to help protect others from AI-powered fraud
Explore tools to detect deepfakes, voice clones, and AI-generated content