Deepfakes Drive New Wave of Insurance Fraud

News desk: Generative artificial intelligence is reshaping the landscape of insurance fraud, with deepfake technology enabling criminals to produce convincing fake evidence and impersonate individuals at scale. Once largely associated with high-profile celebrity videos, deepfakes now pose a growing threat to property-casualty, life and health insurers.

Insurers are already feeling the impact. UK insurer Admiral reported detecting £86.8 million in fraudulent claims in 2025, a 71% increase from £50.9 million in 2024. The company attributed part of this rise to manipulated images, altered documents and exaggerated claims involving AI-generated material.

Fraudsters are leveraging easily accessible AI tools to generate photos of nonexistent or exaggerated property damage, fake vehicle damage, altered number plates, forged repair documents and fabricated high-value assets. In the UK, Allianz reported a 300% increase in incidents involving manipulated images, videos and documents between 2021-22 and 2022-23, highlighting the rapid growth of digital evidence fraud.

The threat is not confined to images. Voice cloning and deepfake video calls are also being used in social engineering attacks. Criminals impersonate policyholders, beneficiaries, executives or claims officials to request payments, change policy details or access sensitive information.

One widely reported case involved UK engineering group Arup, which lost about HK$200 million (approximately $25 million) after fraudsters used digitally cloned voices and images in a video conference scam in Hong Kong. Although this case occurred outside the insurance sector, it demonstrates how deepfake technology can undermine trust-based verification processes.

Fraudsters are increasingly combining deepfakes with synthetic identities, forged documents and stolen personal data, making detection more challenging. A claim may appear to have supporting photos, identity documents, voice confirmation and written evidence, even though much of it has been artificially created or altered.

Insurers are responding by enhancing digital fraud controls. Many companies are investing in AI-based detection tools, digital media forensics, biometric checks, metadata analysis and real-time fraud scoring at the first notice of loss. These systems can analyse lighting, facial movements, image inconsistencies, file history and other signs of AI manipulation.
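To make the idea of real-time fraud scoring at first notice of loss concrete, the sketch below shows a toy rule engine that combines red flags of the kind described above into a single score. All signal names and weights here are hypothetical illustrations, not any insurer's actual model.

```python
# Toy rule-based fraud score for first notice of loss.
# Signal names and weights are hypothetical, for illustration only.

WEIGHTS = {
    "metadata_missing": 25,        # image stripped of EXIF / file history
    "editing_software_trace": 30,  # file history shows photo-editing tools
    "face_inconsistency": 30,      # lighting or facial-movement anomalies
    "duplicate_image_hash": 15,    # same image seen in earlier claims
}

def fraud_score(signals: dict) -> int:
    """Sum the weights of the red flags that fired, capped at 100."""
    score = sum(w for flag, w in WEIGHTS.items() if signals.get(flag))
    return min(score, 100)

# A claim photo with stripped metadata and an editing-software trace
print(fraud_score({"metadata_missing": True,
                   "editing_software_trace": True}))  # 55
```

A score above a chosen threshold would route the claim to a human investigator rather than straight-through processing; production systems combine far more signals, typically with machine-learned rather than hand-set weights.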

Collaboration across the industry is also growing. Insurance crime bodies, data analytics firms and carriers are sharing research on AI-enabled fraud. A recent Verisk study, highlighted by the National Insurance Crime Bureau, found that 36% of consumers would consider digitally altering a claim image or document, even if it violated insurer rules.

The rise of deepfake fraud has broader implications. Fraud losses can raise costs for insurers, potentially leading to higher premiums for honest policyholders. At the same time, legitimate claims may face closer scrutiny and longer review periods as insurers separate real evidence from manipulated material.

Regulators are also paying closer attention. In the United States, the proposed Preventing Deep Fake Scams Act would create a task force to study AI-related fraud risks in financial services and recommend consumer protection measures. In Europe, the EU AI Act includes transparency obligations for certain AI-generated or manipulated deepfake content, with related rules expected to become more important from 2026.

Industry experts anticipate that the threat will grow as generative AI tools become more realistic, affordable and accessible. Insurers that invest in detection technology, employee training, stronger identity checks and updated claims procedures will be better positioned to manage the risks.

For consumers, the advice is clear: verify suspicious calls, emails or payment requests through official channels and avoid sharing sensitive information unless the source is confirmed.

As fraud evolves from paper-based schemes to technology-driven operations, the insurance industry’s ability to adapt will be crucial in mitigating the impact of deepfakes on insurers, customers and the broader market.