Criminals are increasingly using AI to deceive both people and systems. KYC fraud is a clear example: with a stolen identity document and a convincing deepfake, cybercriminals can assume virtually any identity they choose. John Erik Setsaas has spent more than two decades specialising in digital identity and warns that security risks are escalating rapidly: “It’s already easy to create highly realistic deepfakes, and most organisations are unprepared for what’s coming.”
In early 2026, a cybercriminal managed to open 46 accounts at ABN AMRO bank by combining stolen identity documents with AI-generated deepfake selfies that fooled the bank’s digital KYC checks. He harvested IDs from social media and other publicly available sources and used AI tools to generate convincing selfie videos matching each document. For weeks, the bank’s automated onboarding system accepted the applications. The fraud only came to light when the man accidentally paired a female identity document with a male deepfake, triggering a manual review by compliance staff.
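The slip that exposed the fraud suggests the kind of cross-field consistency check that automated onboarding can run. The sketch below is purely illustrative: the class names, fields and thresholds are assumptions for this article, not ABN AMRO’s actual system.

```python
# Hypothetical sketch of a cross-field consistency check of the kind that
# finally flagged the fraud. All names, fields and thresholds here are
# illustrative assumptions, not the bank's actual system.
from dataclasses import dataclass

@dataclass
class IdDocument:
    sex: str              # "M" or "F" as printed on the document
    date_of_birth: str

@dataclass
class SelfieAnalysis:
    estimated_sex: str    # output of a face-attribute model (assumed)
    liveness_score: float # 0.0-1.0 from a liveness/deepfake detector (assumed)

def onboarding_flags(doc: IdDocument, selfie: SelfieAnalysis) -> list[str]:
    """Return reasons to escalate to manual review; an empty list means pass."""
    flags = []
    if selfie.liveness_score < 0.9:                # threshold is an assumption
        flags.append("low liveness score: possible deepfake or replay")
    if doc.sex != selfie.estimated_sex:
        flags.append("document sex does not match selfie")
    return flags

# The slip that exposed the fraudster: a female document with a male deepfake.
flags = onboarding_flags(
    IdDocument(sex="F", date_of_birth="1990-01-01"),
    SelfieAnalysis(estimated_sex="M", liveness_score=0.95),
)
```

Note that a deepfake good enough to pass the liveness check would still be caught here, but only because the fraudster made a clerical error; a careful attacker would sail through.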
The case became one of Europe’s most cited examples of how generative AI can exploit weaknesses in remote identity verification systems, explains Setsaas. He runs Setsaas Trust Advisory and is one of Europe’s most knowledgeable experts on digital identity. “Anyone with bad intentions can execute those intentions now, using publicly available AI tools. AI can even help you with the process and teach you how to commit fraud.”
Digital arms race
KYC processes across Europe are governed by a complex framework of EU laws and directives. However, these regulations rarely prescribe exactly how organisations should operate; instead, they require businesses to implement KYC measures that are considered sufficiently robust. The challenge, according to Setsaas, is defining what “robust enough” actually means.
“There’s no legislation that explicitly states organisations must detect deepfakes. But when a single individual can create 46 separate accounts, it’s clear the KYC process has failed. Stronger deepfake detection should already have been in place.”
Setsaas describes the fight against KYC fraud as a growing digital arms race. “Criminals are using AI to outsmart systems, while defenders are using AI to strengthen them,” he explains. He compares the situation to the evolution of lockpicking: “We continue improving our locks, but thieves continue refining their tools and techniques. The hope is that, eventually, the effort becomes too difficult or too costly, and they move on elsewhere.”
eIDAS: Privacy-by-design
However, Setsaas believes that, in the world of digital identity, criminals are unlikely to simply “move on elsewhere”. “With the rise of digital identity wallets, the data they contain will become far too valuable for criminals to ignore,” he warns. “In the near future, a digital ID wallet could effectively become the key to your entire life: from opening a bank account and renting a property to accessing healthcare or completing a university degree.”
The risk does not begin with how wallets are used. It begins with how they are created. Wallet onboarding will largely happen digitally, and that process is only as secure as the deepfake detection behind it. If someone has access to your photos and videos, they can generate a convincing deepfake and build a wallet in your name before you even know it exists.
To reduce security risks, developers of digital identity wallets are increasingly adopting privacy-by-design principles. In practice, this means organisations such as banks may only gain limited visibility into activity within a user’s wallet. If behavioural or transactional data is not stored, it cannot be stolen, exploited or misused by criminals.
Yet, this approach also creates new tensions. “Banks still need insight into transactions and behavioural patterns,” Setsaas explains. “Not because they’re interested in people’s private lives, but because regulations require them to detect potential money laundering and terrorist financing.”
Privacy-by-design principles in digital ID wallets could significantly limit banks’ ability to carry out those detection and monitoring processes. “I’m all for protecting privacy. But if that means we end up protecting criminals, then we’ve created a different kind of problem.”
Harmonisation across Europe
Challenges like these highlight the urgent need for greater harmonisation across EU regulations, Setsaas says. “We cannot afford a situation where one European law undermines or contradicts another.”
But creating secure and trustworthy digital ID wallets requires more than legislation alone. “European laws and directives are legal frameworks, not technical blueprints,” Setsaas explains. “They define the objectives, but they rarely specify the technical safeguards required to achieve them. That responsibility falls to the market, technology providers and manufacturers.”
What is already clear, however, is that organisations must strengthen their defences against deepfake-driven fraud before the threat escalates further. According to Setsaas, many businesses still underestimate the scale of the challenge. “Companies often claim they have protections in place,” he says. “But when you zoom in, those protections are either insufficient or incomplete. Not because organisations are unwilling to act, but because many still don’t fully understand the threats they are already facing.”
Looking forward: closing the awareness gap
For Setsaas, the starting point is greater awareness. “It’s already easy to create highly realistic deepfakes, and most organisations are unprepared for what’s coming next.”
Identity verification at the point of onboarding is strong: ID documents and facial biometrics confirm who is creating the wallet. But once the wallet exists, that verification disappears. There is no reliable way to confirm whether the wallet is being used by its rightful owner, or by someone who has stolen it, compromised it, or simply been handed it. “User and owner must be one and the same. Without that trust, the entire system breaks down.”
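One way to restore that binding, sketched here under assumed names and thresholds (this is not part of any eIDAS specification), is to require two things at every presentation: proof of possession of the enrolled device, via a signature over a fresh verifier challenge, and proof of holder presence, via a live biometric match against the enrolled template.

```python
import hashlib
import hmac
import os

# Hypothetical holder-binding check. The names, the shared-secret HMAC
# construction, and the biometric threshold are all illustrative assumptions.
def sign_challenge(device_secret: bytes, challenge: bytes) -> bytes:
    """Wallet side: prove possession of the enrolled device key."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify_presentation(shared_secret: bytes, challenge: bytes,
                        signature: bytes, biometric_score: float,
                        threshold: float = 0.95) -> bool:
    """Verifier side: accept only if device AND holder both check out."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    possession_ok = hmac.compare_digest(expected, signature)
    presence_ok = biometric_score >= threshold   # threshold is an assumption
    return possession_ok and presence_ok

secret = os.urandom(32)       # bound to the device at onboarding
challenge = os.urandom(32)    # fresh per presentation, prevents replay
ok = verify_presentation(secret, challenge,
                         sign_challenge(secret, challenge),
                         biometric_score=0.98)
stolen = verify_presentation(secret, challenge,
                             sign_challenge(secret, challenge),
                             biometric_score=0.40)  # wrong face, stolen device
```

In this sketch a stolen device alone is not enough: possession checks out, but the live biometric fails, so the presentation is rejected.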
Yet urgency around implementing effective technical safeguards against KYC fraud often remains lacking. “There is still a major awareness gap, and we need to close it quickly. The biggest risk of deepfake ID fraud is not taking it seriously enough.”