
You Cannot Detect Your Way Out of the Deepfake Problem


Views expressed are personal and do not represent any employer, partner, or client.

Every board conversation about deepfakes eventually lands on the same question. Can we detect them?

It is the wrong question. Detection is a capability. Defense is a strategy. Financial institutions that treat the two as the same thing are setting themselves up for a problem no model, however good, will solve.

I have spent most of the last decade building fraud and identity products in financial services. More recently, almost every customer conversation I have had has eventually turned to synthetic media. The tone has shifted from curiosity to urgency. Deepfake-enabled fraud is no longer hypothetical. Finance clerks have been socially engineered out of tens of millions of dollars in a single video call. Synthetic identities built with generative tools are clearing onboarding checks that were considered state of the art three years ago. The cost of producing convincing fake media has collapsed. The volume has exploded.

And yet the industry response has largely been to buy a detector.

That is the gap I want to close in this post. Not by arguing that detection does not matter, because it does. But by making the case that detection, on its own, is a losing strategy. A durable deepfake defense is a product decision, an operational decision, and an organizational decision before it is ever a model decision.

The detection arms race is structurally unwinnable

The first thing to understand about deepfake detection is that the economics do not favor the defender.

Generators improve continuously. New model architectures ship every few months, and the open-source ecosystem spreads each advance within weeks of release. A detector trained on the artifacts of last quarter's generation technology will underperform against next quarter's output. Detection is always catching up.

This is not a criticism of the teams building detection technology. I am one of them. It is a structural reality of the problem. Any defense that depends primarily on distinguishing real from synthetic at the pixel level is running a race where the finish line keeps moving.

I am not arguing that detection is useless. Detection is necessary. It is the floor. It catches the low-effort attacks and raises the cost of the high-effort ones. But treating detection as the ceiling is how institutions end up with a dashboard that looks green while losses climb.

Deepfake defense is not a single decision

The institutions that are thinking about this well have stopped framing deepfake defense as a product you buy. They are framing it as a layered posture that spans three surfaces.

The identity surface

Before a deepfake can be used against you, someone has to be able to present themselves as a customer, an employee, or an executive. This is where identity proofing, document verification, and liveness detection matter. But the framing has to shift. Identity verification is no longer a point-in-time check at account opening. It is a continuous signal. The question is not just whether this person was verified six months ago. It is whether the person on the other end of this transaction is the same person who was verified, behaving in ways consistent with their history.

Synthetic identity fraud and deepfake-enabled account takeover both exploit the assumption that identity is verified once and then trusted. Treating identity as continuous is the most important architectural shift a financial institution can make right now.
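To make the shift concrete, here is a minimal sketch of what a continuous identity check might look like. Every name, signal, weight, and threshold below is an illustrative assumption, not a real vendor API or a recommended calibration; the point is the shape of the decision, which returns a risk tier for each session rather than a one-time pass/fail.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # All fields are hypothetical inputs for illustration.
    days_since_verification: int
    device_seen_before: bool
    behavior_consistency: float  # 0.0 (anomalous) to 1.0 (matches history)

def identity_risk(signals: SessionSignals) -> str:
    """Score the current session, not the account-opening event."""
    score = 0.0
    # Verification decays: an old check is weaker evidence than a recent one.
    if signals.days_since_verification > 180:
        score += 0.4
    # A new device is not fraud by itself, but it raises the bar.
    if not signals.device_seen_before:
        score += 0.3
    # Behavioral drift is weighted heavily: it is the signal a
    # convincing fake has the hardest time reproducing.
    score += (1.0 - signals.behavior_consistency) * 0.5
    if score >= 0.7:
        return "step_up"  # re-verify before proceeding
    if score >= 0.4:
        return "monitor"
    return "allow"
```

The design choice that matters is not the weights, which any real system would learn rather than hardcode. It is that verification age and behavioral consistency are inputs to every session, so "verified six months ago" is evidence, never a conclusion.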

The transaction surface

The Hong Kong finance clerk case (reported by CNN) is instructive not because of the deepfake, but because of the control environment. A single employee on a video call was able to authorize tens of millions of dollars in transfers. The deepfake was the attack. The control failure was the authorization architecture.

No serious defense against deepfake-enabled social engineering can live entirely at the point of detection. It has to include dual-approval controls on material transactions, out-of-band verification for high-value actions, and pre-shared authentication patterns that do not depend on any single communication channel. Layered verification is now mandatory. The institutions that will absorb the next generation of deepfake attacks are the ones whose controls do not collapse when the video on the call looks perfect.
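The key property of that authorization architecture can be shown in a few lines. This is a toy policy check, and the thresholds and rule names are assumptions for the sketch, not recommended values. What matters is what is absent: the channel the request arrived on, including live video, never appears in the decision.

```python
# Illustrative thresholds; real values are a risk-appetite decision.
DUAL_APPROVAL_THRESHOLD = 50_000
OUT_OF_BAND_THRESHOLD = 250_000

def transfer_allowed(amount: float,
                     approver_ids: set[str],
                     out_of_band_confirmed: bool) -> bool:
    """Decide a material transfer using only independent controls.

    Note what is NOT a parameter: the request channel. A perfect
    deepfake on a video call changes nothing about this decision.
    """
    if amount >= DUAL_APPROVAL_THRESHOLD and len(approver_ids) < 2:
        return False  # two distinct humans must approve
    if amount >= OUT_OF_BAND_THRESHOLD and not out_of_band_confirmed:
        return False  # callback on a pre-registered channel required
    return True
```

In the Hong Kong pattern, a request arriving over a convincing video call would still fail both checks: no second approver, no out-of-band confirmation. The control holds even when the media is perfect.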

The behavioral surface

Behavior is the hardest thing for a generator to fake convincingly over time. Device signals, session patterns, typing cadence, transaction rhythms, and relationship graphs are all inputs that are difficult to spoof at scale. A real customer has a history. A synthetic identity does not, at least not one that withstands scrutiny. A deepfake of an executive can look and sound right on a ten-minute call. It cannot replicate the last eighteen months of that executive's activity across systems.

The institutions getting this right are investing in behavioral intelligence not as a fraud tool, but as the connective tissue that holds the identity and transaction surfaces together. When the face is perfect and the voice is perfect, behavior is what is left.
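A toy example makes the asymmetry visible. The sketch below measures how far a single behavioral feature sits from an account's own history, in standard deviations. A production system would combine many such signals (device, session, graph features); the feature here and the data are illustrative assumptions.

```python
from statistics import mean, stdev

def deviation(history: list[float], current: float) -> float:
    """How many standard deviations is `current` from this
    account's own baseline? No global rules, only its history."""
    if len(history) < 2:
        return 0.0  # not enough history to judge; handle separately
    spread = stdev(history)
    if spread == 0:
        # Perfectly uniform history: any change is maximally anomalous.
        return 0.0 if current == mean(history) else float("inf")
    return abs(current - mean(history)) / spread
```

A customer with eighteen months of routine transfers produces a tight baseline, and a sudden outsized request scores hundreds of deviations out. A synthetic identity produces no baseline at all, which is itself the signal.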

The organizational failure pattern

Every financial institution I have spent time with has people who are smart about deepfake risk. The problem is usually not awareness. The problem is ownership.

Deepfake defense crosses at least four organizational boundaries. Identity sits in one team. Transaction monitoring sits in another. Employee-facing social engineering defense sits with information security. Customer-facing social engineering defense sits with fraud or contact center operations. Each of these teams buys tools, writes policies, and reports metrics independently. None of them owns the end-to-end posture.

The result is a gap that attackers exploit. The attacker does not care which team owns which surface. They care which surface is weakest.

The most important organizational move a financial institution can make is to name a single accountable owner for synthetic media risk. Not a committee. An owner.

Someone whose job is to understand the posture across identity, transaction, and behavioral surfaces and to make investment tradeoffs across them. Without that role, every investment optimizes locally and the overall posture drifts.

What to actually do in 2026

If I were advising a financial institution on where to spend deepfake defense dollars this year, I would frame it as five decisions. These are not philosophical. They are concrete investment decisions a leadership team can make this quarter.

1. Treat identity as continuous, not point-in-time. The onboarding check is not the control. The ongoing relationship is the control.

2. Invest in layered authorization for material transactions. Any single communication channel, including live video, should be assumed to be spoofable.

3. Build or buy behavioral intelligence that spans the customer and employee journey. This is the signal that holds up when the media is perfect.

4. Buy detection, but treat it as a floor. Do not build your strategy on it. Build your strategy on the architecture around it.

5. Name a single owner for synthetic media risk who can make investment tradeoffs across identity, transaction, and behavioral surfaces.

The question behind the question

When a board asks whether you can detect deepfakes, what they are really asking is whether you are ready for an attack that did not exist two years ago and is now commonplace.

The honest answer is that detection alone will not get you there. A posture built around identity continuity, transactional layering, behavioral intelligence, and a clear owner will.


Shyam Menon is a product leader specializing in fraud and identity in financial services. This is one of a series of framework posts on how to think about fraud prevention, identity, and AI products in regulated industries. He writes at shyammenon.com.