The Science Behind Adversarial Verification (It's Funny and It's Real)

Traditional CAPTCHAs ask: "Can you solve a puzzle a machine can't?" This was a fine idea in 2004 and a terrible one now, because machine-learning models solve those puzzles faster and more reliably than humans do. Adversarial verification asks a completely different question: "Can you do something that a specific type of threat actor physically cannot do without going to prison?" It sounds unhinged. It is also mathematically sound. Let us explain.

Jurisdiction: The Firewall Nobody Thought Of

Here's something every nation-state cyber operation has in common: the people doing the hacking live under a legal system. North Korean hackers work in government facilities with cameras everywhere. Russian cyber units operate within FSB and GRU command structures. Chinese APT groups sit inside institutions subject to party oversight. These are not freelancers working from coffee shops. They are employees. With HR departments. And surveillance.

Now here's the fun part. Defacing the portrait of a sitting head of state is a criminal offense in all of these countries. In North Korea, it carries the death penalty (not an exaggeration, we wish it were). In China, "picking quarrels and provoking trouble" statutes cover perceived insults to leadership, a charge so vague it basically means "we didn't like what you did." In Russia, laws against disrespecting state symbols were strengthened in 2019 because apparently it was becoming a problem. The penalties are real. The monitoring is extensive. The vibes are extremely not chill.

The Trap

This is the mechanism EVANDALIZE exploits. Picture a state-sponsored operative at their workstation in Pyongyang. Screen monitoring is active. Network logs are recording. A supervisor is probably watching. Now imagine this person needs to draw a funny hat on Kim Jong Un to complete a CAPTCHA. They can't. Not because they lack the motor skills. Because doing so on a monitored government computer is functionally a suicide note.

The constraint is structural, not technical. You can't bot your way around it. You can't throw GPUs at it. No amount of compute solves the problem that your government will put you in a labor camp for doodling on the Dear Leader's forehead. This is, as far as we know, the only security mechanism where adding more computing power makes exactly zero difference.

Three Design Principles

Asymmetric difficulty. Drawing on a photo takes you three seconds and is mildly entertaining. For a state operative, the same action carries existential consequences. This gap cannot be closed by better algorithms. It is structural. It is permanent. It is also pretty funny.

Unforgeable intent signal. The act of defacement IS the proof. There's no token to steal, no score to manipulate. You either drew on the dictator or you didn't. Delegating it to a bot defeats the entire purpose, because the security model requires a human willing to do the thing. It's beautifully simple in a way that makes security researchers uncomfortable.

Jurisdiction-locked challenges. Different threats get different portraits. Worried about DPRK operations? The challenge shows North Korean leadership. Russian threat actors? Putin appears. The system maps challenges to specific regimes so the jurisdictional barrier is precise. The lock fits the key.
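To make the "lock fits the key" idea concrete, here is a minimal sketch of jurisdiction-locked challenge routing. Every name in it (`PORTRAIT_CHALLENGES`, `select_challenge`, the challenge identifiers) is illustrative, not part of any real EVANDALIZE API:

```python
# Hypothetical sketch: map a suspected threat origin to the portrait
# whose defacement is criminalized in that jurisdiction.
# All names and identifiers here are made up for illustration.

PORTRAIT_CHALLENGES = {
    "dprk": "kim_jong_un_portrait",   # DPRK operations -> North Korean leadership
    "ru": "putin_portrait",           # Russian threat actors -> Putin
    "cn": "cn_leadership_portrait",   # Chinese APT groups -> party leadership
}

# Fallback when the suspected origin isn't one we have a lock for.
DEFAULT_CHALLENGE = "generic_leadership_portrait"

def select_challenge(suspected_origin: str) -> str:
    """Return the challenge whose jurisdictional barrier matches the threat."""
    return PORTRAIT_CHALLENGES.get(suspected_origin.lower(), DEFAULT_CHALLENGE)
```

The point of the lookup is precision: the barrier only works if the portrait shown is one the suspected operative's own legal system punishes them for defacing.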

This Is Real Security Research Disguised as a Bit

We get it. "Draw on the dictator" sounds like a joke. It is a joke. It is also a genuine advance in verification methodology. The academic term would be "compliance-based adversarial verification" but we think "EVANDALIZE" is funnier and easier to put on a t-shirt. Read the API docs or visit the dashboard to integrate it. We promise the code is more serious than the branding.