
Power concedes nothing without a demand. It never did and it never will.
Frederick Douglass, “West India Emancipation” speech, 1857
The argument so far has been abstract. This chapter is the opposite of abstract. It is a list of specific decisions being finalized right now, in 2026, that meet the criteria of the previous two chapters. Decisions that affect hundreds of millions or billions of people. Decisions being made without legislation, without public consultation, and without a lever the public can reach in time. Each case stands on its own. Taken together, they are evidence that the structural diagnosis in The Capture of the Corrective Institutions and Democracy Is Jurisdictional. Architecture Is Not. is not a prediction. It is a description of what happened while the attention cycle was looking at something else.
One framing note before the cases. The mechanism (a phone call from a regulator to a payment network, an app specification published by a standards body, a software update shipped before a legislative debate) has been used against targets of every political orientation in the last fifteen years. The mechanism is older than any of the parties that have used it. It does not ask its targets for their voter registration.
The EU age verification app. On April 14, 2026, the European Commission unveiled its Digital Age Verification App. Six member states, including France, Spain, and Denmark, entered pilot phase. The Commission’s framing is that the app is privacy-preserving. A local wallet-style verifier, not a centralized surveillance ledger. That framing is accurate to the current design. The receipt is not about what the app is. It is about what the scaffolding around the app makes possible.
Within forty-eight hours of launch, Paul Moore, a UK-based security consultant, demonstrated a full authentication bypass in under two minutes. His demonstration video surpassed 2.6 million views. Moore’s analysis, corroborated by a separate March 2026 security review, identified a set of architectural choices that should not have shipped. The user-created PIN is encrypted and stored in a local file called shared_prefs. The encrypted PIN is not cryptographically tied to the identity vault holding the verification credentials. The encrypted record is editable. An attacker with physical access to the device can delete the PinEnc and PinIV values from shared_prefs, restart the app, and enter a new PIN. The rate-limiting counter that prevents repeated PIN guessing is stored as a simple integer and can be reset to zero. Biometric authentication is a boolean that can be flipped to false. The issuer component cannot verify that passport verification actually occurred on the device.
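The core finding, that the PIN check is a free-standing, editable record rather than an input to the vault’s key derivation, can be illustrated in miniature. The sketch below is not the app’s code; the class, names, and values are hypothetical, and it only contrasts the two designs:

```python
import hashlib
import hmac
import secrets

# --- Flawed pattern (illustrative only; not the app's actual code) ---
# The PIN check is a free-standing, editable record. Deleting it lets
# an attacker enroll a fresh PIN, because nothing downstream depends
# on the old one.
class EditablePinGate:
    def __init__(self):
        self.stored = None      # stands in for PinEnc/PinIV in shared_prefs
        self.fail_count = 0     # a plain integer, so it is resettable too

    def set_pin(self, pin: str) -> None:
        self.stored = hashlib.sha256(pin.encode()).digest()

    def check(self, pin: str) -> bool:
        if self.stored is None:             # record deleted by attacker:
            self.set_pin(pin)               # app treats it as first run
            return True
        return hmac.compare_digest(
            self.stored, hashlib.sha256(pin.encode()).digest())

# --- Bound pattern: the PIN *derives* the vault key, so there is no
# separate check to delete. A wrong PIN yields a wrong key, not a reset.
def vault_key(pin: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

salt = secrets.token_bytes(16)
assert vault_key("1234", salt) != vault_key("9999", salt)
```

In the bound design there is nothing to delete: forgetting the PIN means losing the key. That is the property the shipped app lacked.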
Facial images extracted from identity documents are saved as unencrypted files that may remain on the device if verification fails. Selfie images used for verification are stored and never deleted, directly conflicting with the app’s public claim that it does not store personal data.
As of April 17, 2026, the European Commission has issued no patch and no public response.
The argument this case supports is not that the app is buggy, though it is. The argument is about scaffolding. The Electronic Frontier Foundation has warned since 2025 that the Commission rushed the app out while creating infrastructure that could be repurposed for other identity checks. Extending the app to verify employment status, criminal history, or immigration status does not require a technical rebuild. It requires a policy decision. The Commission has already announced plans to push national versions of the app into the EU’s digital identity wallets during 2026.
The receipt is not “the EU is building a surveillance ledger.” The receipt is that the scaffolding for a national digital identity infrastructure has been deployed in pilot form, with its first version demonstrating every failure mode cryptographers warned about in the design phase, and that the scaffolding is now in the field before the European Parliament has debated the mission-creep provisions that will determine what the app is eventually allowed to check. The gap between what the app currently does and what the app could be configured to do is one policy memo, issued at an institutional layer the ballot does not reach.
The digital euro. The European Central Bank is proceeding on the assumption that EU co-legislators will adopt the digital euro Regulation during 2026, with a potential first issuance during 2029. On May 5, 2026, the European Parliament’s ECON committee will vote on the ECB’s proposals. The European Council already approved them in December 2025.
Three statements from 2025 establish the design intent. In October 2025, the ECB stated that acceptance of the digital euro would be mandatory, and that payment providers would be required to support the digital euro app. In November 2025, the ECB said the digital euro was needed to combat non-European payment services in the private sector. In December 2025, European Parliament member Aurore Lalucq told the European press: “Let me be clear: anyone who opposes the digital euro is going against the euro and the European Union.”
Each of these statements, read individually, is a normal institutional communication. Read together, they describe the full design. A mandatory-acceptance currency, distributed through infrastructure the ECB requires private payment providers to support, framed so that opposition is a loyalty test against the political project. That is not a neutral payment rail. That is a programmable monetary instrument whose default is identity-tied and whose adoption is not optional. And the loyalty-test framing is the Every System of Control Needs a Moral Story move the book has already named.
The digital euro is not an isolated European phenomenon. Some 134 countries, representing roughly 98% of global GDP, are exploring CBDCs. Eleven have launched one. The digital euro is one specimen of a global infrastructural shift being finalized under central bank authority, without ballot-level review in any jurisdiction building one.
The disagreement about whether the digital euro is desirable is a legitimate political debate. The book’s argument is narrower. A consequential monetary-architecture decision is being finalized at an institutional layer the ballot does not reach, on a timeline no administration will be in office to account for when the consequences arrive.
Debanking without court orders. The cases below rest on the work of named reporters, named courts, and named legislative committees. Glenn Greenwald and Laura Poitras, on the Snowden disclosures. Matt Taibbi and Bari Weiss, on the Twitter Files. The Federal Court of Canada and the Federal Court of Appeal, on the 2022 Emergencies Act invocation. The House Oversight Committee, on Operation Choke Point. The reader does not have to trust the author. The reader can verify.
The Choke Point established the mechanism. This case adds four receipts. Specific, dated, named, and, in one case, court-validated.
In December 2010, the major card networks and money-transfer processors cut off donation processing to WikiLeaks within days of the organization publishing U.S. State Department cables. The blockade was not ordered by any court. No WikiLeaks-affiliated entity was charged with a crime at the time of the cut-off. The networks acted on informal pressure. The blockade remained in place for years; an Icelandic court ultimately ordered the Icelandic acquirer to resume processing in 2013. This is the earliest well-documented modern case of a private payment network being used as an ad hoc judicial instrument against an organization whose speech was politically inconvenient.
In 2013, the U.S. Department of Justice launched what it called Operation Choke Point, coordinated through the Financial Fraud Enforcement Task Force. The mechanism was regulatory pressure, not criminal charges. The Federal Deposit Insurance Corporation issued guidance classifying certain industries as presenting heightened risk, a designation that implied banks maintaining accounts in those industries would face closer regulatory examination. The industries listed included payday lenders, firearms dealers, ammunition retailers, fireworks sellers, and coin dealers. Each was operating legally. None was accused of fraud. The effect was that banks, responding to the implied threat of supervisory scrutiny, terminated accounts with businesses in those categories, without notice and without the businesses having any legal recourse to compel reinstatement. The businesses learned their accounts were closed from their banks, not from any court or government agency.
The House Oversight Committee, after a sustained investigation, released a report in 2014 finding that the program was designed to harm entire industries rather than isolate specific fraudulent actors, and that the Department was using its supervisory relationship with regulated banks to achieve outcomes it had no legal authority to compel directly. The DOJ officially terminated the program in August 2017. The FDIC had already issued revised guidance in 2015 stating that banks should not terminate accounts based solely on industry classification. Neither action restored the accounts that had been closed.
In February 2022, Canadian financial institutions froze approximately 257 accounts of people and businesses involved in the Freedom Convoy protests, holding roughly $7.8 million. The freezes were executed under the federal Emergencies Act, on lists provided by the RCMP. The Canadian Bankers Association later told Parliament that a small number of additional accounts were frozen on banks’ own risk-based reviews, without any RCMP-provided list. In January 2024, Justice Richard Mosley of the Federal Court ruled that the government’s decision to invoke the Emergencies Act fell short of the statute’s requirements and infringed the Charter. He wrote: “governmental action that results in the content of a bank account being unavailable to the owner of the said account would be understood by most members of the public to be a ‘seizure’ of that account.” He found that the failure to require any objective standard be satisfied before the accounts were frozen breached Section 8 of the Charter, and that the breach was not minimally impairing and therefore not justified under Section 1.
In January 2026, the Federal Court of Appeal dismissed the government’s appeal. The three-judge panel concluded that the protests “fell well short of a threat to national security” and that invoking the Emergencies Act was unreasonable and ultra vires. CSIS Director David Vigneault had testified that he supported invoking the Act even though he did not believe the Freedom Convoy met his own agency’s definition of a national security threat. The ruling is court-validated; the bank freezes were found, on appeal, to be the product of an emergency declaration that had no legal basis.
In 2023, NatWest-owned Coutts in the United Kingdom closed the accounts of Nigel Farage. A 40-page internal dossier, prepared for Coutts’ Wealth Reputational Risk Committee, described Farage as “a disingenuous grifter” whose public stances posed a “significant reputational risk.” The dossier became public through a subject access request. NatWest CEO Dame Alison Rose resigned. NatWest paid Farage an undisclosed settlement in 2025. After the episode, the UK Financial Conduct Authority conducted a review and concluded that banks were not primarily closing accounts based on customers’ political views. The review’s methodology, asking the banks themselves whether they had debanked anyone for political reasons, was publicly criticized by consumer advocates and some regulators. Both findings should be cited together. The juxtaposition is more informative than either alone.
The four cases come from three jurisdictions, span more than a decade, and represent directions of political pressure that do not resolve into a single coalition. WikiLeaks: a Democratic administration, informal pressure on private networks, no charges filed. Operation Choke Point: a Democratic administration, regulatory pressure on banks, legal businesses terminated without recourse. Freedom Convoy: a Liberal government, emergency powers, court-validated as unlawful on appeal. Nigel Farage: a private bank acting on reputational grounds, no government order, CEO resigned. The mechanism does not care about the politics of the target. It cares about being operational. Every coalition that has held power in a country with a centralized payment network has eventually used it against whichever target was inconvenient at the moment it was holding the phone.
Chat Control. The European Union’s Chat Control proposal, formally the Regulation to Prevent and Combat Child Sexual Abuse, has been reintroduced under various names and redrafts since 2022. Each draft has required some form of client-side scanning: an obligation to build into every messaging app the capacity to read the user’s messages before they are encrypted.
On March 26, 2026, the European Parliament voted 311–228 to reject the extension of the Chat Control 1.0 ePrivacy derogation, the legal mechanism by which Google, Meta, Microsoft, and TikTok had been voluntarily scanning private messages for child sexual abuse material. The derogation expired April 3, 2026.
The technical data on voluntary scanning is worth recording. Reports dropped fifty percent between 2022 and 2025. Only thirty-six percent of new reports originate from chat scanning; the rest come from hosted-content scanning. The false-positive rate on automated image assessment is thirteen to twenty percent. Germany’s federal police (BKA) found that nearly half the reports received were criminally irrelevant. Among German suspects flagged, roughly forty percent were minors themselves, often engaged in consensual sexting without any criminal intent.
What the receipt documents is narrower. The voluntary scanning did not produce a reliable signal. The agencies receiving the reports say it generates more noise than signal. The Commission is continuing to pursue a mandatory version of the same mechanism.
Patrick Breyer, formerly a Member of the European Parliament, has described the current trilogue text as a back-door revival. The new draft obliges providers to take “all appropriate risk mitigation measures” to ensure safety, wording Breyer argues effectively introduces an indirect obligation to scan content. “Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever.”
The proposal also introduces mandatory age verification in two places. First, when users want to download certain apps: messaging services, games with integrated chats, and social media platforms classified as high-risk for the distribution of CSAM or grooming. Second, before users can access those services or specific features within them. That linkage hands off directly to the next case. The infrastructure is becoming one infrastructure.
Age assurance and the paper shield. The UK Online Safety Act entered force on July 25, 2025, requiring online platforms with adult content to implement “highly effective” age checks. Penalties for non-compliance include fines of up to £18 million or ten percent of global turnover, and court orders requiring internet service providers to block access to non-compliant services. Australia’s Age Assurance Framework requires platforms to verify the age of their users. Age-assurance regimes in the UK, Australia, the EU Chat Control linkage, and a patchwork of U.S. state statutes all require the same architectural change. Every platform serving users in the jurisdiction must add an identity-verification step before content access. The records are held, usually by third-party vendors.
Compliance requirements generate identity databases that become breach targets: a paper shield that shifts the privacy cost from the regulator who imposed it to the individual user. The 2025–2026 receipts are not rhetorical.
In October 2025, Discord disclosed that attackers had accessed approximately 70,000 users’ government IDs, selfies, and other sensitive information after compromising a third-party customer support system used for age verification. The IDs were held because the age-verification regime required them.
In February 2026, researchers found that Persona, a major identity-verification vendor used across multiple platforms including Discord, had front-end code accessible on the open internet. Nearly 2,500 files were discoverable on a U.S. government-authorized endpoint. The files revealed that Persona performs 269 distinct verification checks, including facial recognition against watchlists, screening against lists of politically exposed persons, and scanning for “adverse media” across fourteen categories including terrorism and espionage. Users who underwent Persona’s verification to access mainstream platforms were not told that their identities were being run against counter-terrorism and PEP lists in the process.
The Proton analysis of the Discord breach stated the point cleanly: “There has never been any reason to suppose that the uniquely sensitive age verification data would be immune from such leaks, a point dramatically proven by this incident.”
The receipt is simple. Every piece of infrastructure compelled by age-assurance, Chat Control, or similar requirements generates a database. Every database becomes a target. Every breach transfers the cost of the compliance regime from the regulator who imposed it to the individual user who was required to submit their identity document. The compliance regime is the privacy breach, deferred.
The receipts are not exhaustive. They are representative. Eight of them, across five domains, each a specimen of a different aspect of the same pattern.
In none of these cases is the ballot the active instrument. In none of them does the corrective institution from The Capture of the Corrective Institutions arrive in time. In each of them, the design decision is already being made, or already made, while the attention cycle is fixed on a different story.
Each case, taken alone, has a defense. The EU app is a buggy first version. The digital euro is legitimate monetary policy; disagree at the ballot. WikiLeaks was a national security matter. Operation Choke Point was an overreach that was, in the end, walked back. The Freedom Convoy was a public-order emergency, and the courts did, in the end, rule against the government. Coutts was a private bank exercising commercial judgment. Chat Control and age assurance are about children. Each defense is plausible against the case it answers.
None survives the case next to it.
The bug-and-patch defense does not reach a 2010 blockade. The national-security defense does not reach a coin dealer. The private-judgment defense does not reach an emergency declaration. The think-of-the-children defense does not reach a programmable currency.
One case is a mistake. Two are a pattern.
Across five jurisdictions, a decade and a half, and every direction of political pressure, this is the design.
The phone, as The Choke Point argued, was always going to get used. The party currently holding it will not always be in office. The next party will inherit it. The question is not who uses it. The question is whether it should exist at all.
To ask whether it should exist is to ask which instrument can make it not exist. The older instruments have bounded reach. A vote moves the laws of a jurisdiction. A regulator moves a corporate entity with a registered office. A court moves what a court can enforce. Media moves what attention will hold. Each does real work. None reaches the layer where the rails themselves are specified.
A published specification is a different kind of object. Once published, it cannot be unpublished. Cryptographic primitives are statements about mathematics. Verifiable computation produces a result a skeptic can check without trusting the person who ran it. These objects do not draw their reach from a jurisdiction. They draw it from being specifications.
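The property named here, a result a skeptic can check without trusting whoever produced it, exists in its simplest form as a hash commitment. A minimal sketch, with invented text and values for illustration:

```python
import hashlib

# A publisher commits to a specification by publishing its digest.
# Anyone can then verify the claim by recomputation alone; no
# jurisdiction or authority is involved at any step.
spec_text = b"protocol v1: messages are encrypted end to end"
published_digest = hashlib.sha256(spec_text).hexdigest()

def skeptic_verifies(text: bytes, digest: str) -> bool:
    # Recompute the digest locally and compare against the published one.
    return hashlib.sha256(text).hexdigest() == digest

assert skeptic_verifies(spec_text, published_digest)
assert not skeptic_verifies(b"protocol v2: scanning enabled", published_digest)
```

Verifiable computation generalizes this from “the text matches” to “the program was run correctly,” but the reach comes from the same place: mathematics, not a registered office.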
The receipts above describe a single layer at which identity, payment, memory, and speech are being fused. The mechanism is global because the rails are global. The response that can reach the mechanism has to operate at the layer the mechanism does. That is a description of where the reach is, not a moral claim about which lever is best.