In the wake of the USA FREEDOM Act of 2015, Congress attempted to recalibrate the balance between national security and civil liberties after more than a decade of expansive surveillance conducted under the USA PATRIOT Act. That recalibration offers a useful lens for understanding today’s growing conflict between the federal government and frontier artificial intelligence developers—particularly as it relates to mass data collection, transparency, and undefined claims of national security risk.
A recent federal lawsuit, Anthropic v. Department of War, underscores how familiar this moment feels to anyone who remembers the post‑9/11 surveillance era. Then, as now, technological capability raced ahead of statutory guardrails, and the government asserted broad national security justifications without clearly articulating their scope or limits.
The PATRIOT Act Era: Speed First, Oversight Later
After September 11, 2001, the PATRIOT Act dramatically expanded the federal government’s surveillance authorities. One of the most controversial outcomes was the bulk collection of Americans’ telephone metadata under Section 215 of the PATRIOT Act, which amended FISA’s business records provision—collection that occurred without individualized suspicion. Oversight was limited, secrecy was the norm, and judicial review largely occurred within the closed confines of the FISA Court.
By 2015, bipartisan concern had crystallized around the idea that bulk collection itself—not merely abuse—posed a structural threat to civil liberties. The USA FREEDOM Act ended bulk telephony metadata collection, replacing it with a targeted system that required the government to demonstrate a specific nexus and reasonable suspicion before seeking records. Transparency obligations were expanded, and adversarial participation in the FISA Court was modestly increased.
The core lesson was not that surveillance is illegitimate—but that scale matters, and unchecked aggregation can itself become the harm.
Artificial Intelligence and the New “Bulk Collection” Problem
Large language models invert the old surveillance model. Instead of collecting raw records first and analyzing later, AI systems are trained on massive datasets that already encode patterns, relationships, and inferences about human behavior. As the complaint in Anthropic v. Department of War repeatedly emphasizes, these systems are uniquely capable of accelerating and automating analysis at a scale never contemplated by existing surveillance frameworks.
This creates a functional parallel to the pre‑FREEDOM Act world:
- Bulk phone records → bulk trained representations
- Metadata analysis → automated inference
- Closed‑door oversight → opaque executive determinations
The concern articulated in the complaint is not that AI models are being used today for unlawful surveillance, but that their architecture makes mass surveillance trivially scalable tomorrow, absent meaningful constraints.
Undefined “National Security Threats” and Executive Discretion
A striking feature of the current dispute is how little specificity accompanies the government’s designation of AI systems as a “national security risk.” The complaint notes that the designation was issued without a technical explanation of how the model poses such a threat, even as the same systems continued to be used for classified and operational purposes.
This echoes the PATRIOT Act era, in which courts and Congress later concluded that the breadth of authority, not the intent of officials, was the problem. When national security claims are undefined, they become elastic—capable of justifying mutually contradictory actions.
The USA FREEDOM Act responded to that elasticity by narrowing discretion:
- Replacing bulk authority with targeted collection
- Requiring articulable standards
- Mandating institutional transparency
Those same design principles are conspicuously absent from today’s AI governance landscape.
The Federal Preemption Problem
Compounding the issue is the current federal posture toward AI regulation. The executive branch’s approach emphasizes rapid adoption and federal primacy, including a moratorium on state‑level AI laws. This resembles the early post‑9/11 environment, where urgency crowded out federalism and experimentation in favor of centralized control.
Historically, however, state experimentation has been a critical incubator for privacy and technology governance. The absence of parallel state safeguards places even greater pressure on federal institutions to self‑regulate—an approach that experience suggests is insufficient.
Irreparable Harm, Then and Now
One unresolved tension in the lawsuit is the claim of irreparable harm. While the complaint emphasizes reputational damage, contract loss, and chilling effects on speech, it does not quantify financial loss with precision. This too has precedent. Courts evaluating post‑9/11 surveillance programs often struggled to measure harm when the injury was structural, probabilistic, or reputational rather than transactional. Yet Congress ultimately recognized that some harms are irreparable precisely because they are systemic.
A Familiar Crossroads
The USA FREEDOM Act did not end surveillance, nor did it eliminate national security risks. What it did was recognize that technological power without procedural limits erodes legitimacy over time.
AI presents the same choice. The question is not whether the government may use advanced tools, but whether it can do so without recreating the very conditions that forced reform a decade ago. If history is a guide, transparency, targeting, and articulated standards are not obstacles to security—they are prerequisites for sustaining it.
