"Any Lawful Purpose": How a Three-Word Phrase Could Open the Door to AI-Powered Mass Surveillance

The Pentagon's new AI deal with OpenAI relies on laws that haven't kept pace with technology — and a government that has repeatedly shown it will exploit every legal loophole available.

What Just Happened

On February 27, 2026, the Trump administration designated Anthropic — one of the world's leading AI companies — a national security "supply chain risk." This designation, typically reserved for foreign adversaries like Chinese telecom companies, was applied to an American company for the first time in U.S. history. The reason? Anthropic refused to remove two explicit prohibitions from its Pentagon contract: no mass domestic surveillance of Americans, and no fully autonomous weapons.

Hours later, rival OpenAI announced it had secured the very contract Anthropic walked away from. CEO Sam Altman claimed the deal contains the same protections Anthropic demanded. But the contract language tells a different story. Where Anthropic insisted on explicit contractual prohibitions that could not be reinterpreted, OpenAI agreed that the Department of War may use its AI "for all lawful purposes, consistent with applicable law."

That phrase — "any lawful purpose" in all but name — is doing an enormous amount of work. And if you understand what's currently legal under American surveillance law, it should terrify you.

The Laws That Are Supposed to Protect You Don't

The entire premise of OpenAI's contract is that existing U.S. law will prevent abuse. This assumption collapses under even basic scrutiny.

Right now, in 49 out of 50 states, law enforcement agencies can purchase your precise location data — tracked every few minutes by apps on your phone — from commercial data brokers without a warrant, without a court order, and without your knowledge. Only Montana has closed this loophole. The government doesn't need to compel tech companies to hand over your data. It simply buys it on the open market, sidestepping the Fourth Amendment's central protection: the requirement that police obtain a warrant from a judge before invading your privacy.

The Defense Department has purchased location data collected from prayer apps to monitor Muslim communities. Police departments have purchased data to track racial justice protesters. The FBI conducted 3.4 million warrantless searches of Americans' communications in a single year through FISA Section 702's "backdoor search" loophole. They used it to surveil Black Lives Matter protesters, January 6 suspects, journalists, members of Congress, and 19,000 political donors.

All of this is currently legal. Every bit of it falls under "lawful purpose."

Now imagine giving an AI system — one that can process, cross-reference, and analyze data at a scale no human team could match — access to this same legal framework.

What AI Changes About Surveillance

Before AI, mass surveillance had a natural limiting factor: human attention. Even if the government collected vast amounts of data, someone had to actually look at it. Analysts could only review so many records, listen to so many calls, watch so many camera feeds.

AI eliminates that bottleneck entirely.

As Anthropic itself warned in the blog post that got them blacklisted by the federal government: powerful AI makes it possible to assemble scattered, individually innocuous data into a comprehensive picture of any person's life — automatically and at massive scale. Your location data, your browsing history, your purchase records, your social media activity, your associations, your political donations, your medical visits, your religious practice — each piece seems harmless alone. Together, processed by an AI system, they create a portrait more intimate than any wiretap.

Under current law, the government can legally purchase most of this data. Under the "any lawful purpose" standard in OpenAI's contract, there is no explicit contractual barrier preventing the Pentagon from feeding it into an AI system designed to find patterns, flag anomalies, and identify targets.

This isn't science fiction. This is what the technology can do right now.

The FISA Time Bomb

There's a critical deadline approaching that most Americans don't know about. Section 702 of the Foreign Intelligence Surveillance Act — the law that authorizes the NSA to collect communications of foreigners abroad but routinely sweeps up millions of Americans' emails, texts, and calls as "incidental collection" — is set to expire on April 20, 2026. That's less than two months from now.

Congress must decide whether to reauthorize this surveillance authority, reform it to require warrants for searching Americans' data, or let it lapse. A federal court ruled in early 2025 that these warrantless searches of Americans' communications violate the Fourth Amendment. But the government is appealing, and the program continues.

The timing is not coincidental. The Pentagon is securing AI capabilities just as Congress debates the future of its most powerful surveillance authority. If Section 702 is reauthorized without meaningful reform — and AI tools are now deployed in classified systems — the combination creates a surveillance apparatus that previous generations could not have imagined.

The FBI has already proven it cannot be trusted with this power. It violated its own internal rules hundreds of thousands of times. The FISA Court discovered that FBI personnel used an "advanced filter function" that conducted additional searches beyond those that were logged and reviewed. The Privacy and Civil Liberties Oversight Board, the independent body designed to prevent exactly this kind of abuse, has been effectively gutted.

The Punishment for Saying No

Perhaps the most chilling aspect of this story isn't the contract itself — it's what happened to the company that refused to sign it.

Anthropic didn't refuse to work with the military. It had an existing $200 million Pentagon contract. It supported all lawful uses of AI for national security. It drew the line at exactly two things: mass domestic surveillance and fully autonomous weapons. For that, the President of the United States posted on social media threatening to use "the Full Power of the Presidency" to force compliance, with "major civil and criminal consequences to follow."

The Secretary of War designated the company a "supply chain risk" — a tool designed to protect the military from foreign adversaries — and ordered every defense contractor to cut all business ties with Anthropic. This is a company valued at $380 billion that had just closed a $30 billion funding round. The message to every other technology company in America was unmistakable: comply or be destroyed.

Legal experts across the political spectrum have questioned whether this designation is legally sound. The statute requires that the Pentagon exhaust less intrusive alternatives first. Former Biden administration national security officials have noted that the designation covers only Department of War contracts — it does not reach contractors' use of Anthropic's products for non-military customers. But as one analyst observed: even if Anthropic wins in court, it will take years, and in the meantime, every corporate lawyer at every company with Pentagon exposure will ask the same question — is standing up to the government worth the risk?

That's not a question that should exist in America.

"Lawful" Is Not the Same as "Right"

History is littered with examples of government actions that were perfectly legal and deeply wrong. Japanese internment was legal. COINTELPRO — the FBI's systematic campaign to surveil, infiltrate, and disrupt civil rights organizations — was conducted under existing legal authorities. The NSA's bulk collection of Americans' phone records operated for over a decade before the public even knew it existed, and the government maintained it was legal the entire time.

The question has never been whether the government will follow the law. The question is whether the law is adequate to prevent abuse. And on AI-powered surveillance, the answer is clearly no.

There is no comprehensive federal AI regulation. There is no federal data privacy law. The Electronic Communications Privacy Act, which governs much of digital surveillance, was written in 1986 — before the internet as we know it existed. The data broker loophole allows the government to buy information it would need a warrant to compel. FISA Section 702 has been repeatedly abused despite statutory safeguards. And now, President Trump has signed an executive order directing the Attorney General to challenge state AI regulations deemed to obstruct national competitiveness.

In this legal environment, "any lawful purpose" is a blank check.

What You Can Do

Awareness is the first step, but it cannot be the last. Here is what matters right now:

Contact your congressional representatives about FISA Section 702 before its April 20, 2026 expiration. Demand that any reauthorization include a warrant requirement for searching Americans' communications. The Fourth Amendment Is Not For Sale Act would close the data broker loophole — ask your representatives to support it.

Pay attention to how the Anthropic supply chain risk designation plays out in court. The precedent it sets will determine whether the government can punish any American company that refuses to build surveillance tools.

Support organizations fighting for digital rights and surveillance reform — the Electronic Frontier Foundation, the ACLU, the Brennan Center for Justice, and the Project On Government Oversight have all been doing critical work on these issues.

Talk to people. Share what you learn. The greatest asset any surveillance state has is public indifference. The technology is moving faster than the law, faster than public understanding, and faster than the oversight mechanisms designed to prevent abuse. The only force that has ever corrected that imbalance is an informed public that demands accountability.

This is still America. But keeping it that way requires us to pay attention — and to act — before the systems are built, deployed, and classified beyond our reach.

Sources: OpenAI "Our agreement with the Department of War" (2/28/26) · Anthropic "Statement from Dario Amodei" (2/26/26) · Fortune (2/28/26) · NPR (2/27/26) · ABC News (2/28/26) · Brennan Center for Justice · Project On Government Oversight · Electronic Frontier Foundation · Congressional Research Service Report R48592 · EFF House Judiciary Committee Testimony (12/11/25)

© 2025 Renegade Entertainment & Media  
