Your Data
Surveillance, Convenience, and the Cost of "Free"
Previously:
In the introduction article, I explained the structure of the articles to come. In the second article, “The Dollar, The Chessboard”, I set the world stage macroeconomically and geopolitically. Now we talk about privacy.
Before we get into any of it, I want to address something.
You’ve heard this before, probably from someone who said it with full confidence and zero self-awareness: “Well, if you have nothing to hide, you shouldn’t be worried.”
It sounds reasonable on the surface. It’s not.
The argument assumes three things simultaneously: that the people collecting your data are permanently trustworthy, that the rules governing what they can do with it never change, and that you can perfectly predict which parts of your life will someday matter to someone with power over you. That’s a lot to stake your privacy on.
A person fleeing an abusive partner has nothing to hide. A whistleblower exposing corporate fraud has nothing to hide. A small business owner whose client list, pricing strategy, revenue patterns, and vendor relationships are sitting in a database somewhere has nothing to hide. The argument collapses the moment you ask: trustworthy to whom, and for how long?
Your data isn’t just a snapshot of who you are today. It’s infrastructure for whatever decisions someone else might make about you tomorrow.
The Architecture of Knowing
Most people think about data privacy in terms of what they consciously share — what they post, what they fill out, what they agree to. That’s a fraction of the picture.
The data economy runs on passive collection: where you go, how long you stay, what you look at before you close the tab, what your device is doing while you’re not using it. Every app, browser, ISP, and platform is a node in a network that aggregates behavioral signals and packages them for anyone willing to pay — or subpoena. For individuals this looks like targeted ads and eerily accurate recommendations. For businesses, it’s more consequential: your client acquisition patterns, your pricing decisions, your financial position, your vendor relationships — all of it exists in systems you didn’t design, governed by rules you didn’t write.
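To make passive collection concrete, here is a minimal sketch of the kind of record a third-party tracker can assemble from a single page load, before you have typed anything. The field names, values, and fingerprinting scheme are illustrative assumptions, not any specific vendor's schema:

```python
# Hypothetical sketch: what one page view can hand to a tracker you never
# interacted with. All fields and values below are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class PageViewEvent:
    ip_address: str          # reveals rough geolocation and your ISP
    user_agent: str          # browser, OS, device class
    referrer: str            # the page that sent you here
    url: str                 # what you are looking at right now
    dwell_seconds: float     # how long before you closed the tab
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def fingerprint(self) -> str:
        """A stable pseudo-identifier: no cookie or login required."""
        raw = f"{self.ip_address}|{self.user_agent}"
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

event = PageViewEvent(
    ip_address="203.0.113.7",
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    referrer="https://news.example.com/article",
    url="https://shop.example.com/pricing",
    dwell_seconds=42.5,
)
print(event.fingerprint())  # same device, same ID, across unrelated sites
```

The fingerprint() method is the uncomfortable part: two requests from the same device can be linked across unrelated sites with no cookie, no login, and no participation from you.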
“Free” tools are not free. They are exchanges where you pay with information instead of money.
The question is whether you understand what you’re trading and what it’s worth to the people receiving it.
Your Government Is Already Selling Your Data
Here’s one most people don’t know, and it lands hard when they do: your state DMV is probably selling your personal information right now.
Legally.
Under a federal law written in 1994.
The Driver’s Privacy Protection Act (passed after a stalker obtained an actress’s home address from California DMV records and murdered her) was intended to protect driver data. What it actually did was create a list of “permissible uses” that opened a back door to a commercial data market. Debt collectors, private investigators, insurance companies, data brokers, and marketing firms all qualify. The data being sold includes your name, address, date of birth, phone number, email address, and vehicle information — the exact details you had no choice but to provide to get a license or register a car.
The numbers are not trivial. Florida made $77 million in a single year selling driver records. Michigan brought in $81 million. New York earns roughly $58 million annually. California pulled in approximately $50 million. Only three states — Delaware, Wisconsin, and Wyoming — offer any meaningful opt-out. In most of the country, you cannot stop it.
The data doesn’t just go to insurers. Companies like LexisNexis and Experian buy it in bulk and merge it with everything else they hold — social media activity, cell-tower location pings, purchase history — to build composite profiles that get resold downstream to marketers, law firms, and technology companies. What starts as a government record becomes private intelligence. And the pipeline has already been misused: DMVs in North Carolina, Virginia, New Jersey, and Florida have acknowledged instances where buyers were cut off after abusing access. The “fix” in most cases was updated paperwork, not structural change.
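A simplified sketch of what that merging looks like in practice. Every record and identifier below is invented, and real brokers operate at the scale of hundreds of millions of records, but the join logic is this mundane:

```python
# Illustrative sketch of "composite profiling": joining records from separate
# sources on shared identifiers. All data here is invented.

dmv_record = {
    "name": "Jane Doe", "dob": "1980-04-02",
    "address": "12 Elm St", "vehicle": "2019 Honda CR-V",
}
marketing_record = {
    "name": "Jane Doe", "dob": "1980-04-02",
    "purchases": ["home gym", "prenatal vitamins"],
}
location_pings = {
    ("Jane Doe", "1980-04-02"): ["clinic, Tue 9am", "office, weekdays"],
}

# Join key: the identifiers you were required to give the DMV.
key = (dmv_record["name"], dmv_record["dob"])

# The "product" a broker resells: one profile, three sources, zero consent.
composite = {**dmv_record, **marketing_record,
             "movements": location_pings.get(key, [])}
print(composite)
```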
For business owners the implication runs deeper than personal exposure. If your clients’ contact information and addresses can be obtained legally through a $5 DMV request by anyone with a “permissible use,” the assumption that your customer relationships are private is already compromised before you’ve made a single digital mistake.
Palantir: When Data Infrastructure Becomes Policy
Peter Thiel co-founded Palantir in 2003 with a premise that was ahead of its time: that the real power in a data-rich world isn’t the data itself, but the ability to connect it. Named after the “seeing stones” in Tolkien’s legendarium, Palantir builds software that links disparate databases into unified surveillance platforms — and sells that capability to governments.
The client list is not subtle. Palantir’s documented contracts span the CIA, NSA, FBI, DHS, ICE, the Marine Corps, the Air Force, and law enforcement agencies at the state and local level. Its two flagship products — Gotham, built for government and defense, and Foundry, aimed at commercial clients — operate on the same core logic: ingest data from everywhere, find the patterns, surface the targets.
The most concrete current example is ImmigrationOS, a platform built for ICE under a $30 million contract. The system is designed to provide near real-time tracking of individuals prioritized for deportation, pulling from passport records, Social Security files, IRS tax data, license-plate readers, and Medicaid health data. A separate Palantir tool creates map-based dossiers of potential enforcement targets, complete with a “confidence score” on each person’s current address.
Palantir’s position is that it provides the tools, not the decisions. That framing deserves scrutiny. When a system ingests data indiscriminately — regardless of the accuracy of the underlying records — and generates AI-driven enforcement priorities, the architecture is the decision. Civil liberties organizations have raised a consistent concern: a system designed to locate undocumented individuals, built on databases that don’t reliably distinguish between citizen and non-citizen, is a system that can be pointed at anyone.
The pattern is not unique to immigration enforcement. It’s a template: centralize data across government systems, run it through AI, generate actionable profiles, operationalize the results. The policy goals shift with administrations. The infrastructure persists.
When the Algorithm Gets It Wrong
In July 2025, Angela Lipps, a 50-year-old grandmother from Tennessee, was arrested at her home by U.S. Marshals as a fugitive from North Dakota. She had never been to North Dakota. AI facial recognition software examining surveillance footage from a bank fraud case flagged her as a suspect. A detective reviewed her driver’s license photo and social media and confirmed the match. She spent nearly six months in jail before the case unraveled. The police did not interview her before the arrest. No one checked her alibi. No one verified the physical discrepancies between Lipps and the actual suspect.
This is not an isolated case. A Washington Post investigation documented at least eight Americans wrongfully arrested after facial recognition matches, finding that in each instance investigators skipped fundamental verification steps that would have cleared the suspect before arrest. The technology’s own vendors attach explicit caveats stating that results are “indicative and not definitive” and require further investigation before any action is taken. In at least five of seven wrongful arrest cases reviewed by the ACLU, police had received those explicit warnings — and made the arrests anyway.
Facial recognition systems are also significantly less accurate for women and people of color. The error rate is not a uniform risk. It is a risk that falls disproportionately on specific populations, which means the people most likely to be wrongfully detained are also the people least likely to have the resources to fight it quickly.
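Some back-of-the-envelope arithmetic shows why “indicative and not definitive” is not a throwaway caveat. The numbers below are assumptions chosen for illustration, not any vendor's published rates:

```python
# Back-of-the-envelope sketch of the base-rate problem, using assumed numbers.
# Even a matcher with a very low false-match rate produces mostly wrong hits
# when it searches a large database for one person.

database_size = 10_000_000      # assumed: faces in the gallery
false_match_rate = 0.0001       # assumed: 0.01% per comparison
true_match_rate = 0.99          # assumed: finds the real person 99% of the time

expected_false_hits = (database_size - 1) * false_match_rate   # ~1,000
expected_true_hits = 1 * true_match_rate                        # ~1

odds_hit_is_right = expected_true_hits / (expected_true_hits + expected_false_hits)
print(f"Expected false hits per search: {expected_false_hits:,.0f}")
print(f"Chance a given hit is the right person: {odds_hit_is_right:.2%}")
```

Under these assumptions, a single search surfaces roughly a thousand innocent people for every real suspect. That is why vendors demand independent verification, and why skipping it produces cases like Lipps's.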
The relevance here goes beyond criminal justice. These systems are being integrated into airports, border crossings, and public infrastructure. You interact with them without consent or notice. Your face, cross-referenced against a database you didn’t opt into, becomes a data point in an automated decision process that can detain you for months before anyone asks whether the algorithm was right.
The architecture is the decision.
And when the architecture is wrong, real people spend real time in jail for it.
When Data Becomes the Basis for Designation
In March 2026, a federal jury in Fort Worth convicted eight people on terrorism charges stemming from a July 4, 2025 protest outside an ICE detention facility in Alvarado, Texas. One participant shot and wounded a police officer. That person was convicted of attempted murder.
The other eight were convicted of “providing material support to terrorism,” a charge rooted in what they wore. The prosecution argued that dressing in all-black clothing at the protest constituted material support for a terrorist organization. The terrorism designation itself had been created by executive order months after the protest occurred, then applied retroactively to the charges.
I am not here to tell you whether these individuals are guilty of the charges against them. That is for courts and lawyers and history to sort out. What I am here to tell you is what this case reveals about the data and surveillance architecture underneath it.
To bring terrorism charges, the government had to demonstrate affiliation and coordination. That case was built substantially on digital evidence: encrypted messaging apps, communications metadata, social media activity, and “radical pamphlets” seized in home raids. The prosecution specifically noted that defendants used Signal — an encrypted messaging app — as evidence of coordination. The use of a privacy tool was presented as evidence of criminal intent.
This is the precedent worth watching. When the legal definition of a “terrorist organization” can be applied by executive order to a loosely affiliated movement with no formal structure, and when the use of encryption is presented as evidence of guilt, the data you generate — your communications, your location, your associations — takes on a different weight. The legal framework governing how that data can be used against you is not static. It is being rewritten right now, and the infrastructure to act on it has been in place for years.
The Local Version Is Already Here
You don’t need Palantir to experience data-driven overreach. It’s already operating at the local level, quietly, through tools most people have never thought about.
License plate readers — fixed cameras mounted on patrol cars, utility poles, and highway infrastructure — photograph every plate they encounter and log the time, date, and location into a searchable database. In most jurisdictions, there is no requirement that a car be associated with a crime for its movements to be recorded and retained. The data is simply collected, continuously, and stored.
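To see how low the technical bar is, here is a toy version of such a database, with invented plates, cameras, and coordinates. Reconstructing a person's daily movements takes one query:

```python
# Hypothetical sketch of how little it takes to turn raw plate reads into a
# movement history. Schema and data are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE plate_reads (
        plate      TEXT,
        camera_id  TEXT,
        lat        REAL,
        lon        REAL,
        seen_at    TEXT   -- ISO 8601 timestamp
    )
""")
db.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?, ?, ?)",
    [
        ("ABC1234", "cam-highway-12", 36.16, -86.78, "2025-03-03T08:02:00"),
        ("ABC1234", "cam-downtown-4", 36.17, -86.79, "2025-03-03T08:31:00"),
        ("ABC1234", "cam-highway-12", 36.16, -86.78, "2025-03-03T17:45:00"),
    ],
)

# One query: everywhere this car has been seen, in order. Notice there is
# no warrant column and no "associated crime" column. The schema doesn't
# need one, because the collection doesn't require one.
for row in db.execute(
    "SELECT seen_at, camera_id FROM plate_reads "
    "WHERE plate = ? ORDER BY seen_at", ("ABC1234",)
):
    print(row)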
Documented abuses include officers using license plate reader databases to track ex-partners, locate individuals without a warrant or probable cause, and conduct surveillance on people who have committed no crime. In multiple states, the systems have been found to have weak access controls — meaning the data is not only being misused by bad actors inside law enforcement, but is also potentially accessible to anyone who can breach a system that was not designed with security as a priority.
The through-line from the DMV selling your address to a private investigator, to Palantir aggregating your tax and health records, to a license plate reader logging your daily commute, is the same: data collected for one stated purpose, retained indefinitely, used in ways you were never told about and did not consent to. The scale changes. The mechanism doesn’t.
Starlink and the Problem of Private Infrastructure
Most people who use Starlink love it. That’s by design, and the product genuinely delivers. In remote areas, disaster zones, and countries with failing telecoms infrastructure, Starlink provides something that felt impossible a decade ago: fast, reliable internet from almost anywhere on earth. The hardware is elegant, the setup takes fifteen minutes, and the reviews are overwhelmingly positive. When something works that well, skepticism feels ungrateful.
But professional skepticism isn’t about distrust. It’s about following incentives.
There is an economic principle called Rational Choice Theory — the foundational assumption that actors will make decisions that maximize their own benefit given available options. A simpler version: assume the incentive, assume the behavior. You don’t need to believe in bad intentions to apply it. You just need to ask: what is it beneficial for this actor to do, and what stops them from doing it?
Apply that to Starlink. SpaceX operates a global satellite network with physical coverage of nearly every inhabited place on earth. It has the ability to grant or restrict connectivity by geography, by user, and by use case. It collects location data, usage patterns, dish orientation, and network performance from every active terminal. In January 2026, it updated its privacy policy to include training AI models on that data — including sharing with third-party collaborators for their own purposes. The owner of this network simultaneously controls the world’s largest social media platform, holds significant federal government contracts, and has documented personal relationships with heads of state across multiple continents.
Starlink is not a telecom company anymore. It is a political device with a subscriber base.
For the first time in history, a single private actor controls infrastructure capable of influencing the outcome of wars, the flow of information across elections, and the geopolitical positioning of nation-states. This is why I keep connecting intelligence, human sentiment, the dollar, and geopolitics throughout this series — because they are not separate topics. They are the same system viewed from different angles, and Starlink is a thread that runs through all of them.
The Ukraine war made this structural reality visible. What began as emergency connectivity — SpaceX activating service across Ukraine within days of the 2022 invasion — became critical military infrastructure. Ukrainian reconnaissance units used Starlink to relay drone imagery to artillery, collapsing the time between target identification and engagement. In 2022, Musk declined to extend Starlink coverage to the Crimean coast, thwarting a planned Ukrainian drone attack on Russian naval vessels. That was not a policy decision or a regulatory action. It was a commercial product decision, made unilaterally, that shaped a military outcome. No democratic process, no chain of command.
Separately, researchers from the University of Maryland found they could track the movements of military personnel in Ukraine and Gaza simply by querying Apple’s Wi-Fi positioning system, which maps the location of Starlink terminals as a byproduct of normal operation. Consumer hardware, doing exactly what it was designed to do, became a real-time military intelligence tool. The terminal reveals the position. The position reveals the person.
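The mechanism is worth sketching because it involves no hacking at all. The toy “service” below is a dict standing in for a real Wi-Fi positioning database; the actual protocols (Apple's, Google's) are not reproduced here, and the BSSID and coordinates are invented:

```python
# Conceptual sketch only. Real Wi-Fi positioning services keep crowdsourced
# maps of access-point BSSIDs, built from phones reporting what they see
# nearby. The dict below stands in for that database.

wps_database = {
    "aa:bb:cc:dd:ee:ff": (48.51, 32.26),  # invented BSSID and coordinates
}

def locate_bssid(bssid: str) -> tuple[float, float] | None:
    """Where did the service last see this access point?
    The caller never has to be anywhere near the hardware."""
    return wps_database.get(bssid)

# A Starlink terminal's router has a BSSID like any other access point.
# Query it today, query it next week: if the coordinates changed, the
# terminal moved, and so did whoever it belongs to.
print(locate_bssid("aa:bb:cc:dd:ee:ff"))
```

The University of Maryland researchers did essentially this at scale: query the positioning service, filter for Starlink hardware, and watch the map update as terminals moved.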
For civilians and business owners, the stakes look different but the mechanism is the same. Your terminal’s location is known. Your usage patterns are logged. Your data can now train third-party AI models. The question isn’t whether you trust any particular person today — it’s whether you’d bet your business on trusting any single private actor with that much visibility into your operations, indefinitely, across political administrations you cannot predict.
Rational Choice Theory says: if it is beneficial to use the leverage, the leverage will be used. It already has been.
The Pattern Beneath the Examples
The DMV selling your address. Palantir aggregating your health records for enforcement. A grandmother in Tennessee spending six months in jail because an algorithm said so. Protesters convicted partly on evidence of using an encrypted messaging app. A license plate reader logging your commute without cause. A satellite network making unilateral military decisions. These are not isolated stories. They are different expressions of the same structural condition.
Data is being centralized. The systems holding it are increasingly intertwined with government power. The rules governing what can be done with it are being rewritten faster than most people are paying attention to. And the infrastructure to act on it — the databases, the AI, the surveillance tools — was built before the policies governing its use were settled.
This is not a warning about what might happen someday. It is a description of what already exists. The question is not whether you have something to hide. The question is whether you understand the environment you’re operating in, and whether you’re making intentional choices about your exposure within it.
The answer to that question is not paranoia. It is not going off-grid or abandoning the digital economy. It is, and has always been, professional skepticism — the same discipline you’d apply to any other risk your business carries. Understand what you’re working with. Understand what you’re giving up. Make informed decisions about what’s worth protecting.
So, just like in my first article, here are five areas worth planting a flag on now. We’ll go deep on each of them in Article 4.
Your email. Gmail and Outlook scan the content of your messages. Everything you send is readable by the provider and subject to government request. If you handle client information, financial data, or anything sensitive in your business, that information is sitting in a database owned by someone else.
Your browser and search. Chrome and Google Search build a behavioral profile on you over time: every query, every site you visit, and how long you linger, all logged. The search engine that feels neutral is one of the most sophisticated data collection tools ever built.
Your phone. Location data is sold by carriers and apps. Microphone and camera permissions get granted once and rarely audited again. Your phone is a sensor array that travels everywhere you go — and most of its permissions were never meant to be permanent.
Your financial tools. Payment processors, banking apps, and accounting software hold a complete picture of your revenue, spending, clients, and vendors. Your entire financial pattern is a data asset — one that lives on someone else’s servers under terms of service you agreed to and probably didn’t read.
Your business communications. Slack, Teams, Zoom — anything running on a corporate platform is owned by that platform, not you. Your client conversations, your internal strategy discussions, your financial decisions: all of it is sitting on someone else’s infrastructure, governed by their rules.
Up Next
In Article 4, Digital Hygiene, we go deep on the practical side of all of this: the specific tools that reduce your exposure, the habits that protect your business and your clients, and how to think about which risks are worth managing first.
The surveillance infrastructure is real. So are the options. We’ll get into both.