Chaos Disguised as Control: How AI Surveillance Systems Are Failing and Harming Vulnerable People
- Gonzo
From facial recognition checkpoints in occupied Palestine to immigration enforcement in American cities, the same surveillance technologies promise precision and safety. What they deliver is something far more unpredictable — and far more dangerous.
An analysis drawing on Amnesty International, NIST, and James Gleick's Chaos: Making a New Science
Imagine passing through a gate near your home. A camera scans your face. A computer makes a decision. And suddenly, without explanation, you are not allowed through — not allowed to return to your family. This is not a hypothetical. It is a daily reality for many Palestinians living under Israeli military occupation, where automated facial recognition systems govern movement in ways that feel, to those subject to them, entirely arbitrary.
Now consider a different scene: an undocumented immigrant in an American city posts a photo on social media. An algorithm flags the post. A risk score rises in a federal database. An enforcement action follows. The connection between that one photograph and the consequences that unfold afterward is real — but almost impossible to predict in advance.
These two scenarios, separated by thousands of miles and very different political contexts, share something essential: they are both products of AI surveillance systems that claim to bring order and precision to complex human situations. And they both illustrate why that claim is, at its core, an illusion.
To understand why, it helps to borrow a concept from an unexpected place: mathematics.
What chaos theory tells us about surveillance
In 1987, science writer James Gleick published Chaos: Making a New Science, a landmark book explaining a new field of mathematics. The central insight of chaos theory is deceptively simple: in complex systems, tiny changes at the beginning can produce wildly unpredictable outcomes later. The famous image is a butterfly flapping its wings in Brazil and setting off a chain of atmospheric events that eventually produces a tornado in Texas.
Gleick was writing about weather systems and fluid dynamics. But his framework applies with striking force to the AI surveillance tools that governments are increasingly deploying against vulnerable populations. These systems — fed by vast quantities of data, processed by opaque algorithms, and making consequential decisions about real human lives — are not the orderly, precise instruments their proponents describe. They are, in the language of chaos theory, nonlinear systems sensitive to initial conditions, prone to error amplification, and fundamentally resistant to the kind of predictive control authorities claim to achieve.
In plain terms: small inputs produce large, unpredictable consequences. And the consequences fall hardest on those with the least power to resist them.
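Gleick's standard illustration of this sensitivity is the logistic map, a one-line equation that is fully deterministic yet unpredictable in practice. The sketch below is a minimal demonstration with illustrative parameter and starting values: two trajectories that begin one part in a billion apart become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
R = 4.0
STEPS = 50

def trajectory(x0: float, steps: int = STEPS) -> list[float]:
    """Iterate the logistic map from x0 and return every value along the way."""
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000)  # baseline starting point
b = trajectory(0.400000001)  # perturbed by one part in a billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.2e})")
```

By around step 30 the two runs bear no resemblance to one another, and no refinement of the measurement at step zero changes that qualitative outcome; it only delays the divergence slightly.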
Blue Wolf and Red Wolf: surveillance as a game
The Israeli military's two primary surveillance tools in the occupied West Bank are known as Blue Wolf and Red Wolf. Blue Wolf is a smartphone application used by soldiers to photograph Palestinians and compare their faces against a vast biometric database called "Wolf Pack." The system assigns each person a color-coded status — green for clearance, yellow for further questioning, red for detention — based entirely on automated facial recognition, with no human review of the algorithm's judgment.
Red Wolf goes further. Deployed at checkpoints in Hebron, it scans faces automatically as people pass through, without their knowledge or consent. If the system fails to find a match for someone in the database, that person is denied passage — potentially prevented from returning to their own home.
"They can tell you that your name is not in the database, as simple as that, and then you're not allowed to pass through to your house."
— EYAD, A RESIDENT OF TEL RUMEIDA, HEBRON
The system's design amplifies its potential for harm through an unusual incentive structure. According to Amnesty International's 2023 report, "Automated Apartheid," Israeli soldiers are encouraged through a gamified competition — commanders award prizes to battalions that register the highest number of Palestinians in the database. Testimonies from soldiers stationed in Hebron in 2020 describe Blue Wolf generating rankings based on how many people each unit enrolled.
"You're constantly put into the terrain of no longer treating Palestinians as individual human beings with human dignity. You're operating by a gamified logic in which you will do everything in your power to map as many Palestinian faces as possible."
— MATT MAHMOUDI, AMNESTY INTERNATIONAL'S LEAD RESEARCHER ON AI AND HUMAN RIGHTS, SPEAKING TO THE GUARDIAN
This gamification is not a quirk; it is the system working as designed. More enrollment means more data, which means more coverage, which means more algorithmic decisions, which means more opportunities for chaos to enter. Palestinian poet Mosab Abu Toha was temporarily detained and interrogated after the system mistakenly identified him as affiliated with Hamas. Residents of areas under intensive surveillance, such as Damascus Gate in Jerusalem's Old City and Sheikh Jarrah in East Jerusalem, report avoiding social gatherings and self-censoring their public behavior for fear of being flagged. Activists have stopped attending protests after participants were identified through facial recognition and subsequently harassed.
In areas of the Old City of Jerusalem, surveillance cameras are installed every five meters. The result, documented by human rights organizations, is not security — it is an atmosphere of pervasive fear.
This is the butterfly effect in action. A photograph taken by a soldier becomes a data point. That data point feeds an algorithm. The algorithm makes a decision. A person is denied passage, detained, or harassed. And none of it could have been predicted with certainty at the start.
ICE's AI toolkit: the same logic, a different population
Thousands of miles away, U.S. Immigration and Customs Enforcement (ICE) has assembled a surveillance infrastructure that mirrors the logic of Blue Wolf and Red Wolf — comprehensive data collection, automated pattern-matching, and algorithmic decision-making that shapes the lives of real people in unpredictable ways.
$9.2 million: ICE's 2025 contract with Clearview AI, its largest ever with the facial recognition company
60+ billion: images in Clearview's database, scraped from social media without user consent
30: contractors ICE planned to hire in 2025 to monitor social media around the clock
ICE agents now use an app called Mobile Fortify that allows them to scan faces or fingerprints with a smartphone and match results against multiple federal databases in real time. Beyond facial recognition, the agency relies on Palantir Technologies' platform, ImmigrationOS, which integrates driver's license scans, phone data, social media posts, travel records, tax information, and location data into unified investigative files. In 2025, ICE planned to hire nearly 30 contractors to monitor Facebook, TikTok, Instagram, and YouTube around the clock, turning everyday posts into enforcement leads.
The nonlinearity that chaos theory describes is visible here. A single social media post can trigger an algorithmic flag. That flag generates a permanent database entry. The entry raises a person's risk score. The risk score determines whether someone is detained, deported, or released. A minor digital action — a photo shared, a comment posted — can cascade into consequences that are catastrophic and irreversible.
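The shape of that cascade can be caricatured in a few lines of code. The sketch below is a hypothetical toy model; the field names, weights, and threshold are invented for illustration and are not drawn from ImmigrationOS or any real scoring system. What it demonstrates is the nonlinearity itself: stacking a hard threshold on top of an accumulating score means one marginal input, a single extra flagged post, flips the categorical outcome.

```python
# Hypothetical toy model of a threshold-based risk pipeline.
# All field names, weights, and thresholds are invented for illustration;
# none are taken from any real enforcement system.
ENFORCEMENT_THRESHOLD = 0.75

def risk_score(flagged_posts: int, db_entries: int, location_hits: int) -> float:
    """Fold a few integer signals into one bounded score."""
    raw = 0.15 * flagged_posts + 0.10 * db_entries + 0.05 * location_hits
    return min(raw, 1.0)

def decision(score: float) -> str:
    return "refer for enforcement" if score >= ENFORCEMENT_THRESHOLD else "no action"

# The only difference between these two profiles is one flagged post,
# yet the outcome flips: the nonlinearity lives in the threshold.
before = risk_score(flagged_posts=3, db_entries=2, location_hits=1)  # 0.70
after = risk_score(flagged_posts=4, db_entries=2, location_hits=1)   # 0.85
print(f"before: {before:.2f} -> {decision(before)}")  # no action
print(f"after:  {after:.2f} -> {decision(after)}")    # refer for enforcement
```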
What makes this particularly troubling is the connection between the companies building these systems. Palantir, which supplies ICE's surveillance infrastructure, also works with the Israeli military. The company's board met in Tel Aviv in 2024, and CEO Alex Karp stated that the company's work in the region had "never been more vital." These are not separate experiments in different contexts — they are the same technologies, refined through deployment on one vulnerable population and then sold for use against another.
The problem with the data: bias baked in from the start
In chaos theory, initial conditions matter enormously. A small error at the beginning of a calculation does not stay small — it compounds and amplifies as the system evolves. Biased training data in facial recognition systems works the same way: the errors introduced at the design stage grow larger as the system scales up and makes more decisions.
The evidence for this bias is well-documented and damning. In 2019, the U.S. National Institute of Standards and Technology (NIST) evaluated 189 facial recognition algorithms and found that for one-to-one matching, false positive rates for Asian and African American faces were 10 to 100 times higher than for Caucasian faces, depending on the algorithm. American Indians had the highest false positive rates of any U.S. demographic group. For one-to-many searches — the kind used in law enforcement — African American women had the highest false positive rates of all.
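Simple probability shows why those disparities become unmanageable at scale. The sketch below uses a hypothetical per-comparison false match rate, not any vendor's published figure, and asks how likely at least one false match becomes as the gallery being searched grows.

```python
# Why per-comparison error rates explode in one-to-many search.
# The false match rate below is hypothetical; NIST-reported rates vary
# by orders of magnitude across algorithms and demographic groups.
FMR = 1e-6  # hypothetical probability that a single comparison falsely matches

def p_any_false_match(gallery_size: int, fmr: float = FMR) -> float:
    """Probability that at least one gallery entry falsely matches a probe."""
    return 1.0 - (1.0 - fmr) ** gallery_size

for n in (10_000, 1_000_000, 100_000_000):
    print(f"gallery of {n:>11,}: P(false match) ~ {p_any_false_match(n):.1%}")
# gallery of      10,000: P(false match) ~ 1.0%
# gallery of   1,000,000: P(false match) ~ 63.2%
# gallery of 100,000,000: P(false match) ~ 100.0%
```

A demographic group whose rate is 10 to 100 times higher, the disparity NIST measured, simply reaches near-certain false matches at a far smaller gallery size.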
A 2018 study by AI researchers Joy Buolamwini and Timnit Gebru found an even starker disparity: commercial facial analysis systems misclassified the gender of just 0.8% of light-skinned men, but 34.7% of dark-skinned women, a 43-fold difference.
What a "false positive" means in practice
In a checkpoint system like Red Wolf, a false positive means being denied passage to your own home
In a law enforcement database, it means being flagged as a criminal suspect when you have done nothing wrong
In an immigration system, it can trigger detention or deportation proceedings
In all cases, the burden falls disproportionately on people of color — who are most likely to be misidentified
These are not abstract statistics. In Minnesota, the ACLU sued on behalf of Kylese Perryman, a Black man who was falsely arrested and detained based solely on an incorrect facial recognition match. When the digital rights group Fight for the Future ran photos of 400 members of the UCLA community through facial recognition software, it found 58 false matches with a criminal mugshot database, mostly people of color. An earlier ACLU test using congressional photos wrongly identified 28 members of Congress as criminal suspects, again disproportionately people of color.
Each false match is what chaos theorists call a bifurcation point — a moment where a person's status shifts from ordinary citizen to suspect, not because of anything they did, but because an algorithm, trained on imperfect data, made a decision that cannot be easily undone.
The feedback loop: how surveillance creates the disorder it claims to prevent
Chaotic systems do not just produce unpredictable outcomes — they generate feedback loops that amplify instability over time. The surveillance systems described here are no different. They create a self-reinforcing cycle that, once set in motion, becomes increasingly difficult to control.
1. Pervasive surveillance creates fear among those being monitored — Palestinians near checkpoints, immigrants in American cities, activists aware that their faces and posts are being catalogued.
2. Fear changes behavior. People stop attending protests, stop posting online, stop gathering in public places. They self-censor.
3. Changed behavior creates new data patterns that algorithms interpret as suspicious — unusual movement, sudden silence online, avoidance of previously frequented locations.
4. "Suspicious" patterns trigger more surveillance, intensifying the fear that began the cycle. The system feeds on itself.
Al-Haq, a Palestinian human rights organization, documented this dynamic in a November 2025 submission to the UN Special Rapporteur, describing a "pervasive chilling effect that further segregates and fragments Palestinian society and empties civic space." Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, raised similar concerns about ICE's expanded capabilities: that tools designed for border enforcement would become instruments for silencing dissent.
Israeli soldiers, in testimony to Breaking the Silence, described a related psychological shift: they found themselves thinking of Palestinians not as individual people but as database entries, numbers to be processed. In one account, a resident the soldier had known for years was flagged as risky by the algorithm, and the soldier did not let him pass. The automation had overridden human judgment entirely.
This is the deeper danger of these systems. Human beings have the capacity to notice context, make exceptions, and exercise moral judgment. Algorithms do not. When human discretion is removed from consequential decisions, the chaotic, unpredictable nature of these systems goes unchecked.
The illusion of control — and the case for something better
Gleick's central argument in Chaos is that many systems — weather, ecosystems, economies — are fundamentally unpredictable beyond short time horizons, no matter how much data we collect or how powerful our computers become. Human societies are among the most complex systems that exist. Human behavior is shaped by countless variables, feedback loops, and nonlinear interactions that simply cannot be captured in an algorithm.
And yet the surveillance technologies described here operate on the assumption that they can. That with enough data — enough faces in the database, enough social media posts monitored, enough checkpoints equipped with cameras — they can identify threats, predict behavior, and control populations. This is, in Gleick's terms, a fundamental category error: treating a chaotic system as if it were predictable. Chaos theory's core finding is that a system can be fully deterministic and still defy prediction.
The result is not control. It is the appearance of control layered over a system producing chaos — arbitrary detentions, wrongful arrests, denied passages, cascading harms that no one designed and no one can fully explain. And as these systems scale up, the chaos scales with them. More data does not reduce the unpredictability; it amplifies it.
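Chaos theory even quantifies why more data cannot fix this. For a system with a positive Lyapunov exponent, the useful prediction horizon grows only with the logarithm of measurement precision, roughly t ≈ (1/λ) ln(tolerance / initial error). The sketch below uses an illustrative exponent, not one measured from any real system.

```python
# Prediction horizon for a chaotic system:
#   t ~ (1 / lam) * ln(tolerance / initial_error)
# The exponent and error values are illustrative, not measured.
import math

LAM = 1.0        # illustrative Lyapunov exponent: errors grow e-fold per time unit
TOLERANCE = 1.0  # error scale at which a forecast becomes useless

def horizon(initial_error: float) -> float:
    """Time until a measurement error grows to the tolerance scale."""
    return math.log(TOLERANCE / initial_error) / LAM

for err in (1e-3, 1e-6, 1e-9):
    print(f"initial error {err:.0e}: useful horizon ~ {horizon(err):5.1f} time units")
# 1e-03 ->  6.9, 1e-06 -> 13.8, 1e-09 -> 20.7
```

A millionfold improvement in measurement, from 1e-3 to 1e-9, only triples the horizon. More data attacks the logarithm, which is exactly why it cannot purchase long-term predictability.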
Palantir CEO Alex Karp has described algorithmic warfare systems as equivalent to "tactical nuclear weapons" in their power. If that comparison is even partly apt, it suggests that these tools — once battle-tested on Palestinians and immigrants — carry risks commensurate with weapons of war when exported to authoritarian governments seeking to suppress dissent.
What accountability would actually look like
Researchers like Joy Buolamwini and Timnit Gebru have been warning for years that AI systems reflect the biases of the societies that build them, and that without intervention, they will perpetuate and amplify those biases at scale. Their warnings point toward a set of concrete demands.
Principles for accountable AI surveillance
Full transparency about how surveillance systems are developed, deployed, and audited
Independent bias audits of training data before any system goes live
Human rights assessments as a precondition for AI deployment in law enforcement or immigration
Export controls on AI surveillance tools comparable to controls on conventional weapons
Legal accountability for companies that profit from surveillance infrastructure
Restoration of meaningful human judgment in decisions affecting people's freedom
Recognition that algorithmic prediction of human behavior has hard limits — and that pretending otherwise causes harm
The fundamental question is not whether these technologies work in a narrow technical sense. It is what they are for, who bears their costs, and whether the societies deploying them are willing to reckon honestly with what chaos theory makes plain: that complex human systems cannot be controlled by collecting more data and running more algorithms. They can only be navigated with humility, with accountability, and with the recognition that the people subject to these systems are not data points — they are human beings, with lives that cannot be reduced to a color-coded score.
As long as authorities insist otherwise, the chaos will continue. And it will continue to fall hardest on those least able to withstand it.
SOURCES
1. Gleick, J. (1987). Chaos: Making a New Science. Viking Penguin.
2. Amnesty International. (2023). "Automated Apartheid: How Facial Recognition Fragments, Segregates and Controls Palestinians in the OPT."
3. Stokel-Walker, C. (2023, May 2). "Israeli soldiers reveal how Palestinians are used to 'gamify' facial recognition tech." The Guardian.
4. Breaking the Silence. (2023). Testimonies on Blue Wolf and Red Wolf surveillance systems.
5. Al-Haq. (2025, November). Submission to the UN Special Rapporteur on the Situation of Human Rights in the Palestinian Territories.
6. National Institute of Standards and Technology. (2019). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. NISTIR 8280.
7. Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81 (Conference on Fairness, Accountability and Transparency).
