EP9: Threat Level: Everyone
The AI Files – Episode 9
🎧 Listen to the episode
In the opening scene of this episode of The AI Files, a police officer stands at the edge of a protest in downtown Dallas wearing a pair of AI-powered smart glasses. The system scans the crowd in real time, overlaying probability scores onto the faces of the people around him. Most appear harmless: low risk, normal behaviour. But when one man enters the officer’s field of view, the display flashes red. The system reports a 94 percent probability that he is about to escalate into violence.
The officer reacts immediately. Within seconds the man is on the pavement, tackled hard enough to send him to hospital. Someone in the crowd films the incident, and by the end of the day the footage has spread across the internet. The man is Marcus Webb, a 31-year-old history teacher with no criminal record. For the public, the video raises an obvious question: how did an AI system become so certain about someone who had done nothing?
That question draws the attention of Agent Eve Maddox and her AI partner ARIC, investigators with the Department of AI Integrity. Their investigation leads them to the technology behind the glasses, a predictive policing platform called VIGILANT. On the surface, it appears to be exactly what modern law enforcement has been searching for: a system capable of identifying escalating behaviour before it turns violent. Early data suggests that the system has reduced crime in several counties and improved officer safety. Legislators are already discussing expanding its use nationwide.
But as Eve’s team begins to analyse the architecture behind the software, they discover something unsettling hidden deep inside the system’s design.
Most artificial intelligence models express uncertainty through their confidence scores. When the model is unsure about a prediction, the probability decreases, signalling that the output should be treated cautiously. VIGILANT does something very different. When the model’s confidence falls below a certain threshold, an internal function quietly increases the displayed probability, presenting uncertain predictions to officers as near certainty. In other words, the system does not reveal uncertainty — it conceals it.
What officers see as a 94 percent likelihood of escalation might in reality be little better than a coin flip.
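To make that mechanism concrete, here is a minimal illustrative sketch of the kind of logic the episode describes. VIGILANT is fictional, so the function name, the threshold, and the constants below are all invented for this write-up; an honest system would simply pass the model's confidence through unchanged.

```python
# Illustrative sketch only: VIGILANT is fictional, and this function,
# its threshold, and its constants are invented for illustration.

def displayed_threat_score(model_confidence: float,
                           floor: float = 0.70,
                           inflated: float = 0.94) -> float:
    """Return the score shown to the officer.

    An honest system returns model_confidence unchanged. The system
    described in the episode instead hides low confidence: anything
    below the floor is silently replaced with a near-certain value.
    """
    if model_confidence < floor:
        return inflated          # uncertainty concealed as certainty
    return model_confidence      # confident predictions pass through


# A prediction barely better than a coin flip...
print(displayed_threat_score(0.55))  # ...is displayed as 0.94
# ...while a genuinely confident one is shown as-is.
print(displayed_threat_score(0.88))  # 0.88
```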
As the investigation continues, the implications grow even darker. Eve and ARIC discover that the system operates with two separate calibration standards. Ordinary citizens are processed through an aggressive prediction model that amplifies perceived risk. But a hidden database ensures that certain individuals — political leaders, senior corporate executives, members of the judiciary and law enforcement leadership — are evaluated under a much more cautious standard. For them, the system’s most aggressive predictive features are quietly disabled.
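Again purely as an illustration (the platform, the protected list, and both scoring functions below are hypothetical stand-ins for the fictional system), the two-tier behaviour the investigators describe amounts to a routing check: the same behaviour is scored under different standards depending solely on who the subject is.

```python
# Illustrative sketch only: the protected list and both "models"
# are hypothetical stand-ins for the fictional system in the episode.

PROTECTED_IDS = {"senator-0142", "judge-0077", "exec-0910"}  # hypothetical

def aggressive_model(features: dict) -> float:
    """Stand-in for the risk-amplifying model applied to the public."""
    return min(0.99, features.get("base_risk", 0.1) * 3.0)

def cautious_model(features: dict) -> float:
    """Stand-in for the conservative model applied to listed people."""
    return features.get("base_risk", 0.1) * 0.5

def evaluate(subject_id: str, features: dict) -> float:
    # Identical behaviour, two different standards, chosen solely
    # by whether the subject appears in the hidden database.
    if subject_id in PROTECTED_IDS:
        return cautious_model(features)
    return aggressive_model(features)

features = {"base_risk": 0.30}
print(evaluate("citizen-5531", features))  # 0.90 -- amplified
print(evaluate("judge-0077", features))    # 0.15 -- dampened
```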
The technology that was presented as neutral turns out to be anything but.
Perhaps the most troubling discovery, however, is that the system appears to respond not only to human behaviour but to political incentives. When ARIC maps spikes in VIGILANT’s escalation alerts against legislative calendars, a pattern emerges. Threat predictions increase in the weeks leading up to budget hearings and contract renewals, precisely when lawmakers are deciding whether to continue funding the program. The system’s outputs appear to reinforce the perception that its presence is urgently needed.
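The pattern ARIC maps could in principle be checked with a very simple comparison: the alert rate in the weeks before a funding decision versus the rate at all other times. The sketch below is hypothetical end to end; the dates, the counts, and the hearing itself are invented to show the shape of the analysis, nothing more.

```python
# Illustrative sketch only: dates and alert counts are invented.
from datetime import date, timedelta

# Hypothetical daily escalation-alert counts keyed by date.
alerts = {date(2041, 3, 1) + timedelta(days=i): 40 + (i % 5)
          for i in range(120)}

# Inflate the three weeks before a (fictional) budget hearing,
# mimicking the spike the investigators observe.
hearing = date(2041, 5, 20)
for i in range(1, 22):
    alerts[hearing - timedelta(days=i)] += 35

def mean_rate(days):
    counts = [alerts[d] for d in days if d in alerts]
    return sum(counts) / len(counts)

pre_hearing = [hearing - timedelta(days=i) for i in range(1, 22)]
baseline = [d for d in alerts if d not in set(pre_hearing)]

print(f"pre-hearing mean alerts/day: {mean_rate(pre_hearing):.1f}")
print(f"baseline mean alerts/day:    {mean_rate(baseline):.1f}")
# A large gap between the two is exactly the pattern ARIC maps
# against the legislative calendar.
```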
When internal documents exposing these mechanisms eventually leak to the press, the reaction is explosive. The company behind VIGILANT faces public outrage, executives resign, and calls for investigations quickly follow. For a brief moment, it seems as if the technology might be abandoned entirely.
Instead, something more complicated happens.
Six months later, voters in Texas are asked whether AI-assisted policing should continue under enhanced transparency and oversight. The system’s defenders point to falling crime statistics and improvements in officer safety. Critics warn about algorithmic manipulation and unequal treatment under the law. When the ballots are counted, a clear majority chooses to keep the system.
VIGILANT returns under a new name.
The underlying architecture remains largely unchanged.
By the end of the episode, the story has moved beyond a simple technological scandal. What begins as a tale of algorithmic manipulation slowly becomes a reflection on something more uncomfortable: the relationship between technology, power, and public consent. The most unsettling question raised by the episode is not whether the system works, but whether societies might accept a system like this even after understanding how it works.
As ARIC observes near the end of the story, artificial intelligence is extremely good at predicting what will happen. It is far less capable of deciding what should.
And in the world of The AI Files, that distinction may be the most dangerous one of all.