The Knowledge Barrier
Documented AI Failures in Government
Appendix C — Detailed documentation of government AI failure cases analyzed in Chapter 9, with PEARS framework analysis for each case.
The PEARS Framework
The Tony Blair Institute for Global Change (2024) proposes the PEARS criteria that AI decisions in government should meet.
COMPAS Recidivism Algorithm (2016)
What happened
ProPublica’s 2016 analysis of over 7,000 defendants in Broward County, Florida, revealed racially disparate outcomes from the COMPAS risk assessment tool. Black defendants were nearly twice as likely as white defendants to be incorrectly flagged as high-risk (false positive rate: 44.9% vs. 23.5%). White defendants were more likely to be incorrectly flagged as low-risk (false negative rate: 47.7% vs. 28.0%).
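The disparity ProPublica reported is a property of each group’s confusion matrix. A minimal sketch follows; the per-group counts are illustrative values chosen only to reproduce the reported rates, not ProPublica’s actual data:

```python
def error_rates(tp, fp, tn, fn):
    """False positive rate = FP / (FP + TN): share of non-reoffenders
    flagged high-risk. False negative rate = FN / (FN + TP): share of
    reoffenders flagged low-risk."""
    return fp / (fp + tn), fn / (fn + tp)

# Illustrative counts (not real data), scaled to match the reported rates.
black_fpr, black_fnr = error_rates(tp=720, fp=449, tn=551, fn=280)
white_fpr, white_fnr = error_rates(tp=523, fp=235, tn=765, fn=477)

print(f"Black defendants: FPR {black_fpr:.1%}, FNR {black_fnr:.1%}")  # 44.9%, 28.0%
print(f"White defendants: FPR {white_fpr:.1%}, FNR {white_fnr:.1%}")  # 23.5%, 47.7%
```

When base rates differ between groups, a score cannot be equally calibrated and have equal error rates across groups at the same time, which is why a disparity audit must check false positive and false negative rates directly rather than overall accuracy.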
Institutional failure pattern
Deployed without pre-deployment testing for racial disparities. Methodology protected as a trade secret, preventing independent validation. Jurisdictions adopted the tool based on vendor representations without technical capacity to evaluate those claims.
Detroit Facial Recognition Wrongful Arrest (2020)
What happened
Robert Williams, a Black man, was wrongfully arrested after Detroit police relied on a faulty facial recognition match. Williams was held for 30 hours and interrogated. The technology had documented accuracy disparities across demographic groups—Buolamwini and Gebru (2018) found error rates as high as 34.7 percent for darker-skinned women, compared with under 1 percent for lighter-skinned men.
Institutional failure pattern
Police treated a low-confidence algorithmic match as sufficient probable cause without independent corroboration. No protocol existed for verifying facial recognition results before making arrests.
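The missing protocol can be stated in a few lines. This is a hypothetical sketch of such a rule, not Detroit PD policy; the threshold value and parameter names are assumptions:

```python
def arrest_permitted(match_confidence: float,
                     corroborating_evidence: list[str],
                     threshold: float = 0.99) -> bool:
    """Treat a facial-recognition hit as an investigative lead only:
    an arrest requires independent evidence regardless of match score."""
    if not corroborating_evidence:
        return False  # a match alone is never probable cause
    return match_confidence >= threshold

# A high-scoring match with no independent evidence still fails the check.
print(arrest_permitted(0.995, []))                          # False
print(arrest_permitted(0.995, ["witness identification"]))  # True
```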
LAPD PredPol Predictive Policing (2013–2019)
What happened
PredPol generated geographic predictions of likely crime locations for the LAPD from 2013 to 2019. The system led to over-surveillance of Black and Latino communities, directing patrol resources based on historical arrest rates rather than actual crime distribution. An LAPD internal audit found “insufficient data to determine effectiveness.” Discontinued in 2019.
Institutional failure pattern
Trained on arrest data reflecting police deployment decisions, creating a feedback loop: more arrests generated more predicted crime, which directed more patrol. Six years of use without independent effectiveness evaluation.
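The feedback loop can be made concrete with a toy deterministic model, in the spirit of published runaway-feedback analyses of predictive policing. The two-district setup, counts, and winner-take-all patrol rule are illustrative assumptions, not PredPol's algorithm:

```python
def patrol_feedback(rounds, true_crime=(10, 10), arrests=(6, 4)):
    """Two districts with IDENTICAL underlying crime. Each round, all patrol
    goes to the district with more recorded arrests, and only the patrolled
    district generates new arrest records."""
    arrests = list(arrests)
    for _ in range(rounds):
        target = arrests.index(max(arrests))   # deploy where the data points
        arrests[target] += true_crime[target]  # patrol converts crime to records
    return arrests

# A small initial skew (6 vs 4 recorded arrests) compounds: district 1's
# equal crime never enters the data, so the model keeps "confirming" itself.
print(patrol_feedback(5))  # [56, 4]
```

The point of the toy model is that the data never contradicts the prediction: unpatrolled crime is invisible, so no amount of additional training corrects the initial skew.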
Indiana Welfare Modernization (2007–2009)
What happened
IBM’s $1.3 billion automated welfare eligibility system replaced in-person caseworker interviews with document-based processing. Application denials increased 54 percent in the first year. Approximately one million applications were denied during the contract period, many for minor technical errors. Indiana terminated the contract.
Institutional failure pattern
Designed to increase efficiency without adequate consideration of accuracy or human consequences of errors. Individual caseworker discretion—the safety valve for system errors—was eliminated.
Allegheny County Child Welfare Algorithm (2016–present)
What happened
The Allegheny Family Screening Tool (AFST) generates risk scores from county data to assess families reported to the child abuse hotline. Despite published methodology and independent ethical review—Eubanks calls it “the best-case scenario”—the system systematically disadvantaged poor families and families of color. It measures visibility to government rather than actual risk. The system calculated mandatory-investigation scores for 32% of Black children referred for neglect vs. 21% of white children.
Institutional failure pattern
Demonstrates that the proxy-variable problem is structural, not fixable through better governance. Government data measures government contact, which correlates with poverty and race.
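The proxy problem can be illustrated with a toy score. The real AFST is a statistical model over many variables; the record counts and feature names below are invented purely for illustration:

```python
def visibility_score(county_records: dict[str, int]) -> int:
    """Toy risk proxy: total contacts with county systems. It measures how
    visible a family is to government data, not how risky the family is."""
    return sum(county_records.values())

# Two hypothetical families with identical parenting; only one relies on
# public services, so only one appears in the county's databases.
family_a = {"medicaid_visits": 5, "public_housing": 1, "snap_enrollment": 2}
family_b = {}  # private insurance, private childcare: invisible to the county

print(visibility_score(family_a))  # 8
print(visibility_score(family_b))  # 0
```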
Idaho Medicaid Budget Tool (2011)
What happened
An algorithm used to determine home-care hours for Medicaid recipients with developmental disabilities produced drastic, unexplained cuts—some by more than 40 percent. Affected individuals could not understand why their benefits changed. A federal court ruled the system violated due process rights.
Institutional failure pattern
Implemented for consequential benefit determinations without ensuring outputs could be explained to affected individuals or independently reviewed.
Arkansas Medicaid Algorithm (2016)
What happened
An algorithm assessing home-care needs produced unexplained cuts for disabled residents. One plaintiff saw weekly hours cut from 56 to 32 with no explanation. Legal advocates discovered coding errors producing inconsistent results. A federal court ruled Arkansas violated due process rights.
Institutional failure pattern
Deployed for consequential decisions without ensuring explainability, accuracy, or meaningful appeal processes. Coding errors found only through litigation.
NYC MyCity Chatbot (2023–2024)
What happened
New York City’s generative AI chatbot, launched in October 2023, provided illegal advice on multiple occasions. When asked “Can I take a cut of my worker’s tips?” it answered “yes”—violating New York labor law. Other errors included incorrect information about housing regulations and business licensing.
Institutional failure pattern
Deployed a generative AI system for public-facing use without adequate verification mechanisms. Unlike rule-based chatbots, generative systems produce novel responses that may be factually wrong despite sounding authoritative.
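The contrast with a rule-based design can be shown in a few lines. This is a hypothetical sketch, not MyCity's architecture; the question text and vetted answer are assumptions written for illustration:

```python
VETTED_ANSWERS = {
    # Every entry is reviewed by a subject-matter expert before deployment.
    "can an employer take a share of workers' tips?":
        "No. New York labor law prohibits employers from taking workers' tips.",
}

def rule_based_bot(question: str) -> str:
    """Returns only pre-approved text; when no vetted answer exists, it says
    so instead of composing one. A generative model has no such guarantee:
    it produces novel text that can be confidently wrong."""
    return VETTED_ANSWERS.get(
        question.strip().lower(),
        "I don't have a vetted answer for that question.",
    )

print(rule_based_bot("Can an employer take a share of workers' tips?"))
```

The trade-off is coverage: the rule-based bot answers fewer questions, but every answer it gives has passed human review—the verification step the deployment lacked.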
Common Failure Patterns
Across all documented cases, five institutional failure patterns recur.
| Pattern | Cases |
|---|---|
| Inadequate pre-deployment testing | COMPAS, Detroit facial recognition, NYC MyCity |
| Insufficient ongoing monitoring | LAPD PredPol (6 years), Idaho/Arkansas Medicaid |
| Opacity about system operations | Idaho Medicaid, Arkansas Medicaid, COMPAS (trade secret), Allegheny AFST |
| Diffuse accountability | All cases |
| Disparate impact on vulnerable populations | COMPAS (Black defendants), Detroit (Black residents), PredPol (Black and Latino communities), Allegheny (poor families, families of color), Indiana (eligible applicants in poverty), Idaho/Arkansas (disabled Medicaid recipients) |
Table C.1: Recurring Institutional Failure Patterns in Government AI Deployments
The consistency of these patterns across different jurisdictions, domains, and time periods suggests they are structural features of how governments deploy algorithmic systems.
© 2026 Alton Henley. The Knowledge Barrier.