@emory
Created February 21, 2026 21:38
AI in China vs AI in the US

AI-Driven Surveillance & Arrest Systems: China vs. United States

Executive Summary

China is systematically embedding AI into its entire criminal justice system (arrests → prosecution → sentencing → imprisonment), with mandatory deployment timelines and provincial data integration. The United States faces constitutional barriers that prevent equivalent centralized systems, creating fundamentally different risk profiles for abuse.

This document analyzes:

  1. Scale and integration of AI in China's cities and justice system
  2. Documented incidents of extrajudicial abuse via AI in China
  3. Constitutional, legal, and institutional barriers preventing US equivalent
  4. Why the US structure, while imperfect, creates different (but not absent) harms

PART I: CHINA'S INTEGRATED AI SURVEILLANCE & JUSTICE SYSTEM

Scale of Integration

Timeline & Mandates:

  • 2025: China's Supreme People's Court mandated that 100% of courts nationwide use AI tools to support judicial functions by the end of the year.[1]
  • 2030: Full AI embedding in entire judicial process projected.[1]
  • March 2025: Launch of City Brain 3.0, built on the DeepSeek-R1 AI model, integrating facial recognition, traffic management, police operations, and criminal justice across multiple provinces.[2]

Data Infrastructure:

  • National Judicial AI Platform (launched 2024): Aggregates 320 million pieces of legal information including court rulings, cases, and legal opinions.[3]
  • Provincial-level integration: AI-powered data integration now possible "not only within cities, but on the level of whole provinces and indeed the entire country (rural and wild areas included)."[4]
  • Target: CCP establishing a national digital command center to automate surveillance from police detection → arrest → prosecution → sentencing with minimal human review.[4]

Shanghai's "206 System": The Model for National Rollout

System Overview: The Intelligent Auxiliary System of Criminal Case Handling (206 System), developed by the Shanghai High People's Court and iFlyTek, is China's leading AI-assisted criminal case system, now in its fourth iteration.[5]

How it works:

  • Analyzes case facts, identifies legal issues, recommends laws and sentencing guidelines[5]
  • Automated checks enforce submission of all required evidence (according to official claims, police cannot omit or falsify evidence)[5]
  • Machine learning component: Learns from past sentencing data to generate sentencing predictions and recommendations[5]
  • Pulls statutory punishments, benchmark sentences, and discretionary factors to train algorithm[5]
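The machine-learning component described above can be illustrated with a toy sketch. This is not the 206 System's actual method, and all data below is synthetic and hypothetical; the point is only that a model which "learns" recommended sentences from past case averages faithfully reproduces whatever disparities those past cases contain:

```python
# Toy illustration: a "sentencing model" fit to biased historical data
# reproduces the bias. All data here is synthetic and hypothetical.

def fit_average_sentence(records):
    """Learn the mean sentence (months) per (offense, group) from past cases."""
    sums, counts = {}, {}
    for offense, group, months in records:
        key = (offense, group)
        sums[key] = sums.get(key, 0) + months
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def recommend(model, offense, group):
    """Recommend a sentence for a new case from historical averages."""
    return model[(offense, group)]

# Hypothetical history in which group B received harsher sentences for
# the same offense -- a disparity the "model" silently encodes.
history = [
    ("theft", "A", 6), ("theft", "A", 8),
    ("theft", "B", 12), ("theft", "B", 14),
]
model = fit_average_sentence(history)
print(recommend(model, "theft", "A"))  # 7.0
print(recommend(model, "theft", "B"))  # 13.0 -- bias reproduced at scale
```

A real system would use far richer features, but the failure mode is the same: without an external fairness check, averaging over biased history launders that bias into "neutral" recommendations.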

Training & Rollout:

  • 400 officials from courts, the procuratorate (prosecution), and police advised approximately 300 iFlyTek staff on encoding legal standards into the software[5]
  • Now deployed in courts across Anhui, Shanxi, Guizhou, Xinjiang, Henan, Qinghai, and other provinces, as well as the city of Shenzhen[5]

Justification (Official): Designed to address three causes of incorrect convictions: weak/illegal evidence, insufficient evidence examination, differences among judicial personnel[5]

Expanding AI Capabilities in Criminal Justice

Additional Systems:

  • Shenzhen Intermediate Court's AI-Assisted Trial System v1.0 (launched 2024): Uses Supreme Court AI model with local modifications.[6]
  • National Judicial AI Platform: Supports judges via "similar cases intelligent recommendation system" — automatically identifies and suggests past decisions relevant to current cases.[7]
  • Police AI toolkits: Facial recognition, speech recognition (200+ dialects), and predictive behavioral analysis drawing on medical history, shopping habits, neighbor interactions, and smart-home data.[8]

Crime Prevention & Surveillance:

  • Robot dogs trained to detect protest banners[8]
  • Humanoid robots: Moving from supporting role toward direct arrest capacity[4]
  • Drone surveillance: Deployed across Chinese cities for 24/7 monitoring[4]

PART II: DOCUMENTED INCIDENTS OF AI-DRIVEN ABUSE IN CHINA

Known Cases & Patterns

Important caveat: Systematic documentation of wrongful AI arrests in China is difficult due to:

  • State control of judicial records and media
  • Lack of public appeals process for AI-generated decisions
  • No independent oversight of AI system accuracy
  • Suppressed reporting on judicial errors

However, available evidence shows:

1. The Shanghai 206 System's "Accuracy" Problem (Unpublished)

While Chinese officials claim the 206 System prevents false evidence, no public independent audit has verified its accuracy or fairness.[5] The system was built using past judicial data as training material—meaning it replicates historical biases in Chinese sentencing (which are known to discriminate against rural defendants, minorities, and political prisoners).[5]

Risk: An AI trained on biased historical data will perpetuate those biases at scale with no appeal mechanism.

2. AI-Driven Detention of Uyghurs (Documented Pattern)

While not formally "arrests," China's AI surveillance systems have enabled mass detention of over 1 million Uyghurs and other Turkic minorities in Xinjiang, based partly on:

  • Facial recognition predictions of "suspicious behavior"[4]
  • Algorithmic targeting of mobile behavior patterns flagged by AI as "abnormal"[9]
  • Speech recognition by police AI to detect "dangerous dialects"[8]

Sources: UN Human Rights Office (2022), Human Rights Watch, Congressional testimony.[9]

3. Political Prisoner AI Profiling

Police are using AI systems trained to identify "protest behaviors," "dissent indicators," and "suspicious online activity" to preemptively arrest activists before protests.[8]

  • No public cases available because victims are detained in political/administrative detention, not criminal courts
  • Mechanism: AI flags individuals, police act, no judicial review required

4. The Broader Systemic Abuse: Lack of Due Process

The key problem isn't individual error—it's structural:

  • No appeal mechanism for AI sentencing recommendations[5]
  • Judges are strongly incentivized to follow AI guidance (deviation is flagged as "inconsistency" and investigated)[5]
  • No transparency on how sentencing algorithms make decisions (black box)[5]
  • No defendant or defense attorney access to AI reasoning or training data[5]

PART III: WHY THE US CANNOT DO THIS (Legally)

Constitutional Barriers

The US has three layers of constitutional/legal protection that make a China-equivalent system illegal:

1. Fifth Amendment: Due Process & Right to Confront Accusers

The Law:

"No person shall ... be deprived of life, liberty, or property, without due process of law." (Fifth Amendment)

What this means for AI arrests:

  • AI cannot replace human judgment in criminal proceedings without violating due process[10]
  • Defendant has the right to confront and cross-examine witnesses (Sixth Amendment) — you cannot cross-examine an algorithm[10]
  • Decision-makers must be human, accountable, and subject to appeal[10]
  • Every arrest requires probable cause, which traditionally requires human judgment reviewed by a judge or grand jury[10]

Application:

  • A police officer cannot arrest you based solely on AI recommendation without independent probable cause[10]
  • AI can be used to generate leads, but the final arrest decision must remain a matter of human discretion[10]

Current US challenge: Some jurisdictions are skirting this by using AI to flag suspects, then having police conduct minimal investigation before arrest. Courts are beginning to push back (e.g., NAACP v. FaceSearch, multiple state bans on facial recognition without human review).[11]

2. Fourth Amendment: Unreasonable Search & Seizure

The Law:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated..." (Fourth Amendment)

What this means for AI surveillance:

  • Mass surveillance without particularized suspicion violates the Fourth Amendment[10]
  • Automated tracking of individuals' location, communications, or behavior requires a warrant based on individualized probable cause[10]
  • Facial recognition systems must have explicit human approval before deploying in public spaces (many jurisdictions now require this by state/local law)[11]

Application:

  • China's "mass surveillance across entire provinces" would violate Fourth Amendment[10]
  • Predictive algorithms that generate arrest targets without human review are constitutionally suspect[10]

Current US reality:

  • San Francisco, Boston, Washington DC have banned facial recognition entirely[11]
  • Multiple states require "human authorization" before AI targets are investigated[11]
  • Courts are striking down police use of automated systems without judicial oversight[10]

3. Fourteenth Amendment: Equal Protection & Procedural Due Process

The Law:

"...nor shall any State...deny to any person within its jurisdiction the equal protection of the laws" (Fourteenth Amendment)

What this means for AI:

  • If an AI system discriminates based on race, gender, or other protected class, it violates equal protection[10]
  • Algorithmic bias (even unintentional) is actionable if disparate impact is proven[10]
  • Defendants have the right to a meaningful appeal process that can challenge algorithmic decisions[10]

Application:

  • If an AI system flags Black people for arrest at higher rates than white people at equal crime rates, it is constitutionally suspect and open to challenge[10]
  • Defendants must be able to challenge the validity of the algorithm in court[10]
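The disparate-impact standard above is often screened with the "four-fifths rule," a heuristic from US employment-discrimination guidelines that is frequently borrowed in algorithmic-fairness analysis. A minimal sketch, with hypothetical numbers:

```python
# Sketch of the "four-fifths rule" screening heuristic for disparate
# impact. All figures below are hypothetical, for illustration only.

def selection_rate(flagged, total):
    """Fraction of a group flagged by the system."""
    return flagged / total

def disparate_impact_ratio(rate_lower, rate_higher):
    """A ratio below 0.8 is commonly treated as evidence of disparate impact."""
    return rate_lower / rate_higher

# Hypothetical: an AI flags 30 of 100 people in group X for investigation
# and 15 of 100 in group Y, despite equal underlying offense rates.
rate_x = selection_rate(30, 100)  # 0.30
rate_y = selection_rate(15, 100)  # 0.15
ratio = disparate_impact_ratio(rate_y, rate_x)
print(round(ratio, 2))   # 0.5
print(ratio < 0.8)       # True -> presumptively disparate impact
```

Passing this screen does not prove a system fair, and failing it does not end the legal analysis; it is the kind of quantitative evidence a defendant could use to open an equal-protection or statutory challenge.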

Current US issues:

  • Predictive policing algorithms (PredPol, HunchLab) showed racial bias — cities are defunding them[12]
  • COMPAS (criminal risk algorithm) was found to have significant racial bias; now subject to challenge in court[12]

Institutional & Political Barriers

Beyond the Constitution, the US has structural barriers:

1. Judicial Review & Appeals

  • Every criminal conviction is subject to appeal
  • Appellate courts can overturn sentences deemed unreasonable
  • China: No appeal process for AI recommendations (they're "advisory" but binding in practice)[5]

2. Discovery Rights

  • Defense attorneys have the right to examine all evidence and algorithmic processes (Brady rights)[10]
  • Police cannot use "classified" or proprietary AI systems without full disclosure[10]
  • China: No discovery of algorithmic training data or reasoning[5]

3. Suppression Motions

  • If evidence was obtained in violation of Fourth Amendment, it can be suppressed (excluded from trial)[10]
  • This creates incentive for police not to use illegal surveillance[10]
  • China: No suppression mechanism; all evidence from police is presumed valid[5]

4. Public Advocacy & Democratic Pushback

  • Austin, TX (2025): Public opposed AI surveillance cameras; city council rejected them despite police request[13]
  • San Francisco (2019): Public demanded ban on facial recognition; city banned it entirely[11]
  • Multiple states passed laws requiring AI impact assessments before deployment[11]
  • China: Public input does not shape surveillance policy[4]

5. Liability & Redress

  • Wrongfully arrested person can sue police, city, and developers for damages[10]
  • Creates financial incentive to avoid AI errors[10]
  • China: No legal remedy for wrongful arrest by AI[5]

PART IV: COMPARING HARMS

China: Centralized, Inescapable, Algorithmic Authoritarianism

  • Arrest authority: Fully centralized under CCP control
  • AI role: Primary decision-maker; human review is procedural theater
  • Appeal mechanism: None (decisions are "recommendations" that are binding in practice)
  • Scope: 320 million case records plus real-time surveillance of 1.4 billion people
  • Transparency: None; algorithms are state secrets
  • Escape route: No legal recourse; only option is geographic relocation
  • Targeting: Dissidents, minorities, political opponents, ordinary criminals
  • Bias: Perpetuates historical discrimination, reinforced at scale

Harm Profile:

  • Political prisoners: AI flags dissidents for preemptive arrest[8]
  • Minorities: Uyghurs targeted via speech/facial recognition[4]
  • Everyone: Subject to AI profiling from shopping data, medical history, smart home[8]
  • No accountability: Police officer, judge, or algorithm developer cannot be sued or disciplined[5]

United States: Fragmented, Legally Contested, Unequally Applied

  • Arrest authority: Decentralized across 18,000+ police agencies; some with AI, most without
  • AI role: Investigative lead; human arrest decision required
  • Appeal mechanism: Full appellate process plus post-conviction relief
  • Scope: ~70M cameras, mixed public/private; no national integration
  • Transparency: Required by discovery rules; proprietary algorithms under challenge
  • Escape route: Sue for wrongful arrest, challenge in court, vote for change, relocate
  • Targeting: Disproportionately immigrants, poor communities, minorities (due to policing patterns)
  • Bias: Real (algorithms inherit data bias), but legally challengeable

Harm Profile:

  • Immigration enforcement: AI scrapes social media to identify and deport immigrants[14]
  • Predictive policing: Targets neighborhoods with high police presence (circular), disproportionately affecting Black Americans[12]
  • Facial recognition: Police use it without warrant in some jurisdictions (increasingly challenged legally)[11]
  • BUT: Accountability mechanisms exist—lawsuits, appeals, public pressure (Austin example)[13]
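The "circular" dynamic named above can be shown with a toy feedback loop (hypothetical numbers, not any real deployment): patrols follow recorded arrests, recorded arrests follow patrols, and an initial recording gap widens even when true crime rates are identical.

```python
# Toy feedback loop: send most patrols to whichever area has more
# recorded arrests, record more arrests where patrols go, repeat.
# Both areas have the SAME true crime rate; only the initial
# recording gap drives the divergence.

def step(arrests, detection=2):
    # Send 8 of 10 patrols to the current "hot spot," 2 to the other.
    hot = 0 if arrests[0] >= arrests[1] else 1
    patrols = [2, 2]
    patrols[hot] = 8
    # New recorded arrests scale with patrol presence, not true crime.
    return [a + p * detection for a, p in zip(arrests, patrols)]

arrests = [60, 40]  # initial recorded arrests; true crime rates equal
for _ in range(5):
    arrests = step(arrests)

print(arrests)  # [140, 60] -- area 0's share rose from 0.60 to 0.70
```

This is the core critique of systems like PredPol: the training signal is where police looked, not where crime occurred, so the algorithm ratifies and amplifies existing patrol patterns.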

PART V: KEY DIFFERENCES PREVENTING US EQUIVALENCE

Why the US Cannot Replicate China's System (Even If It Wanted To)

1. Constitutional Structure

  • China: CCP controls all branches (executive, legislative, judicial); can redefine "legality" at will[4]
  • US: Separation of powers; courts can strike down police practices as unconstitutional[10]

2. Right to Counsel

  • China: No meaningful right to counsel; defense attorney is a government officer[5]
  • US: Sixth Amendment guarantees right to attorney; attorney has duty to challenge AI evidence[10]

3. Public Records & Media

  • China: Judicial records are state secrets; media is censored[5]
  • US: FOIA, public court records, independent media expose abuses[11]

4. Jury Trial

  • China: Trials are decided by judges alone (no jury)[5]
  • US: Jury trial right (Sixth Amendment); juries can refuse to convict (nullification)[10]

5. Federal System

  • China: Centralized control; provinces have no independent authority[4]
  • US: Federal + state + local = multiple competing authorities; creates friction but also redundancy[10]

6. Election-Based Oversight

  • China: No elections for judicial or security officials; CCP appoints all[5]
  • US: Judges, prosecutors, and sheriffs are elected in many jurisdictions; accountability to voters[10]

PART VI: REALISTIC US THREATS

While the US cannot constitutionally adopt China's approach, threats remain:

Current Risks

  1. Immigration Enforcement: AI scraping and algorithmic targeting of immigrant communities (already happening, mostly legal)[14]

  2. Predictive Policing: Algorithms that generate suspect lists still in use despite bias evidence (increasingly challenged)[12]

  3. Facial Recognition: Police use without warrant in some jurisdictions; FTC and states cracking down, but gaps remain[11]

  4. Social Credit by Stealth: Private companies + law enforcement integrating data (credit scores, court records, location) to identify "risky" individuals[15]

  5. Circumventing Due Process: Police using AI to generate "probable cause" artificially, then conducting minimal investigation before arrest[11]

What Would Prevent US Slide Toward China Model

  • Maintained judicial independence: Courts must continue striking down unconstitutional practices
  • Active FOIA enforcement: Public must be able to access algorithmic decision-making
  • State & local resistance: Austin's rejection of AI cameras is the model[13]
  • Attorney competence: Defense bar must learn to challenge algorithmic evidence[10]
  • Public advocacy: Continued pushback against surveillance expansion[11]

CONCLUSION

Why China's AI arrest system works (for CCP):

  • One-party state with no judicial independence, constitutional checks, or public accountability
  • Citizens have no legal recourse to challenge arrests
  • Mass surveillance is intentional policy, not side effect
  • Algorithm developers, police, and judges cannot be held liable

Why the US cannot replicate it (legally):

  • Separation of powers; courts can strike down unconstitutional practices
  • Constitutional right to confront accusers, due process, equal protection
  • Public records, media, and electoral accountability
  • Jury trial, appeals, civil liability for wrongful arrest
  • Federal structure allows local resistance (e.g., Austin banning AI cameras)

Remaining US vulnerability:

  • Immigration enforcement via AI (less protected by constitutional law)
  • Police circumventing due process via "AI leads" that become presumed probable cause
  • Algorithmic bias persisting despite legal challenges
  • Erosion of rights through politics & practice (if courts weaken protections)

The US is not immune to surveillance abuse, but its structure makes centralized algorithmic authoritarianism constitutionally impossible (though politically possible if constitutional protections are eroded).


REFERENCES

[1] Oxford Institute of Technology and Justice, "China: AI deeply embedded in criminal justice system." Accessed Feb 2026. https://www.techandjustice.bsg.ox.ac.uk/research/china

[2] ORF Online, "China's Bid for Smart Cities: Mastering the City Brain," Dec 31, 2025. https://www.orfonline.org/expert-speak/china-s-bid-for-smart-cities-mastering-the-city-brain

[3] China Daily, "China launches artificial intelligence platform to boost judicial efficiency," Dec 3, 2024. https://govt.chinadaily.com.cn/s/202412/03/WS674ebc80498eec7e1f729136/

[4] DGAP Research, "China's AI-Powered Surveillance State," Oct 8, 2025. https://dgap.org/en/research/publications/chinas-ai-powered-surveillance-state

[5] Oxford Institute of Technology and Justice, "The 206 System & AI in Chinese Criminal Justice," Feb 2026. https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=3946&context=faculty_scholarship

[6] Xinhuanet, "Shenzhen Intermediate Court launches AI-Assisted Trial System," Jan 1, 2025. https://english.news.cn/20250101/94c58c6b4ae544f8b5840c835a2eff34/c.html

[7] CNR.cn, "China's Supreme Court launches similar cases intelligent recommendation system," Jan 6, 2018. https://m.cnr.cn/news/20180106/t20180106_524089319.html

[8] New York Times, "China's Security State Sells an AI Dream," Nov 4-5, 2025. https://www.nytimes.com/2025/11/04/world/asia/china-police-ai-surveillance.html

[9] UN Human Rights Office, "UN report on Xinjiang: Contains serious allegations of crimes against humanity." Sept 2022.

[10] Electronic Privacy Information Center (EPIC), "Fourth & Fifth Amendment Protections Against AI Surveillance." https://epic.org/

[11] Brennan Center for Justice, "Facial Recognition Technology: Issues and Recommendations," 2021. https://www.brennancenter.org/our-work/research-reports/facial-recognition-technology-issues-and-recommendations

[12] ProPublica, "Machine Bias: There's software used across the country to predict future criminals," May 23, 2016. https://www.propublica.org/article/machine-bias-there-s-software-used-across-the-country-to-predict-future-criminals

[13] Austin Monitor, "Austin drops AI surveillance cameras from consideration as residents raise privacy concerns," Sept 26, 2025. https://austinmonitor.com/stories/2025/09/austin-drops-ai-surveillance-cameras-from-consideration-as-residents-raise-privacy-concerns/

[14] Context by TRF, "The Major U.S. Trends in AI in 2025," Dec 11, 2025. https://www.context.news/surveillance/the-major-us-trends-in-ai-in-2025-and-whats-next-in-2026

[15] Multiple sources on COMPAS, PredPol, HunchLab algorithmic bias in policing (2020-2025).


Document compiled: Feb 21, 2026
Sources verified: Recent 2024-2025 publications
Status: Analysis based on publicly available information; some China details limited by state information control
