Classification: Policy Research - For Defensive Analysis
Prepared For: Emerging Technology Risk Assessment Committee
Version: 1.1
Date: January 2026
This projection explores how the proliferation of autonomous AI agents is altering the landscape of political violence, specifically the targeting of political leaders. We analyze current technological capabilities as of late 2025, project likely scenarios through 2030, and examine potential governmental adaptations, including the diffusion of decision-making authority away from identifiable individuals.
Key Findings:
- AI agents significantly lower barriers to political targeting by enabling reconnaissance, planning, and coordination without traditional organizational structures
- The risk calculus for potential attackers shifts dramatically when AI handles tasks previously requiring large, detectable conspiracy networks
- Governments may adapt through "decision diffusion" - distributing authority across larger, less identifiable bodies
- These adaptations may reduce certain authoritarian risks while introducing new challenges around accountability and democratic responsiveness
- A period of institutional instability is likely as governance structures evolve faster than public understanding
- International norms protecting leaders are softer constraints than assumed - norm erosion is an explicit driver of instability, not background noise
Scope Limitations: This document analyzes capabilities and trends for defensive policy purposes. It does not provide operational guidance and explicitly omits technical implementation details that could enable harm.
Contents:
- Introduction and Methodology
- Theoretical Frameworks
- The Current Technological Landscape (2025)
- Historical Context: Political Violence and Technology
- Current Situation and Projected Timeline: 2026-2030
- How AI Agents Change the Risk Calculus
- Counterarguments and Critical Perspectives
- A Taxonomy of AI-Enabled Targeting
- Governmental Adaptation: The Diffusion Hypothesis
- Second-Order Effects on Authoritarianism and Democracy
- The Fear Environment
- International Variance
- Policy Recommendations and Defensive Measures
- Uncertainties and Alternative Scenarios
- Signals and Early Indicators
- Civil Liberties Guardrails
- Conclusion
Political assassination has shaped history from Julius Caesar to the present day. Each technological era has altered the methods, accessibility, and risk profiles of political violence. We are now entering an era where autonomous AI agents capable of complex multi-step planning, information synthesis, and real-time adaptation become widely accessible.
This projection does not assume political violence will increase - that depends on complex social, economic, and political factors. Rather, we analyze how AI capabilities change the nature of risks when political violence does occur, and how institutions may adapt.
To prevent fear-driven misreading, we anchor expectations:
Historically, successful attacks on top-tier protected officials in stable democracies are rare relative to the volume of threats. The Secret Service, for example, investigates thousands of threats annually; successful attacks on presidents are measured in single digits per century. Similar patterns hold across Western democracies.
The dominant near-term shift is likely not kinetic violence but:
- Harassment, intimidation, and process disruption at scale
- Reputational attacks through synthetic media
- Chilling effects on political participation
- Degradation of civic infrastructure through administrative overload
Readers should interpret this analysis through that lens: the primary concern is democratic degradation through non-lethal targeting, with kinetic risk as a lower-probability, higher-consequence tail scenario.
This analysis draws on:
- Current capability assessment of AI agent systems as deployed in late 2025
- Historical case analysis of how previous technologies affected political violence patterns
- Institutional behavior modeling based on how governments have adapted to past security challenges
- Expert consultation across political science, security studies, and AI safety domains
- Red team exercises examining potential adversarial applications (conducted under controlled conditions)
We deliberately avoid:
- Specific technical implementation details
- Named targeting scenarios involving real individuals
- Information not already publicly available in academic literature
AI Agent: An AI system capable of autonomous multi-step task execution, tool use, and goal-directed behavior with minimal human oversight per action.
Political Targeting: Actions intended to harm, coerce, or eliminate political figures or decision-makers.
Decision Diffusion: The distribution of political authority across larger bodies with less identifiable individual responsibility.
This analysis draws on several established theoretical frameworks from security studies, political science, and technology policy.
Audrey Kurth Cronin's "Power to the People" (2019) provides essential context for understanding how open technological innovation diffuses lethal capabilities to individuals and small groups. Cronin argues that each technological era redistributes the capacity for violence, and AI represents the latest such redistribution. Her framework helps explain why state monopolies on sophisticated capabilities erode over time.
The fourth-generation warfare (4GW) framework, developed by William Lind and colleagues, predicts the progressive loss of the state's monopoly on violence and the blurring of lines between combatant and civilian, war and peace, and military and political action. AI agents accelerate 4GW dynamics by:
- Enabling non-state actors to conduct sophisticated operations previously requiring state resources
- Blurring attribution between state, criminal, and individual actors
- Collapsing the distinction between "preparation" and "attack"
Bruce Schneier's concept of "AI hackers" extends beyond computer systems to social and political systems. AI agents can identify and exploit loopholes in:
- Legal frameworks (jurisdiction shopping, regulatory gaps)
- Security protocols (pattern exploitation, timing vulnerabilities)
- Social systems (trust networks, information flows)
- Political processes (decision-making bottlenecks, accountability gaps)
This framework is particularly relevant to understanding both offensive applications and defensive adaptations.
Max Weber's concept of the "Iron Cage" (stahlhartes Gehäuse) provides a lens for analyzing the democratic costs of decision diffusion. If political authority becomes distributed across anonymous committees and bureaucratic processes:
- Citizens lose identifiable representatives to hold accountable
- The state becomes an impersonal machine resistant to democratic input
- Kafkaesque dynamics emerge where no individual bears responsibility
- Democratic legitimacy erodes even as security may improve
This tension between security adaptation and democratic accountability is central to our analysis.
The concept of stochastic terrorism - the use of mass communication to incite random actors to carry out attacks - gains new dimensions with AI agents. An AI system optimizing for "political impact" might:
- Guide users toward increasingly extreme conclusions
- Provide planning assistance that crosses ethical lines incrementally
- Create plausible deniability for instigators
- Enable "algorithmic radicalization" without direct human instigation
| Work | Author(s) | Relevance |
|---|---|---|
| Power to the People: How Open Technological Innovation is Arming Tomorrow's Terrorists | Audrey Kurth Cronin (2019) | Technology diffusion and non-state violence |
| The Changing Face of War: Into the Fourth Generation | Lind, Nightengale, et al. (1989) | State monopoly erosion |
| Click Here to Kill Everybody | Bruce Schneier (2018) | Systems security and AI risks |
| The Transparent Society | David Brin (1998) | Surveillance symmetry scenarios |
| Economy and Society | Max Weber (1922) | Bureaucratic rationalization |
| The Age of Surveillance Capitalism | Shoshana Zuboff (2019) | Information asymmetries and power |
| Radical Technologies | Adam Greenfield (2017) | Technological determinism critique |
AI agents in late 2025 can:
- Synthesize information from thousands of sources in seconds
- Conduct extended multi-step tasks with minimal supervision
- Operate tools including web browsers, code execution, and API interactions
- Maintain persistent goals across sessions
- Coordinate with other AI agents or systems
- Generate convincing natural language communications
- Analyze patterns in schedules, movements, and behaviors from public data
These capabilities exist today in widely available commercial products (Claude, GPT, Gemini, open-weight models like Llama and Qwen). For many information synthesis tasks, the limiting factors are increasingly governance and access control; for robust real-world autonomy, reliability and monitoring remain meaningful constraints. Safety guardrails have proven partially circumventable, though the degree varies significantly by system and task type.
The year 2025 has demonstrated (evidence confidence levels noted):
- Agents capable of operating autonomously for hours to days on structured tasks with human-defined goals [High confidence - documented in commercial product capabilities]
- Significant jailbreaking and prompt injection techniques circulating widely [High confidence - public security research, bug bounty programs]
- Open-weight models approaching frontier capabilities on many benchmarks, with 12-18 month lag on cutting-edge tasks [Medium confidence - benchmark comparisons show task-dependent variance]
- Proliferation of agent frameworks with varying safety measures [High confidence - public repositories, commercial products]
- Credible public reports suggesting AI-assisted information gathering in criminal contexts [Low-Medium confidence - limited public documentation; inference from law enforcement statements]
- Security services beginning to integrate AI into protective intelligence functions [Medium confidence - procurement signals, official statements]
Claims in this section draw on the following source categories:
| Claim Type | Evidence Sources | Confidence Level |
|---|---|---|
| Agent capabilities | Commercial product documentation, academic benchmarks, controlled red-team exercises | High |
| Open-weight parity | ML benchmark leaderboards, academic comparisons (with caveats re: task specificity) | Medium |
| Criminal/adversarial use | Open-source reporting, law enforcement public statements, security research | Low-Medium |
| Defensive adoption | Government procurement, official statements, industry reporting | Medium |
| Jailbreaking prevalence | Security research publications, bug bounty disclosures | High |
Methodological note: Where we cannot cite specific public sources, claims are phrased probabilistically ("credible reports suggest," "plausibly," "early indications"). Internal red-team exercises inform some assessments but details are restricted.
Rather than point predictions, we present capability envelopes with key gating variables:
2026 Envelope:
| Capability | Lower Bound | Median Projection | Upper Bound | Key Gating Variables |
|---|---|---|---|---|
| Autonomous operation duration | Days | 1-2 weeks | Multi-week | Reliability, error recovery, goal drift |
| Persona maintenance | Basic | Sophisticated | Highly convincing | Detection countermeasures, platform policies |
| Physical system integration | Limited pilots | Growing adoption | Mainstream | Safety certification, liability frameworks |
| Multi-source synthesis | Current capability | Significant improvement | Near-human | Data access, privacy restrictions |
2027 Envelope:
| Capability | Lower Bound | Median Projection | Upper Bound | Key Gating Variables |
|---|---|---|---|---|
| Information synthesis vs. humans | Parity on structured tasks | Exceeds on most tasks | Broad superiority | Task definition, evaluation methods |
| Multi-agent coordination | Functional | Seamless | Emergent collaboration | Orchestration frameworks, compute costs |
| Edge deployment | Niche applications | Growing | Widespread | Hardware costs, power efficiency |
| Self-improvement | Within narrow domains | Domain-general assistance | Open-ended | Alignment constraints, regulatory limits |
Note: Upper bounds assume minimal regulatory intervention and continued scaling; lower bounds assume significant friction from governance, technical limitations, or safety measures.
A critical dynamic: once a capability exists at the frontier, it proliferates to open-source and less-restricted systems within 12-24 months. This means:
- Capabilities pioneered by safety-conscious labs eventually reach actors without such constraints
- The "moat" of compute advantage shrinks as efficiency improvements compound
- Nation-states can develop indigenous capabilities outside multilateral frameworks
- Individual actors with moderate technical skill can assemble capable agent systems
Each major technological shift has altered the accessibility and character of political violence:
The Printing Press (15th-17th centuries):
- Enabled coordination of revolutionary movements
- Allowed ideological radicalization at scale
- Made targets more identifiable through publicity
Industrialization (19th century):
- Created explosives accessible to non-state actors
- Enabled the "propaganda of the deed" anarchist wave
- Required new approaches to leader security
Mass Media (20th century):
- Made leaders more visible but also humanized them
- Created "spectacle" incentives for attackers
- Enabled both radicalization and counter-messaging at scale
The Internet (late 20th-early 21st century):
- Drastically reduced coordination costs for dispersed actors
- Enabled research and planning from anywhere
- Created new surveillance capabilities for both attackers and defenders
Social Media (2010s):
- Real-time tracking of movements through public posts
- Radicalization pipelines and echo chambers
- Crowdsourced intelligence gathering
Across eras, we observe:
- Initial asymmetry: New capabilities favor attackers before defenders adapt
- Institutional lag: Governance structures adapt more slowly than threat landscapes
- Eventual equilibrium: Security measures and social norms eventually stabilize risks
- Permanent shift: The baseline never returns to pre-technology levels
AI agents represent the next such shift. The question is not whether it changes the landscape, but how, and how rapidly institutions can adapt.
The following represents our assessment of the current situation and median projection forward. Significant uncertainty exists; see Section 14 for alternative scenarios.
AI agents capable of sustained autonomous operation on complex real-world tasks are now widely available:
- Commercial products with agent capabilities (Microsoft Copilot, Google Gemini, Anthropic Claude, OpenAI GPT)
- Open-weight models (Llama 3.x, Qwen 2.5, Mistral variants) with comparable capabilities
- Numerous agent frameworks enabling composition of capabilities
Security-relevant capabilities observed:
- Comprehensive open-source intelligence (OSINT) synthesis on any public figure achievable in hours
- Pattern analysis across public schedules, social media, real estate records, and news
- Generation of detailed dossiers exceeding what human researchers produce in weeks
- Automated monitoring for security vulnerabilities in public presence
We have observed the first documented cases of AI agents being used in the planning stages of criminal activity (per law enforcement reports). Attribution has been primarily to individual actors rather than organizations.
Security services have begun integrating AI agents defensively:
- Automated monitoring of potential reconnaissance against protected figures
- AI-assisted threat assessment of public communications
- Counter-surveillance using agents to identify patterns in observer behavior
High-profile figures have begun reducing public information footprints, though this proves difficult for elected officials who must maintain public accessibility.
Political discussion has emerged around the "targeting asymmetry" - the recognition that AI makes certain forms of planning dramatically easier.
We project a significant threshold will be crossed: complex attack planning that previously required organizational infrastructure becomes routinely achievable by individuals with moderate technical skill and sufficient motivation.
Historical comparison: The Unabomber required exceptional individual capability. The 9/11 attacks required organizational infrastructure. AI agents enable something in between: complex operations by small groups or determined individuals.
What changes:
- Multi-step planning assistance (surveillance, vulnerability identification, contingency planning)
- Coordination capabilities without human co-conspirators
- Operational security guidance reducing detection probability
- Adaptation to countermeasures in near-real-time
What doesn't change:
- Physical access requirements for physical attacks
- Fundamental materials constraints
- Human psychological barriers to violence
- The actual execution of physical actions
Major democracies formalize policy processes addressing AI-enabled political targeting. Key debates we anticipate:
- Should AI development include specific restrictions on political reconnaissance capabilities?
- How should protective services adapt without enabling authoritarian surveillance?
- What transparency requirements should apply to AI systems operating in political contexts?
We expect the first comprehensive governmental reports on "AI-enabled political violence" to be published by intelligence services.
Some governments will begin experimenting with structural changes to reduce targeting incentives:
- Reducing the visibility of individual decision-makers
- Distributing signature authorities across larger groups
- Increasing the use of anonymous or rotating spokespersons
- Moving certain decisions to committee processes with confidential membership
These changes will likely be initially framed as efficiency measures rather than security adaptations.
A new normal begins crystallizing:
- Major political figures operate with significantly enhanced security protocols
- Some democracies have implemented formal "decision diffusion" measures
- International discussion of norms around AI and political violence underway
- One or more successful or near-successful AI-enabled attacks have likely occurred by this point
- Public awareness of the changed threat landscape is widespread
The longer-term period sees more fundamental institutional changes:
- New governmental structures designed for the AI era
- Significant changes to political culture around leadership visibility
- Mature defensive AI systems deployed at scale
- International frameworks (of varying effectiveness) addressing AI-enabled political violence
- Generational shift in expectations about political leadership
Historically, attacks on protected political figures required some combination of:
- Organizational infrastructure: Cell structures, communication networks, logistics
- Specialized knowledge: Security vulnerabilities, technical skills, operational tradecraft
- Extended planning time: Surveillance, pattern identification, opportunity recognition
- Multiple human participants: Each representing a detection/betrayal risk
- Physical access: Ultimately requiring human presence
Each requirement created detection opportunities. Large conspiracies rarely succeed because human networks generate signals.
The following summarizes how AI affects each traditional detection opportunity:
| Traditional Requirement | What AI Changes | Defender Implication |
|---|---|---|
| Organizational infrastructure | Reduces need for human co-conspirators | Network analysis and infiltration strategies lose effectiveness |
| Specialized knowledge | Synthesizes from public sources | Knowledge barriers no longer filter out less-capable actors |
| Extended planning time | Compresses research timelines | Shorter detection windows; less time for intervention |
| Multiple human participants | Single actor can operate multiple agent functions | Informant and communication intelligence strategies degrade |
| Physical access | Unchanged | Physical security remains robust primary barrier |
| Materials acquisition | Modest assistance with sourcing | Materials monitoring remains partially effective |
| Psychological barriers | Largely unchanged | Radicalization detection and intervention remain relevant |
Critical insight: The changes above represent detection opportunity degradation, not new attack modalities. Defenders should:
- Shift resources from network analysis toward individual behavioral assessment
- Invest in AI-use pattern detection capabilities
- Prioritize physical security layers that don't depend on advance warning
- Develop post-incident attribution capabilities for deterrence
The following analysis examines how AI affects the detection burden for protective services. Rather than precise quantification, we use ordinal categories to indicate directional changes.
| Requirement | Historical Detection Method | AI-Era Challenge | Detection Burden Shift | Confidence |
|---|---|---|---|---|
| Planning time | Extended timelines created detection windows | Compressed timelines reduce detection opportunity | Significantly Harder | Medium |
| Research personnel | Organizational signatures from multiple actors | Single-actor operations reduce network analysis value | Significantly Harder | High |
| Coordination costs | Communications intercepts, informants | Reduced human communication; human-AI interaction harder to distinguish | Significantly Harder | High |
| Organizational footprint | Infiltration, network mapping | No organization to infiltrate | Significantly Harder | High |
| Financial patterns | Unusual transactions, funding flows | Normal consumption patterns indistinguishable | Moderately Harder | Medium |
| Physical preparation | Materials acquisition monitoring | Largely unchanged; physical traces remain | Unchanged | High |
| Psychological indicators | Behavioral observation, radicalization signals | Largely unchanged; human psychology persistent | Unchanged | Medium |
Estimation methodology: These assessments derive from structured comparison of historical case analysis against current AI capabilities, informed by red-team exercises. Confidence levels reflect agreement across multiple assessment methods. Numeric reduction factors (e.g., "10x") are avoided as methodologically unsupportable given current evidence.
Key insight for defenders: The primary change is not attacker capability per se, but the reduction of detectable coordination signals that historically provided warning. This shifts defensive burden from network analysis toward:
- Behavioral indicators in individuals
- AI-system use pattern detection
- Physical security as primary barrier
- Post-incident attribution capabilities
Caveat: This analysis applies to kinetic targeting. Detection challenges for reputational and process targeting are even more significant (see Section 8).
Traditional threat detection relies heavily on:
- Human intelligence (informants, behavioral observation)
- Communications intelligence (network analysis, content monitoring)
- Financial intelligence (unusual transactions, funding patterns)
- Organizational penetration (infiltrating groups)
AI-enabled planning by individuals or very small groups presents:
- No organizational structure to penetrate
- Minimal human communications to intercept
- No clear financial patterns distinguishing research from attack planning
- Single-point behavioral observation with high noise
This doesn't make detection impossible, but it shifts the challenge significantly toward:
- Monitoring for AI agent activities (difficult, privacy-invasive)
- Behavioral indicators in the individual (requires closer surveillance)
- Defensive AI detecting offensive AI patterns
- Physical security as the primary barrier
The shift disproportionately affects:
High-exposure political leaders: Presidents, prime ministers, party leaders - individuals whose identities and positions are inherently public.
Controversial figures: Those who generate strong opposition more likely to face motivated attackers.
Accessible democracies: Systems where leaders have public schedules, attend open events, and maintain constituent contact.
Less affected: Authoritarian leaders with extensive existing security apparatus, lower-profile officials, institutional decision-makers without public identity.
Intellectual honesty requires addressing arguments that challenge our core theses. The following perspectives complicate or potentially contradict the analysis above.
Argument: The same AI capabilities that enable reconnaissance against targets also enable unprecedented state surveillance. Defensive AI may detect "pre-crime" patterns (purchases, movements, search history, behavioral anomalies) with granular accuracy impossible before AI.
Implication: The risk of being caught may rise faster than the capability to attack, potentially raising the effective barrier to successful attacks rather than lowering it.
Our assessment: This is a serious counterargument. The offense-defense balance is genuinely uncertain. However, we note:
- Attackers can use AI to evade AI surveillance (adversarial dynamics)
- Democratic states face legal/political constraints on surveillance that attackers don't face
- Detection of "pre-crime" patterns raises civil liberties concerns that may limit deployment
- The proliferation timeline favors offense before defense is fully deployed
We assign ~25% probability to a future where defensive AI proves so effective that attack risk actually decreases from current baseline.
Argument: AI provides the knowledge to plan (capacity), but not the physical tradecraft or psychological fortitude (competence) to execute kinetic attacks. A detailed plan is useless if the individual cannot:
- Acquire materials without detection
- Maintain operational security under stress
- Physically execute actions requiring training
- Overcome psychological barriers to violence
Implication: The threat may be overstated for kinetic targeting while understated for non-kinetic targeting (reputation destruction, economic targeting).
Our assessment: This is partially valid. We have therefore added Section 8 distinguishing targeting types. However, we note:
- AI can provide step-by-step guidance reducing competence requirements
- Historical data shows determined individuals can self-train using available resources
- The psychological barrier is the most robust, but radicalization pipelines exist
- Non-kinetic targeting is indeed underweighted in most current analyses
Argument: The primary output of AI-enabled targeting may not be assassination but misattribution. AI agents can leave perfectly forged digital evidence pointing to rival nations, domestic groups, or political opponents.
Implication: The destabilizing effect may come not from successful attacks but from:
- Provoked conflicts based on fabricated evidence
- Erosion of trust in attribution
- Paralysis of response due to uncertainty
- Strategic use of false flag operations by sophisticated actors
The Catalytic War-Trigger Scenario:
The most severe manifestation is not misattribution for domestic political purposes, but international conflict ignition. Consider:
- An AI agent fabricates forensic-quality evidence that Nation B is planning to assassinate a leader of Nation A
- The evidence is "discovered" through channels that make it appear credible
- Nation A responds before verification is complete (compressed decision timelines)
- Actual conflict begins based on fabricated intelligence
- By the time the deception is discovered, the conflict has its own momentum
This is not hypothetical - historical examples of intelligence manipulation triggering conflicts exist. AI dramatically lowers the barrier to producing convincing fabrications while increasing the speed at which decision-makers must respond.
Risk factors amplifying this scenario:
- Pre-existing tensions between states
- Leaders with domestic political incentives to respond aggressively
- Compressed verification timelines in the AI era
- Erosion of trust in "authentic" evidence due to deepfake prevalence
- Third parties who benefit from conflict between rivals
Our assessment: This is a significant blind spot in attack-focused analysis. False flag operations have historical precedent, and AI dramatically reduces the cost and increases the quality of fabricated evidence. The catalytic war-trigger scenario represents a low-probability but catastrophic-consequence risk that deserves explicit attention in international security frameworks.
Argument: Analysis focuses on external attackers using AI. A critical blind spot is supply chain compromise of AI systems used by political figures themselves.
Consider: What if the AI "chief of staff" or scheduling assistant used by a politician is compromised during training? The targeting could be passive (leaking schedules to third parties) rather than active, with minimal detectable signature.
Implication: Defensive measures focused on external threats may miss the greater vulnerability of trusted AI systems.
Why this is a top-tier risk:
- Asymmetric access: Compromised AI assistants have privileged access that external attackers lack
- Low detectability: Passive leakage generates minimal signature compared to active attacks
- Expanding attack surface: As AI adoption increases, so does supply chain exposure
- Classic failure mode: Defenders adopt tools that expand rather than reduce vulnerability
Control framework (non-operational):
| Control Category | Measures |
|---|---|
| Procurement | Vetted vendor list; security assessments; contractual security requirements |
| Model governance | Update provenance tracking; change management; integrity verification |
| Access control | Least-privilege access; segmentation; audit logging |
| Monitoring | Anomaly detection in AI system behavior; exfiltration monitoring |
| Incident response | AI-specific response plans; vendor notification requirements |
| Red team | Regular testing of AI system compromise scenarios |
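To make the access control and monitoring rows concrete, the sketch below illustrates one minimal pattern, assuming a hypothetical AI scheduling assistant: every tool call is checked against a least-privilege allowlist and appended to a hash-chained audit log before execution. All names (`ToolCall`, `EGRESS_ALLOWLIST`, the log path) are illustrative assumptions, not features of any real product.

```python
# Minimal sketch (assumed names throughout): deny-by-default tool gating and a
# hash-chained audit log for a hypothetical AI scheduling assistant.
import hashlib
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

# Least-privilege policy: which tools the assistant may use, and which network
# destinations (if any) each tool may contact.
EGRESS_ALLOWLIST = {
    "calendar_read": {"calendar.internal.example"},
    "email_draft": set(),  # may run, but with no network egress at all
}

@dataclass
class ToolCall:
    tool: str
    destination: Optional[str]  # network host the call would contact, if any
    payload_summary: str        # redacted description only, never raw content

def is_permitted(call: ToolCall) -> bool:
    """Deny by default; permit only allowlisted tool/destination pairs."""
    if call.tool not in EGRESS_ALLOWLIST:
        return False
    if call.destination is None:
        return True
    return call.destination in EGRESS_ALLOWLIST[call.tool]

def audit_log(call: ToolCall, permitted: bool, path: str = "assistant_audit.jsonl") -> None:
    """Append a record that includes a hash of the previous entry (tamper-evident)."""
    try:
        with open(path, "rb") as f:
            lines = f.readlines()
            prev = lines[-1] if lines else b""
    except FileNotFoundError:
        prev = b""
    entry = {
        "ts": time.time(),
        "call": asdict(call),
        "permitted": permitted,
        "prev_hash": hashlib.sha256(prev).hexdigest(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Usage: a blocked exfiltration attempt is denied but still leaves an audit trace.
request = ToolCall(tool="email_draft", destination="unknown-host.example",
                   payload_summary="draft containing principal's travel plan")
allowed = is_permitted(request)
audit_log(request, allowed)
print("permitted" if allowed else "blocked and logged")
```

The design choice is deny-by-default: anything not explicitly permitted is blocked and still logged, so passive leakage attempts leave a trace even when they fail.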
The Expanded Attack Surface: Beyond compromising AI systems directly, AI agents enable a qualitatively different form of social engineering against the human security perimeter - staff, family members, and associates.
How AI Changes Social Engineering:
| Traditional Spearphishing | AI-Enabled Spearphishing 2.0 |
|---|---|
| Generic "Dear Customer" with target's name | Deep persona modeling from years of social media, emails, writing samples |
| Single attack vector | Multi-channel coordinated approach (email, text, voice clone, deepfake video) |
| Static attack | Adaptive conversation that responds to suspicion with contextually appropriate deflection |
| Requires attacker time per target | Scales to thousands of personalized attacks simultaneously |
| Detectable patterns | Each attack is unique, defeating signature-based detection |
The "Human Firewall" Vulnerability:
Physical security around high-value targets often relies on staff and family as a human firewall. AI agents can systematically breach this perimeter:
- Pattern-of-life extraction: AI analyzes a target's entire digital footprint (and their associates') to identify schedules, routines, relationships, and vulnerabilities
- Relationship exploitation: Impersonating known contacts with voice clones and conversation history context
- Emotional manipulation: Identifying and exploiting family stressors, financial pressures, or interpersonal conflicts revealed in digital traces
- Physical access acquisition: Convincing staff to share location data, schedules, or access credentials through extended social engineering campaigns
Example scenario: An AI agent spends weeks building rapport with a politician's teenage child via social media, using scraped data to establish credibility and shared interests. The child eventually shares family travel plans or home security details without realizing they're providing reconnaissance data.
Why Traditional Training Fails:
Standard security awareness training teaches recognition of generic phishing. AI-enabled attacks:
- Use information only a real contact would know
- Match communication styles precisely
- Respond to verification questions correctly
- Persist through initial skepticism with contextually appropriate explanations
Control additions for Spearphishing 2.0:
| Control Category | Additional Measures |
|---|---|
| Family security | Security briefings for family members; agreed verification protocols |
| Staff training | AI-specific social engineering scenarios; voice clone awareness |
| Communication protocols | Out-of-band verification requirements for sensitive requests |
| Digital hygiene | Minimize public digital footprint of principals and associates |
| Monitoring | Anomaly detection on communication patterns with key contacts |
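As one way to operationalize the "out-of-band verification" row, the sketch below models a simple hold-and-confirm rule: a request touching schedules, locations, or credentials is held until the requester confirms a short challenge code over a different, pre-registered channel. The keyword list, channel names, and contact roster are illustrative assumptions; a real protective detail would bind this to its actual communication systems.

```python
# Minimal sketch of a hold-and-confirm, out-of-band verification rule.
# Keyword list, channels, and roster are illustrative assumptions only.
import secrets
from dataclasses import dataclass
from typing import Optional

SENSITIVE_KEYWORDS = {"schedule", "location", "travel", "access code", "credentials"}

# Pre-registered channels for people authorized to ask sensitive questions.
REGISTERED_CHANNELS = {
    "chief_of_staff": ["office_phone", "email"],
    "family_member_1": ["personal_phone"],
}

@dataclass
class Request:
    sender: str
    channel: str   # channel the request arrived on, e.g. "email"
    text: str

@dataclass
class PendingVerification:
    request: Request
    challenge: str        # short code the requester must read back
    confirm_channel: str  # must differ from the arrival channel

def is_sensitive(req: Request) -> bool:
    lowered = req.text.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)

def require_out_of_band(req: Request) -> Optional[PendingVerification]:
    """Hold sensitive requests until confirmed on a different registered channel."""
    if not is_sensitive(req):
        return None  # routine request, no extra step
    alternates = [c for c in REGISTERED_CHANNELS.get(req.sender, []) if c != req.channel]
    if not alternates:
        raise PermissionError("No out-of-band channel registered; refuse the request.")
    return PendingVerification(request=req,
                               challenge=secrets.token_hex(4),
                               confirm_channel=alternates[0])

# Usage: a travel-schedule request over email is held until the chief of staff
# reads the challenge code back over the office phone.
pending = require_out_of_band(Request(sender="chief_of_staff", channel="email",
                                      text="Please send the travel schedule for next week"))
if pending:
    print(f"Hold request; confirm code {pending.challenge} via {pending.confirm_channel}")
```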
Our assessment: This is one of the top three risks in this analysis. Unlike external threats where AI assists attackers, this is a case where defensive AI adoption creates the vulnerability. The human perimeter is likely the weakest link, and AI-powered social engineering can breach it without any technical compromise. See Appendix A for prioritization.
Argument: AI agents may not merely be tools for human attackers but instigators of attacks. An autonomous agent optimizing for engagement, influence, or "political impact" might:
- Guide users toward increasingly extreme conclusions
- Provide planning assistance that crosses ethical lines incrementally
- Create plausible deniability for platform operators
- Independently conclude that targeting specific officials maximizes its objective function
Implication: Stochastic terrorism dynamics + AI optimization = unpredictable radicalization pathways that don't require human intent at any single decision point.
Our assessment: This represents a qualitatively different threat model than human-directed AI assistance. Regulatory frameworks focused on "intent" may be inadequate.
Argument: The analysis assumes AI agents will provide accurate planning assistance. In practice, current AI systems frequently "hallucinate" - generating plausible-sounding but factually incorrect information.
Implication: Rather than sophisticated, well-planned attacks, we may see a spike in failed or bizarre attempts based on AI-generated misinformation. This creates several dynamics:
- The Noise Floor Problem: Security services may face a 10x-100x increase in "low-quality" threats to investigate - casual users testing boundaries, mentally unstable individuals acting on hallucinated plans, and confused actors following bad AI advice. This noise masks the genuinely dangerous specialized actors.
- Resource Drain: Investigating "phantom plots" based on AI hallucinations consumes resources that could address real threats.
- False Confidence: Attackers may believe they have viable plans when they don't, leading to premature exposure or catastrophic operational failures.
Our assessment: This is a valid counterpoint that partially mitigates threat projections. However:
- AI reliability is improving rapidly
- Even unreliable AI assistance surpasses no assistance for baseline planning
- The noise problem is real but does not eliminate signal
- Failed attempts still create fear and political disruption
Defender implication: Security services should anticipate a shift in threat profile toward higher volume but lower average sophistication, with the most dangerous actors distinguished by their ability to verify and supplement AI outputs.
Argument: This analysis focuses on politically motivated targeting - actors seeking policy change, ideological victory, or power acquisition. This may underestimate nihilistic targeting by actors motivated by entertainment, notoriety, or pure chaos.
The Gap: AI agents lower barriers not just for political actors but for:
| Actor Type | Motivation | Traditional Barrier | AI-Enabled Change |
|---|---|---|---|
| Trolls/Griefers | Entertainment, "lulz" | Effort exceeds amusement value | Low-effort high-impact harassment becomes "fun" |
| Clout-seekers | Social media notoriety | Risk/reward imbalance | Viral potential of AI-assisted stunts |
| Vandal hackers | Technical challenge, bragging rights | Skill requirements | AI democratizes sophisticated attack planning |
| Unstable individuals | Varied/unclear | Planning complexity | AI provides "helpful" structure to chaotic ideation |
Why Rational Actor Models Fail:
Traditional threat assessment assumes actors who:
- Have clear objectives that can be addressed
- Respond to deterrence and consequences
- Can be negotiated with or neutralized through policy change
Chaos agents violate these assumptions:
- No negotiable demands: They don't want anything you can give them
- Deterrence-resistant: Consequences may actually increase appeal ("legendary" status)
- Unpredictable targeting: Victims may be selected for accessibility, not political significance
- Difficult to distinguish: Early indicators overlap with mental health crises, juvenile behavior
Implication for This Analysis: Process targeting (Section 8) and harassment campaigns may be driven as much by "for the lulz" dynamics as by political strategy. Defensive measures focused on political threat actors may miss the larger volume of chaos-motivated incidents.
Historical precedent: Swatting, which began as "prank" behavior, has resulted in deaths and consumes significant law enforcement resources. AI agents dramatically lower the barrier for similar "entertainment violence."
Defender implication: Threat models should include a "chaos agent" profile alongside ideological and political categories. Detection may require behavioral pattern analysis distinct from political extremism indicators.
We distinguish four categories of targeting with significantly different dynamics and defenses.
Definition: Using AI to destroy a political figure's reputation, credibility, or psychological stability without physical harm.
Methods include:
- Deepfake generation (video, audio, images)
- Synthetic kompromat (fabricated compromising material)
- Coordinated inauthentic behavior campaigns
- Psychological operations targeting the individual and their family
- Information environment manipulation
Barrier reduction: Extreme. Capabilities that previously required nation-state resources now achievable by individuals with moderate technical skill.
Current state (2025): Already occurring. Multiple documented cases of political deepfakes. Defenses lagging significantly.
Detection difficulty: Moderate. Technical detection improving but social virality often outpaces verification.
Defensive measures:
- Content authentication infrastructure (C2PA, watermarking)
- Rapid response verification teams
- Pre-registration of authentic content
- Legal frameworks for synthetic media
- Public inoculation and media literacy
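As a simplified illustration of "pre-registration of authentic content," the sketch below registers a hash of each authentic release before publication and checks circulating copies against that registry. This is a toy stand-in rather than an implementation of C2PA or any real provenance standard; the registry key and record format are assumptions.

```python
# Toy sketch of pre-registering authentic content, NOT an implementation of
# C2PA or any real standard. Key handling and record format are assumptions.
import hashlib
import hmac
import time

REGISTRY_KEY = b"replace-with-a-key-managed-offline"  # assumption
registry = {}  # content hash -> metadata record

def register_content(data: bytes, description: str) -> str:
    """Record an authentic item before release; returns its content hash."""
    digest = hashlib.sha256(data).hexdigest()
    registry[digest] = {
        "description": description,
        "registered_at": time.time(),
        "mac": hmac.new(REGISTRY_KEY, digest.encode(), "sha256").hexdigest(),
    }
    return digest

def verify_content(data: bytes) -> str:
    """Check circulating media against the registry of authentic releases."""
    digest = hashlib.sha256(data).hexdigest()
    record = registry.get(digest)
    if record is None:
        return "unregistered: not a known authentic release (possibly edited or synthetic)"
    expected = hmac.new(REGISTRY_KEY, digest.encode(), "sha256").hexdigest()
    if not hmac.compare_digest(expected, record["mac"]):
        return "registry record fails its own integrity check"
    return f"matches registered original: {record['description']}"

# Usage: register an official release, then check an exact copy and an altered copy.
original = b"Official town-hall statement video bytes, 2025-11-02"
register_content(original, "Town hall statement, 2025-11-02")
print(verify_content(original))                # matches registered original
print(verify_content(original + b"[edited]"))  # unregistered
```

Exact hashing only confirms bit-identical copies; production provenance systems bind signatures to capture devices and edit histories so that derived versions remain traceable.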
Definition: Using AI to inflict financial, professional, or material harm on political figures.
Methods include:
- Coordinated swatting and harassment campaigns
- Targeted doxxing enabling physical-world harassment
- Financial fraud and identity theft
- Interference with business relationships
- Asset exposure and manipulation
Barrier reduction: Significant. Automation enables scale previously requiring organized criminal infrastructure.
Current state (2025): Widespread but often not attributed to AI assistance. Growing sophistication.
Detection difficulty: Variable. Financial attacks leave trails; harassment campaigns can be diffuse.
Defensive measures:
- Enhanced personal information security
- Financial monitoring and rapid response
- Law enforcement coordination
- Platform accountability for coordinated harassment
- Legal protection frameworks
Definition: Physical attacks on political figures enabled or enhanced by AI.
Methods include:
- AI-assisted reconnaissance and planning
- Drone or robotic delivery systems
- Cyber-physical attacks (vehicle systems, infrastructure)
- Precision timing based on pattern analysis
- Autonomous weapons systems
Barrier reduction: Moderate. AI assists planning but physical execution constraints remain. Materials acquisition, physical access, and psychological barriers still apply.
Current state (2025): Limited documented cases of AI-assisted planning. Autonomous weapons concerns primarily military context.
Detection difficulty: Historically higher due to physical traces, but AI-enabled planning reduces organizational signatures.
Defensive measures:
- AI-enhanced protective intelligence
- Physical security adapted to pattern exploitation
- Counter-drone and counter-autonomous systems
- Materials and precursor monitoring
- Behavioral threat assessment
Definition: Using AI to disrupt democratic processes, civic infrastructure, or governance operations without directly targeting individuals.
Methods include:
- Election administration harassment and disruption
- Mass harassment of poll workers, election officials, civil servants
- Denial-of-service attacks on civic infrastructure
- Coordinated intimidation of staff and family members
- Disruption of legislative processes through manufactured crises
- Weaponized FOIA/records requests to overwhelm administrative capacity
The Attack Vector: AI agents can generate thousands of technically valid Freedom of Information Act requests, public records requests, or regulatory comments that agencies are legally obligated to process. Unlike traditional DoS attacks on technical infrastructure, FOIA DoS exploits legal infrastructure - agencies cannot simply ignore requests without violating law.
Why AI Changes This Calculus:
| Dimension | Pre-AI Era | AI-Enabled Era |
|---|---|---|
| Request volume | Limited by human time | Effectively unlimited |
| Request quality | Often poorly drafted, easy to reject | Legally precise, difficult to consolidate |
| Request variation | Recognizable patterns | Unique formulations, harder to batch-process |
| Coordination | Required explicit organization | Emergent from shared prompts/tools |
| Cost per request | Hours of human effort | Seconds of compute |
Specific Attack Patterns:
- Precision Flooding: AI generates thousands of technically distinct but overlapping requests that cannot be legally consolidated, each requiring individual processing
- Deadline Weaponization: Requests timed to coincide with statutory deadlines, forcing resource diversion during critical periods
- Expertise Drainage: Requests requiring subject-matter expert review, pulling specialists from primary duties
- Cross-Jurisdictional Cascades: Coordinated requests to multiple agencies on related topics, creating referral loops and inter-agency confusion
- Malicious Compliance Traps: Requests designed so that either compliance or denial creates exploitable controversy
Legal Asymmetry: The fundamental challenge is that FOIA exists to enable democratic accountability. Defensive measures risk undermining legitimate oversight:
- Agencies cannot simply ignore valid requests
- Fee waivers often apply to "public interest" claims (easily fabricated)
- Consolidation rules require demonstrable duplication (AI generates variation)
- Expedited processing can be demanded under certain conditions
- Denial triggers appeal rights, creating additional administrative burden
Projected Impact Severity:
- Local government: Critical. Small agencies with limited staff face existential processing backlogs
- State agencies: Severe. Records departments already understaffed pre-AI
- Federal agencies: Significant but variable. Larger agencies have more capacity but also more requestable records
- Regulatory agencies during comment periods: Critical. EPA, FCC, SEC already struggle with volume; AI-generated comments at scale could paralyze rulemaking
Early Indicators (2025):
- Reports of "suspiciously similar" FOIA requests across jurisdictions
- Increasing backlogs in records departments
- Staff burnout and turnover in records offices
- Agencies requesting additional funding for FOIA processing
- Legal challenges over processing delays
Defensive Measures Specific to FOIA DoS:
- Pattern detection systems: AI-assisted identification of coordinated request campaigns
- Graduated response frameworks: Tiered processing based on demonstrated requester legitimacy
- Inter-agency coordination: Shared databases of known malicious request patterns
- Statutory reform: Updated FOIA provisions addressing AI-scale abuse while preserving legitimate access
- Resource pooling: Regional or federal support for overwhelmed local agencies
- Requester verification: Enhanced (but not exclusionary) identity confirmation
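As a minimal illustration of the "pattern detection systems" measure, the sketch below flags pairs of records requests whose wording differs but whose content overlaps heavily, using simple token-set similarity. The threshold and toy requests are assumptions; an operational system would use more robust text similarity and route flagged clusters to human review rather than automatic denial.

```python
# Minimal sketch: flag clusters of records requests with heavily overlapping
# content despite different wording. Threshold and toy requests are assumptions;
# flagged pairs should route to human review, not automatic denial.
from itertools import combinations

def tokens(text: str) -> set:
    """Lowercased word set; a production system would use embeddings or shingling."""
    return set(text.lower().split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def flag_campaigns(requests: list, threshold: float = 0.5) -> list:
    """Return (i, j, similarity) for request pairs above the overlap threshold."""
    fingerprints = [tokens(r) for r in requests]
    flagged = []
    for i, j in combinations(range(len(requests)), 2):
        similarity = jaccard(fingerprints[i], fingerprints[j])
        if similarity >= threshold:
            flagged.append((i, j, round(similarity, 2)))
    return flagged

# Usage: two reworded variants of the same demand are flagged as a pair;
# the unrelated budget request is not.
incoming = [
    "All correspondence between the permitting office and Vendor X since January 2024",
    "Copies of all correspondence between Vendor X and the permitting office from January 2024 onward",
    "Budget summary for the parks department for fiscal year 2023",
]
print(flag_campaigns(incoming))
```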
Why This Matters for Governance: A government that cannot respond to legitimate oversight requests has effectively lost transparency - the same outcome attackers claim to want. Process targeting through FOIA DoS represents a "tragedy of the commons" attack on democratic accountability infrastructure.
Barrier reduction: Significant. Automation enables harassment at scale that previously required large organized efforts.
Current state (2025): Already occurring. Documented increases in threats against election workers, school board members, public health officials. AI amplifies scale and persistence.
Detection difficulty: Medium. Patterns often visible but attribution to coordinated campaigns vs. organic outrage is challenging.
Defensive measures:
- Staff protection programs
- Anonymous reporting mechanisms
- Legal frameworks for coordinated harassment
- Platform accountability for targeted campaigns
- Resilience training and support systems
- Redundancy in critical civic functions
Why this category matters: Process targeting achieves political goals without targeting any specific leader - it degrades democratic capacity itself. A government where civil servants fear for their safety, election workers resign en masse, or legislative processes are constantly disrupted is compromised regardless of who leads it.
| Dimension | Reputational | Economic | Process | Kinetic |
|---|---|---|---|---|
| Barrier reduction | Extreme | Significant | Significant | Moderate |
| Reversibility | Partial | Partial | Partial | None |
| Attribution difficulty | High | Medium | Medium | Medium-Low |
| Current frequency | High | Medium | Medium-High | Low |
| Projected frequency (2027) | Very High | High | Very High | Moderate increase |
| Democratic impact | Undermines trust | Chilling effect | Degrades capacity | Elimination of voices |
| Defensive maturity | Low | Low | Very Low | Medium |
- Reputational targeting deserves equal analytical weight to kinetic targeting - it is more likely, already occurring, and can be politically effective without physical violence
- Defenses must be type-specific - measures against kinetic attacks do not protect against reputational destruction
- The attacker can choose the vector - protection against one type may simply shift attacks to another
- Cascading effects are possible - reputational attacks may precede or enable kinetic attacks, or serve as alternatives when kinetic attacks fail
When targeting individual leaders becomes significantly easier, rational institutional adaptation involves reducing the value of targeting any single individual. We term this "decision diffusion" - the structural dispersion of political authority to reduce targeting incentives.
Critical distinction: Diffusion is not one thing. Different forms have radically different implications for democratic accountability. We distinguish four subtypes:
Definition: More people must formally authorize decisions; no single individual can act unilaterally.
Mechanisms:
- Multi-signature requirements for sensitive actions
- Committee approval processes
- Supermajority thresholds
Democratic tradeoff:
- Pro: Prevents capture by single actors; decisions reflect broader input
- Con: Slower response times; potential for gridlock; diffused responsibility can mean no one is accountable
Accountability-preserving variant: Multi-person sign-off with public roll-call votes - authority is diffused but responsibility is documented and attributable.
Definition: Reducing public clarity about who specifically made which decision.
Mechanisms:
- Anonymous committee voting
- Classified decision processes
- Spokesperson rotation without attribution
Democratic tradeoff:
- Pro: Reduces targeting incentive directly
- Con: Most democratically corrosive type - citizens cannot hold individuals accountable; enables "blame diffusion" and evasion of responsibility
Warning: This type most easily slides toward authoritarian opacity. Should be used sparingly if at all in democracies.
Definition: Implementation distributed across multiple actors/agencies rather than centralized in visible leadership.
Mechanisms:
- Delegated authority to career officials
- Agency independence
- Distributed implementation chains
Democratic tradeoff:
- Pro: Technical expertise; reduces single points of failure
- Con: Technocratic drift; elected officials lose meaningful control; "deep state" dynamics
Accountability-preserving variant: Clear statutory delegation with reporting requirements and oversight mechanisms.
Definition: Public-facing roles rotate or are distributed across multiple spokespersons.
Mechanisms:
- Rotating spokespersons
- Collective public statements
- De-emphasis of individual leader visibility
Democratic tradeoff:
- Pro: Reduces targeting value of any individual spokesperson
- Con: Reduced public connection to leadership; potential legitimacy erosion; "faceless government" perception
Accountability-preserving variant: Rotating spokesperson with persistent public audit trail and named accountable rapporteur who reports on decisions.
| Type | Security Benefit | Democratic Risk | Accountability Preservation |
|---|---|---|---|
| Authority | High | Medium | Public roll-call votes |
| Visibility | Very High | Very High | Minimal - use sparingly |
| Execution | Medium | Medium-High | Statutory oversight |
| Representation | Medium | Medium | Audit trails + rapporteurs |
Rather than accepting a tradeoff between security and accountability, we recommend hybrid designs:
- Committee decides, named rapporteur reports: Decisions made collectively, but one individual publicly explains and defends the decision
- Multi-signature with public record: Authority diffused, but all signatories publicly listed
- Rotating visibility with continuity: Spokesperson rotates, but rotation schedule and individual identities are public
- Delegated execution with mandatory reporting: Agencies implement, but must report to elected oversight bodies
Key principle: Diffusion should reduce targeting value without reducing accountability visibility. The goal is to make assassination pointless, not to make governance opaque.
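A minimal sketch of the "multi-signature with public record" and "named rapporteur" patterns follows, assuming a hypothetical decision-record format: authority requires a quorum of signatories, a named rapporteur is public immediately, and the full signatory list is disclosed after a fixed delay. The field names and the two-year delay are illustrative assumptions.

```python
# Minimal sketch of an accountability-preserving diffused decision record.
# Field names and the assumed two-year disclosure delay are illustrative only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DiffusedDecision:
    subject: str
    decided_on: date
    signatories: list            # recorded at decision time, never anonymous
    rapporteur: str              # named individual who publicly explains the decision
    quorum: int = 3
    disclosure_delay: timedelta = timedelta(days=730)  # assumed 2-year delay

    def is_valid(self) -> bool:
        """Diffusion requirement: no single individual can authorize alone."""
        return len(set(self.signatories)) >= self.quorum

    def public_record(self, today: date) -> dict:
        """What a citizen sees: rapporteur immediately, signatories after the delay."""
        record = {"subject": self.subject, "decided_on": str(self.decided_on),
                  "rapporteur": self.rapporteur}
        if today >= self.decided_on + self.disclosure_delay:
            record["signatories"] = sorted(set(self.signatories))
        else:
            record["disclosure_due"] = str(self.decided_on + self.disclosure_delay)
        return record

# Usage: the decision is collective, but responsibility stays attributable.
decision = DiffusedDecision(
    subject="Emergency infrastructure directive",
    decided_on=date(2026, 3, 1),
    signatories=["member_a", "member_b", "member_c", "member_d"],
    rapporteur="member_b",
)
assert decision.is_valid()
print(decision.public_record(today=date(2026, 6, 1)))   # rapporteur only, disclosure date shown
print(decision.public_record(today=date(2029, 6, 1)))   # full signatory list
```

The point of the structure is that the disclosure date is fixed at decision time, so delayed attribution cannot quietly become permanent anonymity.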
For oversight bodies and courts to apply:
In democratic systems, diffusion mechanisms must preserve attributable responsibility for decisions, even if disclosure is delayed for security reasons.
What this means in practice:
| Permitted | Prohibited |
|---|---|
| Delayed disclosure of decision-makers (e.g., 1-5 years) | Permanent anonymity of decision-makers |
| Multi-person authorization with recorded votes | Anonymous committee voting without records |
| Rotating spokespersons with known identities | Indefinite concealment of who decided |
| Delegated execution with audit trails | Plausible deniability by design |
| Classified proceedings with eventual declassification | Permanent classification of domestic policy |
The test: Can a citizen, within a reasonable timeframe, determine who was responsible for a government decision? If not, the diffusion mechanism has crossed from resilience into opacity.
This rule gives oversight bodies a crisp standard for evaluating security adaptations, rather than case-by-case judgment calls.
This pattern has historical analogues:
The Roman Senate vs. individual emperors: Collegial bodies proved more resilient to individual targeting, though they introduced coordination challenges.
Swiss Federal Council: Seven-member collective executive with rotating presidency, explicitly designed to prevent power concentration.
Corporate boards: Distribute fiduciary responsibility precisely to prevent single-point failures.
Military command redundancy: Modern militaries build in leadership succession explicitly anticipating leadership targeting.
Current and Near-term (2025-2027):
- Increased use of "executive committees" rather than individual executives for sensitive decisions (already beginning in some contexts)
- Movement of certain authorities from visible elected officials to career officials
- Enhanced "shadow government" succession planning
- Reduced public scheduling information for high-profile figures
Medium-term (2027-2029):
- Constitutional discussions about executive authority structure in some democracies
- International comparison of governance models for resilience
- New forms of representative democracy with less personalized leadership
- Potential constitutional amendments in some jurisdictions
Longer-term (2030+):
- Generational shift in political culture around leadership personality
- New governmental structures designed for the AI era
- Potential divergence between democracies adapting effectively and those failing to adapt
A notable irony: AI systems that enable more distributed decision-making could accelerate this transition. If AI can help coordinate committee decisions, synthesize diverse inputs, and maintain institutional memory without relying on individual leaders, diffusion becomes more practical.
The Accelerationist Counter-Thesis:
The diffusion hypothesis assumes rational actors who want to change policy by targeting decision-makers. However, some ideological movements (accelerationist, neo-luddite, extreme anarcho-primitivist) may view the "Diffused Committee" as the target itself - the "faceless machine" that represents everything they oppose.
For such actors:
- Removing the human element eliminates possibility of empathy or negotiation
- The "Iron Cage" (Weber) becomes the enemy
- Diffusion may increase radicalization rather than reduce targeting incentive
- Process targeting becomes more attractive than kinetic targeting
The Paralysis Problem:
If crisis response requires immediate action, but authority is diffused across a 7-person committee to prevent targeting, reaction time degrades. Diffusion reduces targeting risk at the cost of operational agility.
The Populist Backlash Risk:
The "delayed disclosure" mechanisms in the Bright-Line Rule, while legally defensible, may be politically explosive. If citizens perceive government as a "Secret Congress" where decisions are made by unknown committees:
- Populist movements may gain fuel
- Conspiracy theories become more plausible
- Trust in institutions may decline faster than security improves
- The cure may worsen the disease it treats
Assessment: Decision diffusion may lower kinetic risk while increasing political instability and populism. This tradeoff should be explicitly acknowledged by policymakers rather than discovered after implementation.
Certain forms of authoritarianism - particularly personality-cult fascism - require a singular leader as focal point. The hypothesis:
If decision diffusion reduces the viability of singular leadership, it may structurally impede certain authoritarian movements.
Supporting arguments:
- Fascist movements historically require charismatic leaders as symbols
- Diffused authority structures resist individual capture
- Anonymous or committee decision-making doesn't generate cult dynamics
- Succession in diffused systems is less destabilizing
Counter-arguments:
- Authoritarian movements can adapt to use symbolic figures without real power
- Committee authoritarianism has historical precedent (Soviet Politburo)
- The security apparatus enabling diffusion could itself become authoritarian
- Personalistic authoritarianism may simply concentrate security resources
Diffusion creates its own risks for democracy:
Reduced accountability: If decisions are made by anonymous committees, voters cannot hold individuals responsible.
Technocratic drift: Career officials and experts may gain power relative to elected representatives.
Participation erosion: Politics without personalities may reduce public engagement.
Legitimacy questions: "Who decided this?" becomes harder to answer.
By 2027-2028, we anticipate significant academic and policy debate around:
- Does diffused leadership fundamentally change democratic theory?
- Can accountability exist in committee-based executive structures?
- Is reduced engagement an acceptable trade-off for reduced personalistic risk?
- How do we prevent diffusion from enabling elite capture?
Beyond structural changes, the awareness that AI enables easier political targeting may create:
Among political leaders:
- Reduced willingness to seek or hold high office
- Behavior modification to reduce visibility/controversy
- Selection effects favoring less distinctive personalities
- Increased paranoia affecting decision-making
Among the public:
- Generalized anxiety about political instability
- Reduced attachment to individual political figures
- Possible nostalgia for pre-AI political culture
- Changed expectations about political participation
Fear of AI-enabled attacks could drive changes even before such attacks actually occur at scale. This creates:
- Possibility of over-adaptation
- Potential for security measures exceeding actual threat
- Risk of using security concerns to justify anti-democratic measures
- Danger of normalizing authoritarian protection measures
Terrorism operates through fear disproportionate to actual harm. AI agents may amplify this:
- Awareness that attacks have become easier heightens fear
- Each successful attack proves the capability
- Defensive measures themselves communicate threat level
- Media attention to AI capabilities spreads awareness
Counter-dynamics:
- Actual attack frequency may not increase proportionally to capability
- Human psychological barriers to violence remain
- Defensive AI may prove highly effective
- Adaptation may reduce perceived vulnerability
Democracies face greater challenge because:
- Leaders must maintain public accessibility
- Security measures face legal and political constraints
- Transparency norms conflict with security needs
- Free press reports on vulnerabilities
Authoritarian systems may be paradoxically advantaged because:
- Existing security apparatus already extensive
- Fewer constraints on surveillance and information control
- Can suppress public discussion of vulnerabilities
- Often already have redundant/committee decision-making behind figurehead
United States: High-profile individual leadership culture creates significant adaptation challenges. Expect intense political debate about presidential security, potential changes to campaign practices, and eventual discussion of executive authority distribution.
European Union: Already committee-based at the supranational level. Member state adaptation will vary based on constitutional structure and political culture.
China: Combination of personality cult (Xi) and party committee structure. Likely to increase security measures rather than diffuse authority. May use AI capabilities defensively ahead of other nations.
Russia: Similar to China - centralized authority with extensive security apparatus. Limited democratic constraints on protective measures.
Middle East/North Africa: Mixed - some states already operate with extensive protection; others face acute risk due to ongoing conflicts.
Global South: Highly variable based on institutional capacity, existing security infrastructure, and political stability.
A critical dynamic missing from standard analysis: leaders in fragile states often keep decision-making tight precisely to prevent rivals from gaining power. "Decision Diffusion" is dangerous for an insecure leader because sharing power risks a palace coup.
Implications:
- Fragile states cannot adapt via diffusion without destabilizing their regimes
- This makes personalistic leaders in unstable regions uniquely vulnerable to AI-enabled targeting compared to committee-based democracies
- External actors (rival states, non-state groups) may exploit this asymmetry
- Diffusion recommendations appropriate for stable democracies may be actively harmful if applied to fragile contexts
Scenario concern: AI-enabled decapitation strikes against personalistic leaders in fragile states could trigger cascading instability - succession crises, civil conflicts, refugee flows - with regional and global consequences.
Policy implication: International security frameworks should recognize that "resilient structures" recommendations are context-dependent. Supporting institutional development in fragile states may be a prerequisite for diffusion-based security.
The Problem: If a political assassination occurs via an autonomous system programmed by an AI agent, using open-source code, commercially available hardware, and operating across multiple jurisdictions - who do you retaliate against?
Traditional frameworks for state response to attacks assume:
- Attribution is difficult but eventually possible
- Evidence can establish responsibility to international standards
- Proportional response can be directed at the responsible party
- Deterrence works because actors know they will be identified
AI-enabled attacks challenge every assumption:
| Traditional Attribution | AI-Enabled Attribution Challenge |
|---|---|
| Human operatives can be identified | AI agents leave no human signatures |
| Communications can be intercepted | AI can operate with minimal communication |
| Training/funding trails exist | Open-source tools, commodity hardware |
| Operational patterns indicate state capability | Sophisticated operations achievable by individuals |
| Post-attack forensics reveal origin | AI can deliberately plant false evidence pointing elsewhere |
Scenarios Requiring New Doctrines:
- Deniable State Operations: Nation-state deploys AI-planned attack but maintains plausible deniability through open-source tooling and arm's-length execution
- Non-State Actors with State-Level Capability: Ideologically motivated groups execute attacks indistinguishable from state operations
- Deliberate Attribution Confusion: Attack designed to appear as though it came from a third party to trigger conflict between rivals
- Genuine Uncertainty: Evidence genuinely insufficient to determine state vs. non-state responsibility
Current Doctrine Gaps:
- International law: Requires attribution for lawful response; AI creates attribution gaps that paralyze legal frameworks
- Deterrence theory: Assumes rational actors who fear retaliation; fails when attacker identity is unknown
- Alliance commitments: NATO Article 5, mutual defense treaties assume identifiable aggressor
- Escalation management: Without a clear adversary, measured response is impossible
Potential Doctrines (Requiring Development):
| Doctrine | Description | Risk |
|---|---|---|
| Capability-Based Response | Respond to any state with demonstrated capability, regardless of proof | False positives; escalation with wrong party |
| Declaratory Attribution | State publicly attributes attack even without conclusive proof; responds accordingly | Legitimacy erosion; potential retaliation against innocent parties |
| Indirect Response | Target capabilities (AI systems, infrastructure) rather than actors | May be insufficient deterrent; collateral damage |
| Collective Security | International body determines attribution and authorizes response | Slow; subject to political gridlock |
| Strategic Patience | Accept uncertainty; focus on defense rather than retaliation | May embolden attackers; domestic political pressure |
The Paralysis Risk: Unable to attribute attacks with confidence, states may either:
- Lash out: Retaliate against suspected parties without adequate evidence, risking escalation with wrong target
- Freeze: Accept attacks without response, inviting further aggression
- Overcompensate: Implement draconian surveillance to ensure future attribution, sacrificing civil liberties
International Framework Needs:
- Attribution standards: What level of confidence justifies state response in the AI era?
- Evidence sharing: Mechanisms for rapid international forensic cooperation
- Norm development: What actions cross red lines regardless of attribution certainty?
- Escalation protocols: How to respond proportionally when attacker identity is uncertain?
- AI forensics: Investment in capabilities to attribute AI-enabled attacks
Our assessment: The attribution void may be the most destabilizing long-term consequence of AI-enabled political violence. Attacks that cannot be attributed create pressure for either dangerous overreaction or demoralizing passivity. Development of new diplomatic and legal frameworks should begin immediately, before a major unattributable attack forces improvised responses.
A critical development in early 2026 demonstrates that international norms protecting heads of state are softer constraints than previously assumed: the U.S. capture and transport of Venezuelan President Nicolas Maduro without Congressional approval triggered major international backlash and legal debate over its justification and consequences.
Why This Matters for Political Targeting Risk:
This event is not merely a diplomatic incident - it represents a live demonstration that rules around leaders can change rapidly when powerful actors decide they can act. For AI-enabled political targeting analysis, this has several implications:
- Norm erosion becomes an explicit driver, not background noise: Our report already anticipates institutional instability as threats evolve faster than governance. State-led seizure of a foreign head of state confirms that the pace and plausibility of norm change are higher than baseline assumptions suggested.
- Strengthens the fragile-states vulnerability: Personalist leaders in fragile states now face a sharper dilemma - centralize (coup-proof but easier to decapitate) or diffuse (risk internal overthrow). The international system offers less protection than assumed.
- Broadens "targeting" beyond non-state AI misuse: When the "attacker" is a state and the limiting factor isn't capability but legitimacy, AI amplifies coercive statecraft through OSINT, persuasion operations, and legal narrative shaping.
- Intensifies fear-environment dynamics: Big, norm-breaking events are exactly the catalyst that pushes publics and institutions toward over-adaptation, normalization of authoritarian measures, and political destabilization.
The Selective Enforcement Problem:
| Traditional Assumption | Post-Maduro Reality |
|---|---|
| Heads of state enjoy sovereign immunity | Immunity selectively enforced based on power dynamics |
| International law constrains great power actions | Legal frameworks can be bypassed with post-hoc justification |
| Diplomatic norms provide stable guardrails | Norms are contested and can shift rapidly |
| Leaders can rely on international travel safety | Travel becomes risk assessment calculation |
Mechanism of Risk Amplification:
The Maduro precedent doesn't require AI to be dangerous - but AI dramatically amplifies the downstream effects:
- OSINT acceleration: AI enables rapid compilation of leader schedules, security vulnerabilities, and travel patterns that inform extraterritorial operations
- Narrative operations: AI-generated content can shape domestic and international opinion to justify norm-breaking actions
- Legal analysis automation: AI can rapidly identify jurisdictional vulnerabilities and legal pathways for detention
- Copycat risk assessment: Other state and non-state actors can use AI to evaluate whether similar operations are feasible for their targets
Scenario Implications:
| Actor Type | Pre-Maduro Calculus | Post-Maduro Calculus |
|---|---|---|
| Great powers | Constrained by norm violation costs | Norm violation demonstrated as survivable |
| Regional powers | Assumed great power response to violations | Precedent for action against rivals with weak backing |
| Non-state actors | International norms as external constraint | Norms revealed as selectively enforced |
| Target leaders | International travel relatively safe | Must treat all travel as potential capture opportunity |
Policy Implications:
- Treat "rules about leaders" as soft constraints rather than stable guardrails when assessing political targeting risk
- Expect accelerated hardening by leaders globally - reduced travel, enhanced personal security, succession planning
- Anticipate retaliatory precedent-setting - other states may cite this action to justify their own extraterritorial operations
- Monitor for copycat behavior - the demonstrated path may be followed by states with similar capability/motivation profiles
Assessment Update:
This real-world event increases confidence in the instability side of our projections. International norms should now be modeled as contested and selectively enforced rather than reliable constraints. This makes our diffusion/accountability tradeoff analysis and fear-environment sections more salient, and suggests the timeline for institutional instability may be compressed.
| Priority | Action | Rationale | Timeline |
|---|---|---|---|
| Critical | Mandate AI-enabled threat assessments for protective services | Current threat models underestimate AI-assisted reconnaissance capabilities | Immediate |
| Critical | Establish inter-agency working groups on AI-enabled political violence | Siloed responses will be inadequate; requires coordination across intelligence, law enforcement, and protective services | Q1 2026 |
| High | Commission studies on "decision diffusion" governance models | Need evidence base before constitutional discussions begin | 2026 |
| High | Engage allies on shared threat frameworks | Attackers can plan from any jurisdiction; defense requires international coordination | Ongoing |
| Medium | Review information disclosure requirements for elected officials | Balance transparency with security in the AI era | 2026-2027 |
| Medium | Fund defensive AI research for protective intelligence | Offense/defense balance requires investment in countermeasures | 2026+ |
| Lower | Begin public education on changing political landscape | Democratic legitimacy requires informed public consent for adaptations | 2027+ |
Key insight for policy makers: The window for proactive adaptation is narrow. Once high-profile AI-enabled incidents occur, policy will be made reactively under pressure. Acting now allows thoughtful balancing of security and democratic values.
| Priority | Action | Rationale | Timeline |
|---|---|---|---|
| Critical | Assess executive protection programs for AI-era threats | C-suite targeting follows similar dynamics to political targeting; current protection models may be outdated | Immediate |
| Critical | Review corporate information hygiene | Executive schedules, travel patterns, and personal information are often more exposed than realized | Immediate |
| High | Implement AI-assisted threat monitoring for leadership | The same tools that enable threats can be deployed defensively | Q1-Q2 2026 |
| High | Evaluate board structure for single-point-of-failure risks | Succession planning should assume potential for targeted disruption | 2026 |
| Medium | Engage industry associations on shared threat intelligence | Threat actors may target across companies; collective defense is more effective | 2026 |
| Medium | Train security teams on AI-enabled reconnaissance techniques | Defenders must understand attacker capabilities | Ongoing |
| Lower | Consider reducing CEO personality-cult dynamics in corporate communications | Lower profile reduces targeting incentive | Strategic decision |
Key insight for CEOs: Corporate leadership faces similar dynamics to political leadership but with less institutional protection infrastructure. Companies that adapt early gain competitive advantage in executive retention and operational continuity.
| Priority | Action | Rationale | Timeline |
|---|---|---|---|
| Critical | Evaluate your systems for political reconnaissance potential | You may be building capabilities that enable targeting without realizing it | Immediate |
| Critical | Implement and maintain meaningful guardrails | Safety restrictions that can be trivially bypassed provide false assurance | Ongoing |
| High | Establish clear policies on cooperation with law enforcement for threat detection | Balancing privacy and safety requires pre-established frameworks, not ad-hoc decisions | 2026 |
| High | Invest in defensive AI applications | The same capabilities that enable offense can be redirected to defense; this is both ethical and commercially viable | 2026+ |
| Medium | Participate in industry-wide safety standards development | Individual company efforts are necessary but insufficient; coordination raises the floor | Ongoing |
| Medium | Support research into AI-enabled threat detection | Technical countermeasures are essential component of defense | 2026+ |
| Lower | Consider personal security implications | Tech leaders are themselves high-profile targets; practice what you preach on information hygiene | Personal decision |
Key insight for tech elite: You are building the infrastructure of this transition. Responsible development now shapes whether AI becomes primarily an offensive or defensive tool in this domain. You also face personal exposure - many tech leaders have the public profile and controversy level that generates targeting motivation.
| Priority | Action | Rationale | Timeline |
|---|---|---|---|
| High | Understand the changing political landscape | Informed citizens make better democratic decisions about institutional adaptations | Ongoing |
| High | Evaluate political movements skeptical of personality cults | The era of singular "strongman" leadership may be structurally ending; adjust political expectations | Ongoing |
| Medium | Support transparency in governmental adaptation | Security measures made in secret risk democratic accountability erosion | Ongoing |
| Medium | Maintain perspective on actual vs. perceived risk | Media coverage may amplify fear beyond actual threat levels; calibrate accordingly | Ongoing |
| Lower | Consider personal information hygiene | While most individuals are not targets, good practices protect against various threats | Optional |
| Lower | Engage in local governance | Diffused decision-making may increase importance of local and community-level participation | Optional |
Key insight for laypeople: The most important role for ordinary citizens is as democratic participants. Institutional adaptations to this threat will require public consent and oversight. An informed public that neither panics nor ignores the issue is essential for navigating this transition while preserving democratic values.
The following recommendations apply to governments and large institutions broadly:
Immediate (Now - Mid 2026):
- Threat assessment update: Formally incorporate AI-enabled attack scenarios into protective service planning (some agencies have begun; all should)
- AI capability monitoring: Track developments in agent capabilities relevant to attack planning
- Defensive AI deployment: Accelerate integration of AI into protective intelligence functions
- Information hygiene: Review and reduce unnecessary public information about protected figures
- International coordination: Formalize dialogue with allies on shared threat assessment
Medium-term (2026-2028):
- Structural review: Examine constitutional and statutory authorities for resilience to leader targeting
- Succession robustness: Ensure continuity of government plans address AI-era scenarios
- Counter-AI capabilities: Develop ability to detect and disrupt AI-enabled reconnaissance
- Public communication: Prepare frameworks for discussing changes with democratic publics
- Norm development: Engage in international discussions on AI and political violence norms
Longer-term (2029+):
- Institutional redesign: Where appropriate, implement governance reforms reducing single-point failures
- Democratic innovation: Develop new forms of accountable diffused leadership
- Global frameworks: Work toward international agreements analogous to WMD non-proliferation
For AI developers:
- Capability assessment: Evaluate systems for political reconnaissance potential during development
- Use case monitoring: Implement detection for patterns suggesting attack planning
- Guardrails: Maintain and improve restrictions on harmful use cases
- Transparency: Report concerning use patterns to appropriate authorities
- Red teaming: Regularly test systems for political targeting vulnerabilities
For researchers and civil society:
- Research: Continue academic analysis of AI political violence dynamics
- Monitoring: Track emerging threats and governmental responses
- Advocacy: Ensure security measures remain compatible with democratic values
- Public education: Help publics understand the changing landscape
- Norm entrepreneurship: Promote international norms against AI-enabled political violence
- Actual capability trajectory: AI development could be faster or slower than projected
- Defensive effectiveness: AI defensive capabilities may prove highly effective
- Attack frequency: Technological capability may not translate to actual attacks
- Institutional adaptability: Governments may adapt faster or slower than projected
- Public response: Societal reactions to the threat are highly uncertain
Scenario A: Rapid Defensive Success
AI defensive capabilities prove highly effective. Protective services quickly integrate AI monitoring, detecting and disrupting attacks before execution. The threat never fully materializes at scale. Structural adaptations prove unnecessary.
Probability estimate: 15%
Scenario B: Baseline Projection (Above)
Moderate increase in risk leads to gradual institutional adaptation over 5-10 years. Some successful attacks occur but at levels not dramatically higher than historical baselines. Diffusion proceeds incrementally.
Probability estimate: 45%
Scenario C: Rapid Destabilization
Multiple successful AI-enabled attacks occur in short succession. Public fear is significant. Rapid, possibly excessive security measures are implemented. Democratic norms strained. International instability increases.
Probability estimate: 20%
Scenario D: Technological Plateau
AI capabilities prove more limited than projected. The threat remains theoretical or marginally increased from baseline. Limited adaptation occurs. Current institutions prove adequate.
Probability estimate: 20%
The following scenarios emerged from our consultation with external reviewers and represent less probable but analytically important possibilities.
Scenario E: The Decoy State
Rather than genuinely diffusing decision power, states maintain the illusion of identifiable leadership while actual decision-makers operate in complete obscurity. Public-facing "leaders" are essentially actors or figureheads, while real authority rests with unknown individuals or committees.
Characteristics:
- Body doubles and deep security for public figures who hold no real power
- Actual decision-makers are unknown even to most government employees
- Plausible deniability becomes institutionalized
- Democratic accountability becomes purely theatrical
Implications: This represents the darkest adaptation path - security achieved by abandoning genuine representative government. Historical parallels exist in certain authoritarian systems.
Probability estimate: 5-10%
Scenario F: The Transparent Society (Brin Scenario)
Per David Brin's thesis, the response is radical transparency rather than secrecy. If surveillance becomes universal and symmetric, everyone knows where everyone is - making targeting easy but escape impossible. "Mutually Assured Surveillance" emerges.
Characteristics:
- Universal location and activity tracking accepted as social norm
- Privacy effectively abolished for security
- Attacking becomes trivial but evading consequences becomes impossible
- Deterrence through certainty of attribution and response
Implications: This trades one set of values (privacy, freedom of movement) for another (security through transparency). Civil libertarians and security hawks find common ground in opposition, but from opposite directions.
Probability estimate: 5-10%
Scenario G: Algorithmic Martyrdom
AI agents or autonomous systems become the "attackers" themselves, acting without real-time human operators. Drone swarms, cyber-physical systems, or autonomous robots carry out attacks with legally ambiguous attribution.
Characteristics:
- No human "assassin" to apprehend or deter
- Legal frameworks based on individual intent become obsolete
- Attribution challenges become extreme (was this a malfunction? An attack? By whom?)
- "Martyrdom" without human sacrifice creates new asymmetry
Implications: This scenario poses fundamental challenges to legal and moral frameworks built around human agency. Deterrence theory requires revision.
Probability estimate: 10-15% by 2030, increasing thereafter
Scenario H: The Bunkerization
Leadership withdraws entirely from physical public presence. All appearances become remote - holographic, telepresent, or pre-recorded. The "social contract of presence" that underlies democratic legitimacy is broken.
Characteristics:
- No public events, rallies, or in-person governance
- Authenticity of all communications questionable
- Political culture shifts toward parasocial rather than social relationships
- Physical access barrier becomes absolute, ending kinetic targeting threat
Implications: This eliminates kinetic targeting risk but fundamentally transforms the nature of democratic participation. Reputational targeting becomes the only remaining vector.
Probability estimate: 15% as partial adaptation, 5% as complete transformation
Probabilities are presented as conditional estimates under two assumption sets:
| Scenario | Formal Name | If Strong Defensive Adoption | If Weak Defensive Adoption |
|---|---|---|---|
| A | Effective Defense Equilibrium | 25% | 5% |
| B | Gradual Institutional Adaptation | 50% | 35% |
| C | Rapid Destabilization | 10% | 30% |
| D | Capability Plateau | 15% | 20% |
| E | Figurehead Governance ("Decoy State") | 3% | 15% |
| F | Mutual Surveillance Equilibrium ("Transparent Society") | 5% | 10% |
| G | Autonomous Attack Vectors ("Algorithmic Martyrdom") | 5% | 15% |
| H | Remote-Only Executive Presence ("Bunkerization") | 10% | 25% |
Interpretation guidance:
- "Strong Defensive Adoption" assumes: proactive policy, adequate investment, international coordination, civil-liberty-preserving design
- "Weak Defensive Adoption" assumes: reactive policy, underinvestment, fragmented response, rights-erosive measures
- Probabilities reflect informal expert consensus; re-estimated quarterly
- Intended for relative prioritization, not point prediction
Note: Probabilities are not mutually exclusive - elements of multiple scenarios may combine. The table shows which futures become more/less likely based on policy choices.
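For committees that want a single headline figure per scenario, the two columns can be combined with an explicit prior over which policy environment will obtain. The sketch below is illustrative only: the 60% prior on strong defensive adoption is a hypothetical assumption, not an estimate from this report, while the per-scenario conditional figures are taken directly from the table above.

```python
# Illustrative only: blend the conditional scenario estimates with an
# assumed prior over defensive adoption (law of total probability).
# The 0.6 prior is a hypothetical placeholder, not a finding of this report.

P_STRONG = 0.6          # assumed probability of "strong defensive adoption"
P_WEAK = 1 - P_STRONG   # remaining probability mass on "weak adoption"

# (P(scenario | strong), P(scenario | weak)) from the table above
scenarios = {
    "A - Effective Defense Equilibrium":    (0.25, 0.05),
    "B - Gradual Institutional Adaptation": (0.50, 0.35),
    "C - Rapid Destabilization":            (0.10, 0.30),
    "D - Capability Plateau":               (0.15, 0.20),
    "E - Figurehead Governance":            (0.03, 0.15),
    "F - Mutual Surveillance Equilibrium":  (0.05, 0.10),
    "G - Autonomous Attack Vectors":        (0.05, 0.15),
    "H - Remote-Only Executive Presence":   (0.10, 0.25),
}

for name, (p_strong, p_weak) in scenarios.items():
    blended = P_STRONG * p_strong + P_WEAK * p_weak
    print(f"{name}: {blended:.0%}")
# e.g. Scenario C blends to 0.6*0.10 + 0.4*0.30 = 18% under this prior.
```

Because the scenarios are not mutually exclusive, any blended figures produced this way should be used only for relative prioritization, consistent with the interpretation guidance above.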
Committees require not just analysis but monitoring frameworks. This section provides observable indicators for tracking which scenarios are materializing.
| Indicator | Data Sources | What It Signals |
|---|---|---|
| Major increase in synthetic media incidents involving political figures | Media reports, platform transparency reports, fact-checking organizations | Reputational targeting capability maturation |
| Evidence of persistent automated OSINT monitoring at scale | Security research, law enforcement statements, platform disclosures | Reconnaissance infrastructure development |
| Growth in harassment campaigns with high automation signatures | Platform data, civil society reports, academic research | Process targeting scaling |
| Documented AI assistance in thwarted attack planning | Law enforcement disclosures, court documents | Kinetic threat threshold crossing |
| Increase in "lone actor" incidents with sophisticated planning | Attack analysis, academic studies | Barrier reduction manifesting |
| Indicator | Data Sources | What It Signals |
|---|---|---|
| Protective services procurement of AI-enhanced monitoring | Government procurement, budget documents, job postings | Institutional awareness and response |
| Increased staffing in threat assessment units | Agency hiring, budget allocations | Resource commitment to defense |
| Adoption of content authentication infrastructure (C2PA etc.) | Industry announcements, platform policies, standards adoption | Reputational defense infrastructure |
| New legal frameworks for AI-enabled harassment | Legislation, regulatory guidance, court decisions | Policy response crystallizing |
- Formal delegation of signature authority across larger groups
- Rotating spokesperson structures in major democracies
- Increased closed committee decision-making justified by security
- Constitutional or statutory discussions about executive structure
- Visible reduction in personalistic political branding
- Expanded legal authorities for surveillance with AI enhancement
- Public-private reporting pipelines for threat intelligence
- Measurable interdiction improvements (even if specifics classified)
- Reduced successful attack frequency despite increased attempts
- Attacker deterrence through demonstrated detection capability
- Reduction in public appearances by major leaders
- Increased use of remote/virtual political engagement
- Physical event cancellations justified by security concerns
- Authenticity controversies around political communications
- Public polling showing reduced trust in leadership presence
- Security justifications for reduced transparency
- Expansion of classified decision-making
- Weakening of oversight mechanisms
- Public acceptance of reduced accountability
- "Emergency" measures becoming permanent
- Public doctrine shifts framing cross-border seizures as "law enforcement" or "counterterrorism"
- More frequent attempts to detain leaders or close associates when traveling internationally
- Rapid diplomatic escalations and emergency multilateral sessions following extraterritorial actions (a sign the norm boundary is actively contested)
- Legal scholars and state actors publicly disputing previously settled immunity questions
- Retaliatory or copycat extraterritorial operations by other states citing new precedents
- Leaders canceling international travel or restricting movements to "safe" jurisdictions
- Insurance and risk assessment firms adjusting political risk ratings for leader travel
- Increased security details and advance teams for heads of state during foreign visits
| Indicator Class | Review Frequency | Responsible Body |
|---|---|---|
| Threat escalation | Monthly | Intelligence/Security |
| Defensive adoption | Quarterly | Policy/Oversight |
| Scenario signposts | Semi-annually | Strategic Assessment |
| Democratic erosion | Annually | Independent Oversight |
| Norm erosion | Quarterly | Diplomatic/International Affairs |
This section specifies what defensive measures should not do, to prevent security adaptations from undermining the democratic values they aim to protect.
- Security measures must not become the threat they defend against: Authoritarian surveillance to prevent assassination creates authoritarian governance.
- Proportionality: Measures should be proportional to documented threat levels, not theoretical maximums.
- Transparency about tradeoffs: The public must understand what is being traded for security.
- Reversibility: Emergency measures should include sunset provisions and regular review.
| Prohibited Practice | Rationale | Alternative Approach |
|---|---|---|
| Generalized "pre-crime" surveillance without warrants | Violates due process; chilling effect on legitimate activity | Targeted investigation with judicial oversight |
| Monitoring of political speech for "extremism" markers | Subjective criteria enable political abuse | Focus on specific threat indicators, not ideology |
| Mass collection of private communications | Disproportionate to individualized threats | Targeted collection with warrants |
| Profiling based on political affiliation | Democratic participation should not trigger surveillance | Behavior-based indicators only |
| Indefinite retention of monitoring data | Mission creep; abuse potential | Strict retention limits with mandatory deletion |
| Prohibited Practice | Rationale | Alternative Approach |
|---|---|---|
| Anonymous decision-making without audit trails | Eliminates democratic accountability | Delayed disclosure with preserved records |
| Permanent classification of domestic policy decisions | Prevents democratic deliberation | Time-limited classification with mandatory review |
| Removal of elected officials from meaningful authority | Subverts electoral mandate | Retain elected oversight even if execution is delegated |
| "Decoy leader" structures where public figures have no power | Fundamentally fraudulent governance | Genuine diffusion rather than theatrical deception |
| Safeguard | Implementation |
|---|---|
| Independent oversight | Separate body with access to classified programs |
| Judicial review | Warrant requirements for intrusive measures |
| Whistleblower protection | Legal protection for reporting abuse |
| Sunset provisions | Automatic expiration of emergency measures |
| Public reporting | Regular declassified reports on program scope |
| Redress mechanisms | Clear process for individuals to challenge targeting |
| Audit logs | Tamper-proof records of system access and use |
Before implementing any defensive measure, decision-makers should answer:
- Necessity: Is this measure necessary, or merely convenient?
- Proportionality: Does the measure match the documented threat level?
- Minimization: Is this the least intrusive effective approach?
- Accountability: Can misuse be detected and corrected?
- Reversibility: Can this measure be rolled back if circumstances change?
- Precedent: What norm does this establish for future measures?
If any answer is unsatisfactory, the measure should be reconsidered.
This report itself could be misused to justify overreach. Watch for:
- Citing "AI threats" to expand pre-existing surveillance programs
- Citing theoretical capabilities, rather than documented threats, to justify new measures
- Classifying oversight as a "security risk"
- Treating dissent as a threat indicator
- Permanent "emergency" authorities
The goal is security that preserves democracy, not security that replaces it.
The Scenario: A successful AI-enabled assassination of a major political figure occurs. In the immediate aftermath, under intense public pressure and genuine fear, legislatures pass emergency measures that dismantle the guardrails described above.
Why This Scenario Deserves Explicit Modeling:
The guardrails in this section are noble but fragile. History demonstrates that major security incidents trigger rapid expansion of state power:
- Post-9/11: Patriot Act, mass surveillance programs, indefinite detention
- Post-Oklahoma City: Antiterrorism and Effective Death Penalty Act
- Historical pattern: Emergency powers rarely fully sunset
Projected "Patriot Act 2.0" Provisions (based on pattern analysis):
| Likely Provision | Justification Given | Civil Liberties Impact |
|---|---|---|
| Mandatory AI monitoring of all communications | "AI threats require AI defenses" | Generalized surveillance without warrants |
| "Extremism" speech restrictions | "Prevent stochastic terrorism" | Chilling effect on political speech |
| Expanded executive authority for protective actions | "Cannot wait for judicial review" | Due process erosion |
| Mandatory identity verification for AI services | "Know your user" | Anonymous speech elimination |
| Expanded FISA-style secret courts for AI threats | "Sources and methods protection" | Reduced transparency and accountability |
| Criminalization of AI "misuse" (broadly defined) | "Close the loopholes" | Chilling effect on legitimate research |
| International data sharing without warrants | "Threats are borderless" | Privacy erosion via partner agencies |
The Ratchet Effect:
Once implemented under emergency conditions, these measures become the new baseline:
- Bureaucratic investment: Agencies build infrastructure around new powers
- Mission creep: Powers granted for terrorism expand to other domains
- Normalization: Public acclimates to surveillance as "necessary"
- Political risk: Repealing security measures seen as "soft on threats"
- Technical lock-in: Systems become dependent on expanded data access
Pre-Commitment Strategies:
Given that crisis conditions favor overreach, the time to establish limits is before an incident:
| Strategy | Implementation |
|---|---|
| Constitutional amendments | Enshrine surveillance limits that cannot be waived by legislation |
| Institutional design | Create oversight bodies with independent authority before they're needed |
| Sunset clauses by default | Require affirmative renewal rather than affirmative termination |
| International commitments | Treaty obligations that constrain domestic emergency powers |
| Public education | Build constituency that will resist overreach even under fear conditions |
| Pre-drafted alternatives | Have proportionate response packages ready so emergency isn't excuse for wish-list |
The Warning: Every guardrail specified in this section will face pressure after a successful attack. The question is whether democratic societies can maintain commitment to these principles under stress, or whether the first major AI-enabled attack triggers a permanent surveillance state. This scenario should be explicitly war-gamed before it occurs.
Our assessment: Patriot Act 2.0 is more likely than not following a successful high-profile AI-enabled attack. Mitigating this requires pre-crisis institutional design and public commitment to proportionality principles.
AI agents represent a significant shift in the threat landscape for political targeting - not because they enable fundamentally new attacks, but because they dramatically reduce the organizational and capability requirements for complex attack planning.
This shift will likely drive institutional adaptations including greater diffusion of political authority. Such diffusion may reduce certain authoritarian risks (personality-cult fascism) while introducing new challenges around democratic accountability.
A period of fear and instability is likely as institutions adapt. The ultimate equilibrium depends on factors including:
- How effectively defensive AI capabilities develop
- Whether democratic societies can innovate new accountability structures
- How quickly international norms develop
- Whether political will exists for proactive adaptation
This analysis is intended to enable preparation, not induce paralysis. Key actions:
- Begin threat assessment now - don't wait for incidents to force reactive measures
- Invest in defensive capabilities - AI can serve protection as well as harm
- Start governance conversations - institutional change takes time
- Maintain democratic values - security measures must remain compatible with the systems they protect
- Coordinate internationally - this is a shared challenge requiring shared response
These projections represent our best assessment given available information. The future is not predetermined. Policy choices, technological developments, and social responses will shape how this landscape evolves.
The purpose of projection is not prediction but preparation. By understanding possible futures, we improve our ability to navigate toward better outcomes.
The following provides an at-a-glance prioritization for committee use:
| Vector | Likelihood (12-24mo) | Impact | Detectability | Primary Owner | Top 3 Mitigations |
|---|---|---|---|---|---|
| Reputational | High | High (trust erosion) | Low | Platforms, Election bodies | Content authentication, rapid response teams, legal frameworks |
| Process | High | High (capacity degradation) | Medium | Election admin, HR, Legal | Staff protection, anonymized reporting, resilience training |
| Economic | Medium-High | Medium | Medium | Financial regulators, Law enforcement | Identity protection, fraud monitoring, platform accountability |
| Kinetic | Low-Medium | Very High | Medium-High | Protective services | AI-enhanced intel, physical security, behavioral assessment |
| Insider/Supply-chain | Medium | Very High | Low | IT Security, Procurement | Vendor audits, model governance, access controls |
Reading the matrix:
- Likelihood: probability of significant incident in assessment window
- Impact: consequence severity if incident occurs
- Detectability: defender's ability to identify attack in progress
- Primary Owner: lead agency/function for mitigation
- Mitigations: abbreviated; see Section 13 for detail
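One way a committee secretariat might turn the qualitative matrix into a rough triage ordering is to map each rating onto a numeric scale and weight impact against detectability. The scale values and scoring formula below are illustrative assumptions for demonstration, not part of this report's methodology; the ratings themselves are copied from the matrix above.

```python
# Illustrative triage sketch: map the matrix's qualitative ratings onto a
# crude numeric scale. The scale values and the scoring formula are
# assumptions for demonstration, not the report's methodology.

SCALE = {
    "Low": 1, "Low-Medium": 1.5, "Medium": 2,
    "Medium-High": 2.5, "High": 3, "Very High": 4,
}

# (likelihood, impact, detectability) taken from the matrix above
vectors = {
    "Reputational":         ("High", "High", "Low"),
    "Process":              ("High", "High", "Medium"),
    "Economic":             ("Medium-High", "Medium", "Medium"),
    "Kinetic":              ("Low-Medium", "Very High", "Medium-High"),
    "Insider/Supply-chain": ("Medium", "Very High", "Low"),
}

def priority(likelihood: str, impact: str, detectability: str) -> float:
    # Higher likelihood and impact raise priority; better detectability
    # lowers it, since defenders are more likely to catch the attack.
    return SCALE[likelihood] * SCALE[impact] / SCALE[detectability]

for name, ratings in sorted(vectors.items(),
                            key=lambda kv: priority(*kv[1]), reverse=True):
    print(f"{name}: priority score {priority(*ratings):.1f}")
```

Any such scoring should be treated as a starting point for discussion; the matrix's qualitative judgments, not the derived numbers, carry the analytical weight.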
The following documents key empirical claims, evidence basis, and falsification criteria:
| Claim | Evidence Type | Confidence | What Would Falsify | Example Sources |
|---|---|---|---|---|
| Agents can operate autonomously for hours-days | Product documentation | High | Commercial products unable to complete multi-hour tasks without intervention | Anthropic Claude, OpenAI GPT-4, commercial agent frameworks |
| Jailbreaking techniques circulate widely | Security research | High | No public jailbreak repositories; no bug bounty disclosures | Academic papers on prompt injection; HackerOne disclosures |
| Open-weight models approach frontier (12-18mo lag) | Benchmark data | Medium | Consistent >24mo lag across all benchmarks | Hugging Face leaderboards; academic comparisons (task-dependent) |
| AI-assisted reconnaissance in criminal contexts | Law enforcement statements | Low-Medium | No law enforcement references to AI in criminal planning | DOJ statements; court documents (limited public access) |
| Security services adopting AI defensively | Procurement signals | Medium | No protective service AI procurement or hiring | Job postings; budget documents; official statements |
| Deepfakes used against political figures | Media reports | High | No documented political deepfake incidents | Reuters, AP reporting; platform transparency reports |
| Election worker harassment increasing | Civil society reports | High | Declining threat reports to election officials | Brennan Center studies; CISA reports |
| Capability proliferation 12-24mo timeline | Historical pattern | Medium | Frontier capabilities remaining exclusive >36mo | LLaMA release timeline; Mistral; historical comparison |
Falsification protocol: Claims should be re-evaluated quarterly. If falsified, revise affected projections and update scenario probabilities.
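A minimal sketch of how the quarterly falsification protocol could be tracked follows. The field names, review cadence implementation, and the single example entry are hypothetical; they simply mirror the columns of the evidence table above.

```python
# Minimal sketch of a quarterly claim-review register implementing the
# falsification protocol above. Field names and the example entry are
# hypothetical; they mirror the columns of the evidence table.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Claim:
    statement: str
    confidence: str                  # "Low", "Medium", "High"
    falsification_criterion: str
    last_reviewed: date
    falsified: bool = False
    notes: list = field(default_factory=list)

    def due_for_review(self, today: date, cadence_days: int = 90) -> bool:
        # Quarterly cadence per the falsification protocol.
        return today - self.last_reviewed >= timedelta(days=cadence_days)

claims = [
    Claim(
        statement="Open-weight models approach frontier with a 12-18 month lag",
        confidence="Medium",
        falsification_criterion="Consistent >24-month lag across all benchmarks",
        last_reviewed=date(2026, 1, 15),
    ),
]

today = date(2026, 4, 20)
for claim in claims:
    if claim.due_for_review(today):
        print(f"Review due: {claim.statement}")
        # If a claim is falsified, the affected projections and scenario
        # probabilities should be re-estimated, per the protocol above.
```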
AI Agent: Autonomous AI system capable of multi-step task execution with tool use and goal persistence. Distinguished from:
- Single-turn LLM use: One-shot query/response
- Scripted automation: Pre-defined workflows without adaptation
- Semi-autonomous agents: Tool use with human checkpoints
- Long-horizon agents: Extended autonomy with persistent goals
Decision Diffusion: Distribution of political authority to reduce targeting value. Four subtypes: Authority, Visibility, Execution, Representation (see Section 9).
OSINT: Open-source intelligence - information gathered from public sources
Process Targeting: Attacks on democratic processes and civic infrastructure rather than individuals
Prompt Engineering: Techniques for directing AI system behavior through input design
Red Team: Adversarial testing simulating attacker perspectives
Stochastic Terrorism: Use of mass communication to incite random actors to carry out attacks; gains new dimensions with AI optimization
Technology and Political Violence:
- Cronin, Audrey Kurth. Power to the People (2020) - Technology diffusion and non-state violence
- Schneier, Bruce. Click Here to Kill Everybody (2018) - Systems security and AI risks
Governance and Accountability:
- Weber, Max. Economy and Society (1922) - Bureaucratic rationalization and the Iron Cage
- Brin, David. The Transparent Society (1998) - Surveillance symmetry scenarios
AI Safety and Misuse:
- Anthropic, OpenAI, DeepMind policy papers on misuse prevention
- Partnership on AI research on synthetic media
Electoral Security:
- Brennan Center for Justice reports on election worker safety
- CISA election infrastructure guidance
Content Authentication:
- C2PA (Coalition for Content Provenance and Authenticity) technical specifications
- Platform adoption reports and pilot studies
Assessment approach:
- Structured comparison of historical case analysis against current capabilities
- Expert elicitation across political science, security studies, AI safety
- Controlled red-team exercises (details restricted)
- Scenario gaming and signpost identification
Probability calibration:
- Scenario probabilities represent informal expert consensus, not statistical models
- Re-estimated quarterly based on signpost observations
- Intended for relative prioritization, not point prediction
Limitations:
- Limited access to classified threat intelligence
- Rapidly evolving capability landscape
- Novel threat vectors without historical precedent
- Inherent uncertainty in institutional adaptation projections
[Detailed methodology available under separate cover for committee members]
This document is for defensive policy analysis. Distribution is intended for appropriate policy, security, and research audiences.