WannaCry Scenario: Transport and Shipping Crisis
Planning Resources
Scenario Details for IMs
Hook
Initial Symptoms to Present:
Key Discovery Paths:
Detective Investigation Leads:
Protector System Analysis:
Tracker Network Investigation:
Communicator Stakeholder Interviews:
Mid-Scenario Pressure Points:
- Hour 1: Passenger-facing delay alerts diverge from trusted operations data
- Hour 2: Dispatch coordinators lose confidence in rolling-stock assignment integrity
- Hour 3: National media reports escalating transport instability and asks for formal updates
- Hour 4: Leadership demands a go/no-go decision for phased timetable restoration
Evolution Triggers:
- If segmentation lags, encryption reaches additional support systems and extends disruption scope (a minimal containment sketch follows this list)
- If restoration order is unclear, service reliability remains degraded despite partial containment
- If communication cadence fails, workforce and passenger confidence erodes faster than technical progress
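For IMs who want a technical prop behind the segmentation trigger, below is a minimal sketch of one emergency step: blocking inbound SMB (TCP 445) on a Windows host with the built-in Windows Firewall CLI. It assumes local administrative rights and Python is available on the host; the rule name is illustrative, and in practice a rule like this would be pushed at scale through EDR or group policy rather than run host by host.

```python
# Minimal containment sketch: block inbound SMB (TCP 445) on a Windows host
# using the built-in Windows Firewall CLI. Requires administrative rights.
# The rule name is illustrative, not taken from the scenario.
import subprocess

BLOCK_RULE = [
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=IR-block-smb-445",  # illustrative rule name
    "dir=in", "action=block",
    "protocol=TCP", "localport=445",
]

def block_smb_locally() -> bool:
    """Apply the inbound SMB block on the local host; return True on success."""
    result = subprocess.run(BLOCK_RULE, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"firewall rule failed: {result.stderr.strip()}")
    return result.returncode == 0

if __name__ == "__main__":
    block_smb_locally()
```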
Resolution Pathways:
Technical Success Indicators:
- Team halts lateral movement with decisive host isolation and segmented recovery domains
- Critical scheduling and dispatch datasets are restored from validated clean backups
- Ticketing and information systems rejoin operations in controlled phases with monitoring gates (see the gate sketch after this list)
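One way to make "controlled phases with monitoring gates" tangible during play: each restoration phase opens only after every service in the previous phase passes a health check. This is a sketch under assumed phase contents; the service names and the stub check are hypothetical and do not come from the scenario.

```python
# Sketch of a phased-restoration gate: services rejoin in order, and a phase
# only opens once every service in the previous phase passes its health check.
# Phase contents and the health check are illustrative placeholders.
from typing import Callable

PHASES: list[list[str]] = [
    ["dispatch-core", "schedule-sync"],   # phase 1: operational backbone
    ["ticketing-api"],                    # phase 2: revenue systems
    ["passenger-info-displays"],          # phase 3: public-facing status
]

def run_restoration(health_check: Callable[[str], bool]) -> list[str]:
    """Return the services actually restored; stop at the first failed gate."""
    restored: list[str] = []
    for phase in PHASES:
        if not all(health_check(svc) for svc in phase):
            print(f"gate failed in phase {phase}; halting rejoin")
            break
        restored.extend(phase)
    return restored

# Example with a stub check that a real exercise would replace with
# actual monitoring queries.
print(run_restoration(lambda svc: True))
```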
Business Success Indicators:
- Core rail services stabilize by the leadership checkpoint with manageable delay levels
- Passenger communications remain accurate enough to preserve trust and reduce crowding risk
- Regulatory and oversight engagement supports transparent but controlled incident handling
Learning Success Indicators:
- Team demonstrates how SMB worm dynamics interact with transport-sector dependency chains
- Participants align technical priorities with public-facing safety and reliability outcomes
- Group integrates incident command, operations continuity, and communication strategy under deadline pressure
Common IM Facilitation Challenges:
If Operational Deadline Pressure Is Underestimated:
“Containment is improving, but leadership confirms the checkpoint is 14:00. Which services must be trustworthy by then, and how do you prove it?”
If Passenger Impact Is Treated as Secondary:
“Your technical plan is sound, but station teams still cannot trust schedule updates. How are you preventing avoidable crowding and safety risk while recovery continues?”
Success Metrics for Session:
Template Compatibility
This scenario adapts to multiple session formats with appropriate scope and timing:
Quick Demo (35-40 minutes)
Structure: 2 investigation rounds, 1 decision round
Focus: Rapid containment and service-stabilization prioritization
Simplified Elements: Guided clues with constrained response choices
Key Actions: Isolate spread, protect dispatch data, communicate reliable service status
Lunch & Learn (75-90 minutes)
Structure: 4 investigation rounds, 2 decision rounds
Focus: Rail operations continuity under active ransomware disruption
Added Depth: Cross-region dependencies and phased restoration governance
Key Actions: Sequence recovery for scheduling and ticketing, hold passenger trust through accurate updates
Full Game (120-140 minutes)
Structure: 6 investigation rounds, 3 decision rounds
Focus: End-to-end incident command for national rail disruption
Full Complexity: Technical containment, authority coordination, and strategic resilience planning
Key Actions: Integrate cyber response and operations leadership to restore reliable service safely
Quick Demo Materials (35-40 min)
Guided Investigation Clues
- Clue 1 (Minute 5): “SMB exploitation is active across rail administrative domains connected to scheduling and ticketing services.” (A detection sketch follows these clues.)
- Clue 2 (Minute 10): “Critical dispatch datasets are partially encrypted; backup integrity is still unverified.”
- Clue 3 (Minute 15): “Passenger-impact risk grows as trusted operations data diverges from station-visible status systems.”
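If participants ask how Clue 1 would actually be confirmed, one plausible corroboration path is fan-out analysis of flow logs: a host contacting many distinct peers on TCP 445 is behaving like a scanning worm rather than a normal file-share client. The CSV layout and the threshold below are assumptions for illustration, not scenario data.

```python
# Sketch: flag hosts fanning out over SMB (TCP 445), a pattern consistent
# with worm-style lateral movement. The CSV layout (src,dst,dst_port) and
# the fan-out threshold are assumptions for illustration.
import csv
from collections import defaultdict

FAN_OUT_THRESHOLD = 50  # distinct SMB peers per source; tune to baseline

def suspect_smb_scanners(flow_csv_path: str) -> dict[str, int]:
    """Map each suspicious source host to its count of distinct SMB peers."""
    peers: defaultdict[str, set[str]] = defaultdict(set)
    with open(flow_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dst_port"] == "445":
                peers[row["src"]].add(row["dst"])
    return {src: len(dsts) for src, dsts in peers.items()
            if len(dsts) >= FAN_OUT_THRESHOLD}

# Usage: print(suspect_smb_scanners("flows.csv"))
```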
Pre-Defined Response Options
Option A: Hard Segmentation and Core Service Recovery
- Action: Isolate affected regional domains immediately, prioritize dispatch and schedule restoration, and delay non-essential digital services.
- Pros: Fast containment and clear protection of highest-priority operational data.
- Cons: Wider temporary outages in customer-facing tools and internal reporting platforms.
- Type Effectiveness: Strong against autonomous SMB worm propagation.
Option B: Controlled Continuity with Parallel Triage
- Action: Keep limited operations online while triaging infected hosts and restoring critical datasets in phases.
- Pros: Reduces immediate service shock and supports gradual stabilization.
- Cons: Requires tight discipline to avoid recontamination and false confidence.
- Type Effectiveness: Moderate when detection and segmentation execution remain consistent.
Option C: Passenger Operations First, Technical Recovery Second
- Action: Shift resources to manual dispatch support and passenger communication while technical teams continue containment.
- Pros: Maintains visible service continuity during early uncertainty.
- Cons: Extended technical risk if containment falls behind manual operations demands.
- Type Effectiveness: Indirect; limits business impact but does not neutralize malware spread by itself.
Lunch & Learn Materials (75-90 min, 2 rounds)
Round 1: Containment and Service Integrity (30-35 min)
Investigation clues:
- “Legacy host patch debt is concentrated in nodes linked to scheduling synchronization.”
- “Ticketing and dispatch systems share dependencies that amplify blast radius.”
- “Backup snapshots exist, but validation confidence is incomplete under time pressure.” (A verification sketch follows this list.)
- “Operations leadership requests a recover-first list tied to passenger-impact risk.”
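The backup-validation clue can be grounded with a concrete check: comparing snapshot files against a checksum manifest captured at backup time. This sketch assumes a hypothetical manifest format of one “<hex-digest>  <relative-path>” pair per line; the paths in the usage note are placeholders.

```python
# Sketch: verify backup snapshots against a SHA-256 manifest written at
# backup time. The manifest format ("<hex-digest>  <relative-path>" per
# line) and directory layout are assumptions for illustration.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_snapshot(snapshot_dir: Path, manifest: Path) -> list[str]:
    """Return relative paths that are missing or fail their recorded hash."""
    failures: list[str] = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        expected, rel_path = line.split(maxsplit=1)
        target = snapshot_dir / rel_path
        if not target.exists() or sha256_of(target) != expected:
            failures.append(rel_path)
    return failures

# Usage (placeholder paths):
# verify_snapshot(Path("/backups/dispatch-0412"), Path("manifest.sha256"))
```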
Facilitation questions:
- “What evidence defines a trustworthy schedule state before public release?”
- “Which dependencies must be isolated even if that temporarily worsens delay metrics?”
- “How do you coordinate executive, technical, and station communications in the same time window?”
Round 1→2 Transition
Containment progress narrows spread, but recovery quality now determines whether rail operations regain reliability or remain unstable under public scrutiny.
Round 2: Trustworthy Restoration Under Scrutiny (30-35 min)
Developments:
- “Partial recovery is available, but full confidence requires stricter data validation and sequencing.”
- “Passenger-impact reporting pressure rises as media and oversight requests accelerate.”
- “Leadership must choose between faster restart and lower risk of recurring disruption.”
Facilitation questions:
- “What minimum validation threshold is acceptable before restoring broader timetable publication?”
- “If recovery slips, which manual fallbacks protect safety and passenger confidence first?”
- “How do you communicate uncertainty without undermining operational authority?”
Full Game Materials (120-140 min, 3 rounds)
Round 1: Initial Disruption and Scope Control (30 min)
Rail operations support systems degrade quickly as ransomware spreads through connected administrative domains. Leadership sets a strict stabilization checkpoint and requests a unified containment plan.
Round 2: Recovery Sequencing and Public Confidence (35 min)
Core systems begin returning, but dependency and data-trust issues create hard decisions about restart pace and passenger communication accuracy.
Round 3: Strategic Resilience and Governance (35 min)
Immediate crisis pressure declines, and the focus shifts to long-term controls for patch governance, segmentation standards, and incident rehearsal across transport operations.
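For groups that want Round 3's patch-governance discussion to land on something testable, a simple compliance audit is one option: flag hosts whose installed-update inventory lacks any update that closes the exploited SMB flaw (MS17-010 in WannaCry's case). The inventory CSV layout and the KB identifiers below are placeholders and should be confirmed against vendor guidance.

```python
# Sketch: flag hosts whose installed-update inventory lacks any of the
# updates required for the exploited SMB flaw. The inventory CSV layout
# (host,installed_kbs with KBs separated by ';') and the KB list are
# placeholders; confirm identifiers against vendor guidance.
import csv

REQUIRED_ANY = {"KB4012212", "KB4012215"}  # example MS17-010-era updates

def unpatched_hosts(inventory_csv: str) -> list[str]:
    """Return hosts with none of the required updates installed."""
    missing: list[str] = []
    with open(inventory_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            installed = set(row["installed_kbs"].split(";"))
            if not (installed & REQUIRED_ANY):
                missing.append(row["host"])
    return missing

# Usage: print(unpatched_hosts("host_patch_inventory.csv"))
```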
Debrief Focus (Full Game)
- How transport dependency chains magnify SMB worm impact under tight operational deadlines
- Why reliable communications are a technical output, not just a public-relations activity
- How regulatory and oversight expectations should shape restoration decision-making
- Which long-term controls best reduce recurrence risk without degrading service performance
Advanced Challenge Materials (150-170 min, 3+ rounds)
Red Herrings and Misdirection
- Legitimate data synchronization bursts that resemble malicious lateral movement
- Planned maintenance events that create concurrent alarm noise during incident triage
- Parallel outages in non-critical systems that distract from highest-risk dependencies
Removed Resources and Constraints
- No external incident-response augmentation during the first decision cycle
- Incomplete asset ownership mapping across regional operations environments
- Limited backup verification capacity under active service-pressure timelines
Enhanced Pressure
- Oversight requests accelerate before technical certainty is complete
- Passenger trust degrades as visible delays outpace confirmed status updates
- Leadership must defend recovery pacing decisions under national scrutiny
Ethical Dilemmas
- Whether to authorize disruptive containment that worsens short-term delays but reduces systemic risk
- Whether to publish uncertain restoration timelines to calm passengers or wait for stronger evidence
- Whether to prioritize high-traffic corridors first or equitable service restoration across all regions
Advanced Debrief Topics
- Ethics of risk communication during public transport cyber incidents
- Governance tradeoffs between rapid restart and defensible technical assurance
- Long-term resilience design for nationally critical rail operations