WannaCry Scenario: Transport and Shipping Crisis

National Rail Services: UK rail operator, 15,000 employees, 1.5M daily passengers
Transport Operations Ransomware • WannaCry
STAKES
Passenger safety confidence + Rail operations continuity + Public transport trust
HOOK
National Rail Services experiences simultaneous control-room workstation lockouts, ticketing backend failures, and schedule database encryption during peak operations. Dispatch coordinators lose visibility into rolling stock assignments, station teams cannot validate service updates, and support desks receive escalating incidents from multiple regions.
PRESSURE
  • Service stabilization checkpoint at 14:00
  • Ongoing disruption cost of GBP 8M per day, with potential impact across the 15,000-person operations workforce
FRONT • 120 minutes • Advanced
NPCs
  • Richard Blackwood (CEO): Managing national pressure over service reliability and commuter confidence
  • Priya Sharma (CTO): Coordinating containment while rail IT services fail across regions
  • James Mitchell (CISO): Leading technical triage, restoration order, and incident reporting obligations
  • Dr. Eleanor Crawford (Operations Director): Escalating schedule integrity and station coordination risk during active disruption
SECRETS
  • Legacy administrative systems remained on deferred SMB patch cycles
  • Scheduling and ticketing dependencies created single points of failure across operations workflows
  • Cross-region connectivity prioritized performance over segmentation and blast-radius control
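
The patch-debt secret above is the kind of finding an asset-inventory triage would surface. Below is a minimal sketch of that triage; the host names, OS versions, and dates are invented for illustration, and only the MS17-010 release date (14 March 2017, the SMB fix WannaCry exploited) is factual:

```python
from datetime import date

# Hypothetical asset inventory; hosts and dates are illustrative only.
INVENTORY = [
    {"host": "sched-admin-01", "os": "Windows Server 2008", "smb_v1": True,
     "last_patched": date(2016, 11, 1)},
    {"host": "ticket-db-02", "os": "Windows Server 2016", "smb_v1": False,
     "last_patched": date(2017, 4, 20)},
    {"host": "dispatch-ws-14", "os": "Windows 7", "smb_v1": True,
     "last_patched": date(2017, 1, 15)},
]

# MS17-010, the SMB patch WannaCry exploited, shipped 14 March 2017.
MS17_010_RELEASE = date(2017, 3, 14)

def patch_debt(inventory):
    """Return hosts still running SMBv1 and last patched before MS17-010."""
    return [a["host"] for a in inventory
            if a["smb_v1"] and a["last_patched"] < MS17_010_RELEASE]

print(patch_debt(INVENTORY))  # -> ['sched-admin-01', 'dispatch-ws-14']
```

IMs can hand a list like this to the Detective track as a pre-built artifact, or let players derive it during investigation rounds.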

Planning Resources

Tip: 📋 Comprehensive Facilitation Guide Available

For detailed session preparation support, including game configuration templates, investigation timelines, response options matrix, and round-by-round facilitation guidance, see:

WannaCry Transport & Shipping Crisis Planning Document

The planning document provides a 30-minute structured preparation path for first-time IMs, or quick-reference support for experienced facilitators.

Note: 🎬 Interactive Scenario Slides

Ready-to-present RevealJS slides with player-safe mode, session tracking, and IM facilitation notes:

WannaCry Transport Scenario Slides

Press ‘P’ to toggle player-safe mode • Built-in session state tracking • Dark/light theme support

Scenario Details for IMs

Hook

Initial Symptoms to Present:

Warning: 🚨 Initial User Reports
  • “Dispatch support screens display ransom notes instead of train assignment data”
  • “Ticketing and passenger information systems return stale or missing updates”
  • “Regional control centers report schedule files becoming inaccessible”
  • “Incident volume spikes as stations lose confidence in real-time operations data”

Key Discovery Paths:

Detective Investigation Leads:

  • Endpoint and network logs reveal SMB-driven lateral movement across operations support domains
  • Forensic review confirms encryption focused on scheduling, ticketing, and dispatch administrative data stores
  • Timeline reconstruction identifies patch debt on legacy hosts as the primary expansion enabler
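
The first Detective lead can be grounded in concrete telemetry. A minimal sketch of the SMB fan-out check behind it, using invented flow records (a real session would substitute endpoint or NetFlow exports; the threshold is an assumption):

```python
# Illustrative flow records (src, dst, dst_port); values are invented.
FLOWS = [
    ("10.1.1.5", "10.1.1.6", 445),
    ("10.1.1.5", "10.1.1.7", 445),
    ("10.1.1.5", "10.1.1.8", 445),
    ("10.1.1.5", "10.1.1.9", 445),
    ("10.1.2.3", "10.1.2.4", 443),
]

def smb_fanout(flows, threshold=3):
    """Flag sources contacting many distinct hosts on TCP 445 --
    a rough signature of worm-style SMB lateral movement."""
    targets = {}
    for src, dst, port in flows:
        if port == 445:
            targets.setdefault(src, set()).add(dst)
    return {src for src, dsts in targets.items() if len(dsts) >= threshold}

print(smb_fanout(FLOWS))  # -> {'10.1.1.5'}
```

In play, this is the shape of evidence players should be asked to articulate: one source, many distinct port-445 destinations, in a short window.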

Protector System Analysis:

  • Telemetry shows rapid spread from office IT into rail operations support environments
  • Segmentation tests reveal weak controls between regional operations and central service platforms
  • Restoration plans exist but sequencing gaps risk delayed service stabilization

Tracker Network Investigation:

  • Lateral movement maps indicate repeated SMB scanning against shared infrastructure nodes
  • Dependency analysis highlights concentration risk in timetable and ticketing synchronization services
  • Uptime monitoring shows growing mismatch between published schedules and trusted back-end data
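
The Tracker's concentration-risk lead can be illustrated with a toy dependency count: a shared service that many workflows consume is a single point of failure. The service names below are hypothetical:

```python
# Hypothetical dependency edges: (consumer workflow, shared service).
DEPENDENCIES = [
    ("dispatch", "timetable-sync"),
    ("ticketing", "timetable-sync"),
    ("station-displays", "timetable-sync"),
    ("dispatch", "auth-gateway"),
    ("ticketing", "payment-api"),
]

def concentration_risk(edges, min_consumers=3):
    """Return services with enough distinct consumers that their loss
    would cascade across multiple operations workflows."""
    counts = {}
    for consumer, service in edges:
        counts[service] = counts.get(service, 0) + 1
    return [s for s, n in counts.items() if n >= min_consumers]

print(concentration_risk(DEPENDENCIES))  # -> ['timetable-sync']
```

The same counting exercise works on a whiteboard: ask players to draw the edges they have discovered and circle any node with three or more inbound arrows.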

Communicator Stakeholder Interviews:

  • Operations leadership confirms station teams need verified schedules to avoid cascading delays
  • Security leadership confirms regulatory and authority notifications may be required during active containment
  • Executive leadership requests clear recovery checkpoints tied to passenger-impact decisions

Mid-Scenario Pressure Points:

  • Hour 1: Passenger-facing delay alerts diverge from trusted operations data
  • Hour 2: Dispatch coordinators lose confidence in rolling-stock assignment integrity
  • Hour 3: National media reports escalating transport instability and asks for formal updates
  • Hour 4: Leadership demands a go/no-go decision for phased timetable restoration

Evolution Triggers:

  • If segmentation lags, encryption reaches additional support systems and extends disruption scope
  • If restoration order is unclear, service reliability remains degraded despite partial containment
  • If communication cadence fails, workforce and passenger confidence erodes faster than technical progress

Resolution Pathways:

Technical Success Indicators:

  • Team halts lateral movement with decisive host isolation and segmented recovery domains
  • Critical scheduling and dispatch datasets are restored from validated clean backups
  • Ticketing and information systems rejoin operations in controlled phases with monitoring gates
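
The phased-rejoin indicator can be made concrete as a dependency-ordered restoration plan: each phase restores only services whose prerequisites passed the previous monitoring gate. A minimal sketch using Python's standard-library graphlib; the service names and dependencies are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical restore-before relationships: each service lists what
# must be verified clean before it rejoins operations.
RESTORE_DEPS = {
    "dispatch": {"schedule-db"},
    "ticketing": {"schedule-db", "payment-api"},
    "passenger-info": {"dispatch"},
    "schedule-db": set(),
    "payment-api": set(),
}

def restoration_phases(deps):
    """Group services into phases; everything within a phase can be
    restored in parallel once the prior phase passes its gate."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    phases = []
    while ts.is_active():
        ready = list(ts.get_ready())
        phases.append(sorted(ready))
        ts.done(*ready)
    return phases

print(restoration_phases(RESTORE_DEPS))
# -> [['payment-api', 'schedule-db'], ['dispatch', 'ticketing'], ['passenger-info']]
```

A cycle in the dependency map raises an error here, which mirrors the real-world finding that circular dependencies force a judgment call about which service restarts in a degraded mode first.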

Business Success Indicators:

  • Core rail services stabilize by the leadership checkpoint with manageable delay levels
  • Passenger communications remain accurate enough to preserve trust and reduce crowding risk
  • Regulatory and oversight engagement supports transparent but controlled incident handling

Learning Success Indicators:

  • Team demonstrates how SMB worm dynamics interact with transport-sector dependency chains
  • Participants align technical priorities with public-facing safety and reliability outcomes
  • Group integrates incident command, operations continuity, and communication strategy under deadline pressure

Common IM Facilitation Challenges:

If Operational Deadline Pressure Is Underestimated:

“Containment is improving, but leadership confirms the checkpoint is 14:00. Which services must be trustworthy by then, and how do you prove it?”

If Passenger Impact Is Treated as Secondary:

“Your technical plan is sound, but station teams still cannot trust schedule updates. How are you preventing avoidable crowding and safety risk while recovery continues?”

If Regulatory and Authority Coordination Is Delayed:

“Oversight agencies are requesting status. What can you communicate now that is accurate, actionable, and aligned with incident reality?”

Template Compatibility

This scenario adapts to multiple session formats with appropriate scope and timing:

Quick Demo (35-40 minutes)

Structure: 2 investigation rounds, 1 decision round
Focus: Rapid containment and service-stabilization prioritization
Simplified Elements: Guided clues with constrained response choices
Key Actions: Isolate spread, protect dispatch data, communicate reliable service status

Lunch & Learn (75-90 minutes)

Structure: 4 investigation rounds, 2 decision rounds
Focus: Rail operations continuity under active ransomware disruption
Added Depth: Cross-region dependencies and phased restoration governance
Key Actions: Sequence recovery for scheduling and ticketing, hold passenger trust through accurate updates

Full Game (120-140 minutes)

Structure: 6 investigation rounds, 3 decision rounds
Focus: End-to-end incident command for national rail disruption
Full Complexity: Technical containment, authority coordination, and strategic resilience planning
Key Actions: Integrate cyber response and operations leadership to restore reliable service safely

Quick Demo Materials (35-40 min)

Guided Investigation Clues

  • Clue 1 (Minute 5): “SMB exploitation is active across rail administrative domains connected to scheduling and ticketing services.”
  • Clue 2 (Minute 10): “Critical dispatch datasets are partially encrypted; backup integrity is still unverified.”
  • Clue 3 (Minute 15): “Passenger-impact risk grows as trusted operations data diverges from station-visible status systems.”

Pre-Defined Response Options

Option A: Hard Segmentation and Core Service Recovery

  • Action: Isolate affected regional domains immediately, prioritize dispatch and schedule restoration, and delay non-essential digital services.
  • Pros: Fast containment and clear protection of highest-priority operational data.
  • Cons: Wider temporary outages in customer-facing tools and internal reporting platforms.
  • Type Effectiveness: Strong against autonomous SMB worm propagation.

Option B: Controlled Continuity with Parallel Triage

  • Action: Keep limited operations online while triaging infected hosts and restoring critical datasets in phases.
  • Pros: Reduces immediate service shock and supports gradual stabilization.
  • Cons: Requires tight discipline to avoid recontamination and false confidence.
  • Type Effectiveness: Moderate when detection and segmentation execution remain consistent.

Option C: Passenger Operations First, Technical Recovery Second

  • Action: Shift resources to manual dispatch support and passenger communication while technical teams continue containment.
  • Pros: Maintains visible service continuity during early uncertainty.
  • Cons: Extended technical risk if containment falls behind manual operations demands.
  • Type Effectiveness: Indirect; limits business impact but does not neutralize malware spread by itself.

Lunch & Learn Materials (75-90 min, 2 rounds)

Round 1: Containment and Service Integrity (30-35 min)

Investigation clues:

  • “Legacy host patch debt is concentrated in nodes linked to scheduling synchronization.”
  • “Ticketing and dispatch systems share dependencies that amplify blast radius.”
  • “Backup snapshots exist, but validation confidence is incomplete under time pressure.”
  • “Operations leadership requests a recover-first list tied to passenger-impact risk.”
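
The backup-validation clue maps to a simple integrity check: compare each snapshot file against a recorded known-good digest, and let only verified data feed the recover-first list. A minimal sketch with invented file names and contents:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest of raw file contents."""
    return hashlib.sha256(data).hexdigest()

# Illustrative snapshot contents vs. digests recorded before the incident.
KNOWN_GOOD = {"timetable.csv": digest(b"0700,LDN,MAN\n")}
SNAPSHOT = {
    "timetable.csv": b"0700,LDN,MAN\n",
    "assignments.csv": b"unit-42,route-7\n",  # no pre-incident digest recorded
}

def validate(snapshot, known_good):
    """Split snapshot files into verified and unverified lists."""
    verified, unverified = [], []
    for name, data in snapshot.items():
        if known_good.get(name) == digest(data):
            verified.append(name)
        else:
            unverified.append(name)
    return verified, unverified

print(validate(SNAPSHOT, KNOWN_GOOD))
# -> (['timetable.csv'], ['assignments.csv'])
```

The unverified bucket is the facilitation hook: players must decide whether those datasets wait for validation or are restored with explicit, communicated risk.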

Facilitation questions:

  • “What evidence defines a trustworthy schedule state before public release?”
  • “Which dependencies must be isolated even if that temporarily worsens delay metrics?”
  • “How do you coordinate executive, technical, and station communications in the same time window?”

Round 1→2 Transition

Containment progress narrows spread, but recovery quality now determines whether rail operations regain reliability or remain unstable under public scrutiny.

Round 2: Trustworthy Restoration Under Scrutiny (30-35 min)

Developments:

  • “Partial recovery is available, but full confidence requires stricter data validation and sequencing.”
  • “Passenger-impact reporting pressure rises as media and oversight requests accelerate.”
  • “Leadership must choose between faster restart and lower risk of recurring disruption.”

Facilitation questions:

  • “What minimum validation threshold is acceptable before restoring broader timetable publication?”
  • “If recovery slips, which manual fallbacks protect safety and passenger confidence first?”
  • “How do you communicate uncertainty without undermining operational authority?”

Full Game Materials (120-140 min, 3 rounds)

Round 1: Initial Disruption and Scope Control (30 min)

Rail operations support systems degrade quickly as ransomware spreads through connected administrative domains. Leadership sets a strict stabilization checkpoint and requests a unified containment plan.

Round 2: Recovery Sequencing and Public Confidence (35 min)

Core systems begin returning, but dependency and data-trust issues create hard decisions about restart pace and passenger communication accuracy.

Round 3: Strategic Resilience and Governance (35 min)

Immediate crisis pressure declines, and the focus shifts to long-term controls for patch governance, segmentation standards, and incident rehearsal across transport operations.

Debrief Focus (Full Game)

  • How transport dependency chains magnify SMB worm impact under tight operational deadlines
  • Why reliable communications are a technical output, not just a public-relations activity
  • How regulatory and oversight expectations should shape restoration decision-making
  • Which long-term controls best reduce recurrence risk without degrading service performance

Advanced Challenge Materials (150-170 min, 3+ rounds)

Red Herrings and Misdirection

  • Legitimate data synchronization bursts that resemble malicious lateral movement
  • Planned maintenance events that create concurrent alarm noise during incident triage
  • Parallel outages in non-critical systems that distract from highest-risk dependencies

Removed Resources and Constraints

  • No external incident-response augmentation during the first decision cycle
  • Incomplete asset ownership mapping across regional operations environments
  • Limited backup verification capacity under active service-pressure timelines

Enhanced Pressure

  • Oversight requests accelerate before technical certainty is complete
  • Passenger trust degrades as visible delays outpace confirmed status updates
  • Leadership must defend recovery pacing decisions under national scrutiny

Ethical Dilemmas

  • Whether to authorize disruptive containment that worsens short-term delays but reduces systemic risk
  • Whether to publish uncertain restoration timelines to calm passengers or wait for stronger evidence
  • Whether to prioritize high-traffic corridors first or equitable service restoration across all regions

Advanced Debrief Topics

  • Ethics of risk communication during public transport cyber incidents
  • Governance tradeoffs between rapid restart and defensible technical assurance
  • Long-term resilience design for nationally critical rail operations