Code Red Scenario: University Technology Services Crisis (2001)
Planning Resources
Scenario Details for IMs
Code Red Historical Case Study: University Infrastructure During 2001 Internet Worm Outbreak
Detailed Context
Organization Profile
Type: Public state university providing undergraduate and graduate education, operating comprehensive research programs across sciences, engineering, humanities, and social sciences, delivering summer session courses for degree completion and high school dual enrollment.
Size: 15,000 students (12,800 fall/spring enrollment, 4,200 summer session), 2,400 employees including 850 faculty members teaching courses and conducting research, 650 administrative staff managing enrollment, financial aid, facilities, and student services, 380 IT staff supporting campus network and academic technology, 520 support personnel.
Operations: Academic instruction across 65 degree programs, research grants totaling $42 million annually from NSF, NIH, DoD, and private foundations, summer session generating $8.5 million tuition revenue critical for annual budget, student services including housing (4,800 residents), dining, health services, library resources, operating 180 IIS-based web servers hosting department websites, course management systems, research project sites, administrative portals.
Critical Services: Summer session course delivery for 4,200 enrolled students (many graduating seniors needing final credits), research data infrastructure supporting 28 active grant-funded projects with deliverable deadlines, student services web portals for enrollment, financial aid, housing assignments, academic department websites serving as primary communication channel with prospective students and parents.
Technology Infrastructure: Decentralized IT architecture—individual departments independently manage web servers, minimal central coordination of security updates, IIS chosen by departments for “ease of use and Windows compatibility,” campus network connecting 180 IIS servers across academic buildings with shared internet connection, backup systems limited to critical administrative data (research and course sites not included in backup scope).
Current Period: Mid-summer session (July 2001)—courses in progress for 4,200 students, research labs operating at full capacity with graduate students conducting experiments for grant deliverables, IT staff reduced to skeleton crew (many on summer vacation), new student orientation beginning in 3 weeks requiring functional web infrastructure.
Key Assets & Impact
Academic Operations & Summer Session: 4,200 summer students enrolled in courses requiring online syllabus access, assignment submissions, grade posting through course management systems—560 graduating seniors need summer credits to complete degrees for August commencement, international students on F-1 visas require continuous enrollment (disruption threatens visa status), dual enrollment high school students earning college credits (program generates $1.2M revenue), Code Red infection degrading server performance threatens course delivery during compressed summer schedule where falling behind cannot be recovered.
Research Infrastructure & Grant Compliance: 28 active research grants with deliverable deadlines—NSF grants requiring data repository access for multi-institution collaborations, DoD-funded research with quarterly milestone reporting deadlines in 2 weeks, NIH clinical trial data collection systems serving 340 study participants, private foundation grants with specific summer research benchmarks tied to continued funding, server downtime delays research progress risking grant compliance violations, missed deliverables trigger funding holds affecting graduate student stipends and research operations.
University Reputation & Public Trust: Prospective student recruitment depends on department websites—fall admission cycle ongoing, parents researching university for children’s college applications, 2,800 high school juniors scheduled for July campus tours expecting access to program information, university’s 180 infected servers participating in coordinated attack against White House website creating national media attention, being identified as source of attacks damages institution’s technology credibility and academic reputation.
Immediate Business Pressure
Thursday, July 19, 2001 - Morning of Internet-Wide Infrastructure Crisis:
Director of University Technology Services Robert Martinez discovered that the Code Red worm had infected 180 IIS web servers across campus during overnight hours. The worm was actively scanning internet addresses, participating in a coordinated DDoS attack against government websites, and degrading server performance affecting course management systems and research infrastructure.
Security mailing lists confirmed this was an internet-wide threat—Code Red was exploiting a buffer overflow in IIS, spreading to vulnerable systems globally, and coordinated to attack specific government targets on specific dates. Media reports named university servers among the attack sources. The University President’s office was demanding an immediate response.
Patching required taking servers offline—each department’s web infrastructure managed independently, coordination across 65 academic units needed, IT summer skeleton crew (12 staff instead of usual 38) managing campus-wide response, estimated 48-72 hours for complete remediation.
Critical Timeline: - Current moment (Thursday morning, July 19): Worm discovered, 180 servers infected, participating in attacks against federal infrastructure - Stakes: 4,200 summer students depending on course systems, 28 research grants with deliverables at risk, national media identifying university as attack source - Dependencies: Decentralized IT means coordinating 65 department-managed servers, skeleton summer staff, academic operations cannot pause during remediation
Cultural & Organizational Factors
Academic freedom culture enabled decentralized IT management: University tradition values departmental autonomy—when central IT proposed standardized server management and mandatory security updates, faculty governance rejected proposal citing “academic independence” and “research flexibility.” Academic departments defended authority to manage own technology: professors need control over research infrastructure, standardization conflicts with specialized academic software, centralized policies slow down research timelines. Decision reflected institutional values—academic freedom is core university principle, faculty authority over resources is governance norm, research requirements vary by discipline (one-size-fits-all policies don’t work). Result: 65 independent IT silos, inconsistent patching practices, no central security oversight. Code Red exploited this decentralized architecture.
Summer budget constraints reduced IT security staffing: University operates on 9-month academic calendar budget—IT staff encouraged to take summer vacation “when campus is quiet,” security monitoring reduced during summer months, emergency response capabilities minimized by skeleton crew. Budget office decision: summer is low-activity period (fewer students, less support needed), reduced staffing saves overtime costs, IT staff deserve vacation after academic year intensity. Decision made fiscal sense—summer operating budget 40% lower than academic year, reduced campus population means lower support demand, staff retention requires reasonable vacation policies. Reality: Code Red struck during minimum IT staffing when response capacity was lowest.
“Accessibility over security” academic network philosophy: University culture prioritizes open access—when IT proposed network segmentation between academic and administrative systems, leadership rejected as “contrary to collaborative research mission.” Academic values: knowledge sharing requires open networks, research collaboration needs seamless connectivity, restrictive security hinders academic inquiry. Decision reflected educational mission—universities exist to share knowledge freely, academic networks historically more open than corporate environments, research requires connecting diverse systems and external collaborators. Flat network architecture meant one infected department server could spread to entire campus. Code Red propagated through unsegmented infrastructure.
Department-level budget authority prevented coordinated infrastructure investment: Decentralized budgeting model—each academic department controls own operating funds, central IT funded only for basic network infrastructure, departments purchase and manage own servers independently. Finance structure: state funding allocated by college/department enrollment, units prioritize discipline-specific needs (lab equipment, research software) over IT security, central mandates without central funding create unfunded requirements. Department chairs chose: spend on faculty research support (core mission) versus IT security infrastructure (invisible to external reviewers, doesn’t affect grant competitiveness). Security investment competed against academic priorities. Departments chose academic mission, created security gaps.
Operational Context
Universities in 2001 operated under “internet as educational opportunity” paradigm—early web adoption for distance learning, research collaboration, student services modernization. Academic culture valued accessibility and openness over security restrictions. IIS chosen by departments for “user-friendly” Windows integration, minimal security expertise among academic IT staff (hired for teaching technology support, not cybersecurity).
Decentralized IT management reflected academic governance—departments controlled own budgets and technology decisions, central IT provided network backbone but no authority over departmental servers, faculty governance protected autonomy from “administrative overreach.” Result: 180 independently managed IIS servers with inconsistent security practices.
Summer operations created perfect vulnerability window—reduced staffing, ongoing summer session preventing maintenance downtime, “patch in fall before students return” annual pattern. Security updates deferred until fall meant servers vulnerable during summer months when Code Red emerged.
Historical context: July 2001 preceded modern security frameworks—no NIST cybersecurity standards, no higher education ISAC for threat intelligence sharing, no executive orders for critical infrastructure protection. Universities viewed themselves as educational institutions, not cyber targets. Security was IT department concern, not institutional priority.
Code Red revealed structural vulnerabilities in academic IT governance—decentralized management prevented coordinated response, academic freedom culture resisted central security authority, budget models created unfunded security mandates. Worm exploited gap between academic values (openness, autonomy, accessibility) and security requirements (control, standardization, restrictions).
Key Stakeholders
- Robert Martinez (Director of University Technology Services) - Managing campus-wide response with skeleton summer crew while coordinating 65 independent department IT operations
- Dr. Patricia Anderson (Provost) - Balancing academic continuity for 4,200 summer students with institutional reputation damage from participating in attacks against federal government
- Dr. James Wilson (VP for Research) - Protecting $42M in research grants with deliverable deadlines while research infrastructure undergoes emergency patching
- Sarah Chen (Dean of Students) - Maintaining summer session operations for students depending on course systems, including 560 graduating seniors needing credits for August commencement
- Michael Foster (University President) - Managing media crisis as university identified as attack source, responding to governor’s office inquiries about state institution participating in attacks against White House
Why This Matters
You’re not just responding to historical malware outbreak—you’re experiencing the 2001 Code Red incident that transformed how academic institutions understand cybersecurity, revealing fundamental tensions between academic values of openness and autonomy versus security requirements for control and standardization. Your incident response decisions reflect actual choices university leaders faced: protect academic operations and research continuity versus stop participating in attacks against federal infrastructure, respect departmental autonomy versus impose central security authority, maintain summer operations versus emergency patching.
There’s no perfect solution: emergency patching (disrupts 4,200 students’ courses and research deliverables, risking academic progress and grant compliance), maintaining operations (university continues participating in attacks, creating national reputation damage), coordinating 65 independent departments (slow response during an active attack). This historical scenario teaches how early internet threats exposed governance models not designed for cybersecurity—academic freedom culture created security vulnerabilities, decentralized IT prevented coordinated response, and an “education not security” institutional identity left universities unprepared for cyber threats.
IM Facilitation Notes
Emphasize historical context—2001 cybersecurity landscape fundamentally different: Pre-9/11 era, no DHS, no NIST cybersecurity framework, no higher education sector ISAC, universities viewed as educational institutions not cyber targets. Help players understand Code Red occurred before modern security frameworks existed—this wasn’t negligence, security field itself was immature in 2001.
Academic freedom culture creates legitimate governance tensions with security: University faculty autonomy isn’t bureaucratic dysfunction—it’s core academic value protecting research independence and intellectual freedom. Don’t let players dismiss decentralized IT as “bad management.” Academic governance deliberately distributes authority to prevent administrative overreach into scholarly activities.
Budget models in higher education create structural security challenges: Departments control own funds allocated by enrollment, central security requirements compete against faculty hiring and research support (core mission), unfunded mandates from central IT lack implementation authority. Security investment doesn’t affect grant competitiveness or accreditation metrics that departments optimize for.
Summer reduced staffing reflects academic calendar reality: Universities operate on 9-month faculty contracts, summer is genuinely lower activity period (30% student population), IT staff taking earned vacation is reasonable workforce management. Code Red timing during summer wasn’t predictable—attackers don’t coordinate with academic calendars.
Research grant compliance creates real consequences for downtime: Federal grants have legally binding deliverable schedules, missed milestones trigger funding holds affecting graduate student stipends and research operations, multi-institution collaborations depend on data repository access, grant compliance violations affect institutional reputation for future funding competitions.
This scenario teaches evolution of higher education cybersecurity: Code Red was watershed moment—universities realized they were critical infrastructure, academic sector organized information sharing capabilities (REN-ISAC founded 2003), federal government recognized higher education cyber threats. Help players understand Code Red drove institutional learning about cybersecurity importance.
Coordinate response across decentralized governance: Unlike corporate hierarchies, universities can’t simply mandate departmental compliance—academic governance requires consultation with faculty, departments have budgetary autonomy, central IT provides services but limited authority. Response requires building consensus across 65 independent units during emergency.
Hook
“It’s July 19th, 2001 at University Technology Services, and your IT department manages 180 Windows IIS web servers supporting 15,000 students and 65 academic departments. Kevin has just noticed unusual network traffic patterns - your servers are generating massive scanning activity on port 80. Within hours, academic department websites start displaying ‘HELLO! Welcome to http://www.worm.com! Hacked By Chinese!’ messages instead of course materials and research information. Unknown to your team, you’re witnessing the first major self-propagating worm outbreak since the 1988 Morris worm, and your university servers are both victims and unwilling participants in a global attack network.”
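For IMs who want a concrete artifact to show players, the scanning traffic in the hook leaves a recognizable signature in IIS logs: a GET request for `/default.ida` followed by a long filler string and `%u`-encoded shellcode. A minimal detection sketch follows; the log lines and regex are illustrative approximations of the documented probe, not a production detector.

```python
import re

# Code Red probes appear in IIS logs as GET requests for /default.ida
# followed by a long N (or X, in later variants) filler string and
# %uXXXX-encoded shellcode. This regex is a simplified approximation.
CODE_RED_RE = re.compile(r"GET\s+/default\.ida\?[NX]+%u9090", re.IGNORECASE)

def is_code_red_probe(log_line: str) -> bool:
    """Return True if an IIS log line looks like a Code Red scan attempt."""
    return bool(CODE_RED_RE.search(log_line))

# Hypothetical log lines for illustration:
infected = ('2001-07-19 06:12:03 10.1.4.22 GET /default.ida?'
            + 'N' * 224 + '%u9090%u6858%ucbd3 404')
normal = '2001-07-19 06:12:04 10.1.4.23 GET /courses/syllabus.html 200'
```

Handing players a few such log lines during the hook gives the Detective role something concrete to pattern-match against.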
Initial Symptoms to Present:
Key Discovery Paths:
Detective Investigation Leads:
Protector System Analysis:
Tracker Network Investigation:
Communicator Stakeholder Interviews:
Mid-Scenario Pressure Points:
- Hour 1: Computer Science professor discovers his research project website defaced, questions IT security practices
- Hour 2: Network administrator reports university servers are attacking other academic institutions globally
- Hour 3: Student registration system becomes unavailable as worm consumes network bandwidth
- Hour 4: University administration demands explanation as national media reports widespread internet attack
Evolution Triggers:
- If response is delayed beyond 24 hours, university servers may participate in coordinated DDoS attacks
- If containment fails, academic reputation suffers as defaced websites remain visible publicly
- If patch deployment is inadequate, reinfection occurs as worm continues scanning campus networks
Resolution Pathways:
Technical Success Indicators:
- Manual patch deployment stops worm propagation across university IIS servers
- Network traffic monitoring identifies and isolates infected systems preventing further spread
- Academic website restoration maintains summer session operations and student services
Business Success Indicators:
- University reputation protected through rapid response and transparent communication
- Student services maintained with minimal disruption to summer registration and course access
- Academic operations continued demonstrating institutional technology resilience
Learning Success Indicators:
- Team understands automated attack evolution from manual hacking to worm-based propagation
- Participants recognize importance of patch management and security monitoring in academic environments
- Group demonstrates incident response adaptation during early internet security crisis
Common IM Facilitation Challenges:
If Manual Patch Complexity Is Underestimated:
“Kevin needs to manually download, test, and deploy MS01-033 patches to 180 servers without automated tools. How do you coordinate manual patch deployment across 65 distributed academic departments?”
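If players get stuck on where manual patching even starts, the first step is an inventory: which of the 180 servers actually run a vulnerable IIS version. A minimal triage sketch follows; hostnames and banners are hypothetical, and the version check reflects that MS01-033 applies to IIS 4.0 and 5.0.

```python
def triage_banner(host: str, server_header: str) -> str:
    """Classify a host by its HTTP Server banner for patch triage.

    IIS 4.0 and 5.0 shipped the vulnerable Indexing Service ISAPI
    extension targeted by Code Red; anything else is out of scope
    for MS01-033. Banner strings here are illustrative.
    """
    header = server_header.lower()
    if "microsoft-iis/4.0" in header or "microsoft-iis/5.0" in header:
        return "patch-candidate"
    if "microsoft-iis" in header:
        return "verify-version"
    return "not-iis"

# Hypothetical inventory collected by the skeleton crew:
inventory = {
    "www.cs.example.edu": "Microsoft-IIS/5.0",
    "library.example.edu": "Apache/1.3.19 (Unix)",
    "physics.example.edu": "Microsoft-IIS/4.0",
}
worklist = [h for h, b in inventory.items()
            if triage_banner(h, b) == "patch-candidate"]
```

The point for facilitation: even a simple worklist like this requires cooperation from 65 departments that central IT cannot compel.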
If Internet Attack Participation Is Ignored:
“While investigating local defacements, Patricia discovers your university servers are attacking MIT, Stanford, and the White House. How does this change your response priorities?”
If Academic Culture Conflict Is Missed:
“Professor Johnson insists his research server needs public internet access without ‘restrictive’ firewalls. How do you balance academic openness with security requirements during active attack?”
Success Metrics for Session:
Understanding 2001 Technology Context
This scenario represents the actual Code Red worm attack from July 2001. Key historical elements to understand:
- Internet Infrastructure: Much smaller, primarily academic and corporate networks
- Security Awareness: Buffer overflow vulnerabilities were poorly understood outside expert circles
- Patch Management: No automated update systems - all patches applied manually
- Network Architecture: Flat networks with minimal segmentation or access controls
- Response Capabilities: No dedicated incident response teams at most organizations
Collaborative Modernization Questions for Players
Present these questions after initial investigation to guide modernization:
- “How would this attack work in today’s cloud infrastructure?”
- Guide toward: API vulnerabilities, container security, multi-tenant isolation
- “What would be the equivalent of ‘website defacement’ for modern applications?”
- Guide toward: Data manipulation, service disruption, customer-facing impact
- “How has automated scanning and exploitation evolved since 2001?”
- Guide toward: Modern vulnerability scanners, exploit kits, automated toolchains
- “What would university IT infrastructure look like today?”
- Guide toward: SaaS services, cloud providers, mobile applications, remote learning
- “How would incident response be different with modern tools and practices?”
- Guide toward: Automated detection, centralized logging, threat intelligence, coordination
Modernization Discovery Process
After historical investigation, facilitate modernization discussion:
- Technology Translation: Help players identify modern equivalents to 2001 technology
- Attack Vector Evolution: Explore how automated exploitation has advanced
- Impact Amplification: Discuss how interconnected systems change incident scope
- Response Evolution: Compare 2001 manual response to modern automated capabilities
- Scenario Adaptation: Collaboratively develop contemporary version
Learning Objectives
- Historical Perspective: Understanding how cybersecurity threats have evolved
- Technology Evolution: Recognizing parallels between historical and modern vulnerabilities
- Incident Response Development: Appreciating advances in security practices and tools
- Collaborative Learning: Working together to modernize historical threats for current relevance
IM Facilitation Notes
- Start Historical: Present the 2001 scenario authentically without modern context
- Guide Discovery: Use questions to help players discover modern parallels
- Encourage Creativity: Support player ideas for modernization even if unconventional
- Maintain Learning Focus: Emphasize what the historical context teaches about current threats
- Document Evolution: Capture player modernization ideas for future scenario development
This historical foundation approach allows teams to learn from cybersecurity history while developing skills to analyze how threats evolve and adapt to changing technology landscapes.
Template Compatibility
Quick Demo (35-40 min)
- Rounds: 1
- Actions per Player: 1
- Investigation: Guided
- Response: Pre-defined
- Focus: Use the “Hook” and “Initial Symptoms” to quickly establish 2001 university crisis. Present the “Guided Investigation Clues” at 5-minute intervals. Offer the “Pre-Defined Response Options” for the team to choose from. Quick debrief should focus on recognizing first automated worm attack and manual patch management challenges.
Lunch & Learn (75-90 min)
- Rounds: 2
- Actions per Player: 2
- Investigation: Guided
- Response: Pre-defined
- Focus: This template allows for deeper exploration of early internet security challenges. Use the full set of NPCs to create realistic academic pressure and manual response limitations. The two rounds allow worm spread across campus, raising stakes. Debrief can explore balance between academic openness and security, plus brief modernization discussion.
Full Game (120-140 min)
- Rounds: 3
- Actions per Player: 2
- Investigation: Open
- Response: Creative
- Focus: Players have freedom to investigate using the “Key Discovery Paths” as IM guidance. They must develop response strategies balancing academic operations, manual patch deployment, network security, and internet attack participation responsibility. The three rounds allow for full narrative arc including historical context and comprehensive modernization discussion exploring how 2001 worm evolved into contemporary threats.
Advanced Challenge (150-170 min)
- Rounds: 3
- Actions per Player: 2
- Investigation: Open
- Response: Creative
- Complexity: Add red herrings (e.g., legitimate academic research traffic causing false positives). Make containment ambiguous, requiring players to justify manual patch decisions with incomplete vulnerability information. Remove access to reference materials to test knowledge recall of worm behavior. Include deep modernization discussion comparing 2001 manual response to contemporary automated capabilities.
Quick Demo Materials (35-40 min)
Guided Investigation Clues
Clue 1 (Minute 5): “Web server forensics reveal Code Red worm exploiting IIS buffer overflow vulnerability (idq.dll) in University Technology Services servers during July 2001. Network analysis shows significant increase in outbound port 80 scanning traffic from infected IIS web servers targeting random internet addresses. Academic department websites display ‘HELLO! Welcome to http://www.worm.com! Hacked By Chinese!’ defacement messages.”
Clue 2 (Minute 10): “Log analysis shows automated exploitation without human intervention - this is the first major self-propagating worm outbreak since the 1988 Morris worm. Timeline indicates simultaneous infection of multiple campus servers through unpatched IIS systems. Security assessment reveals university delayed MS01-033 patch deployment due to concerns about disrupting summer academic operations.”
Clue 3 (Minute 15): “External security community reports university servers participating in global scanning activity and attacking MIT, Stanford, and other academic institutions. Student registration systems becoming unavailable as worm consumes network bandwidth. Professor Johnson’s research server has been defaced; he is demanding explanations about university security practices while insisting on maintaining open internet access without firewalls.”
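The outbound scanning in Clue 3 is detectable as abnormal fan-out: a single internal host contacting an unusually large number of distinct external addresses on port 80. A simplified sketch over hypothetical flow records is below; the threshold is illustrative and would need tuning against baseline campus traffic.

```python
from collections import defaultdict

def flag_scanners(flows, threshold=100):
    """Flag internal hosts contacting many distinct external IPs on
    TCP/80 - the fan-out pattern a random-scanning worm produces.

    `flows` is an iterable of (src_ip, dst_ip, dst_port) tuples.
    The threshold is illustrative, not a validated cutoff.
    """
    fanout = defaultdict(set)
    for src, dst, port in flows:
        if port == 80:
            fanout[src].add(dst)
    return {src for src, dsts in fanout.items() if len(dsts) >= threshold}

# Hypothetical flow records: one infected host, one normal host.
sample = ([("10.1.4.22", f"203.0.113.{i}", 80) for i in range(120)]
          + [("10.1.4.23", "192.0.2.10", 80)])
suspects = flag_scanners(sample, threshold=100)
```

This is the kind of evidence a Tracker player could produce from border router logs to justify isolating specific servers.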
Pre-Defined Response Options
Option A: Manual Patch Deployment & Server Restoration
- Action: Download and manually apply the Microsoft Security Bulletin MS01-033 patch to all 180 affected IIS servers, coordinate physical server access across academic departments, reboot systems to clear the memory-resident worm, restore defaced websites from backups.
- Pros: Directly addresses IIS indexing service vulnerability preventing reinfection; demonstrates responsible patch management establishing security foundation for future threats.
- Cons: Manual patch deployment extremely time-consuming requiring days for distributed academic infrastructure; server reboots disrupt summer academic operations; coordination complexity across autonomous departments.
- Type Effectiveness: Super effective against Worm type malmons like Code Red; memory-only worm eliminated through reboot after patching prevents reinfection.
Option B: Emergency Firewall Blocking & Traffic Control
- Action: Configure perimeter firewalls to block all outbound port 80 traffic from IIS servers except known legitimate destinations, implement emergency traffic filtering preventing worm propagation, isolate infected systems while maintaining critical academic services.
- Pros: Immediately stops worm spread and prevents university participation in global attacks; faster than manual patching enabling rapid containment.
- Cons: May disrupt legitimate academic web services requiring careful whitelist configuration; doesn’t address underlying IIS vulnerability enabling reinfection after firewall changes; manual firewall rule management across flat academic network.
- Type Effectiveness: Moderately effective against Worm threats; prevents propagation but doesn’t eliminate worm or fix vulnerability; temporary containment requiring subsequent patching.
Option C: IIS Indexing Service Disable & Temporary Mitigation
- Action: Manually disable IIS Indexing Service on all campus web servers eliminating vulnerable component, maintain basic web functionality without search features, coordinate emergency configuration changes across academic departments.
- Pros: Immediately stops attack vector without full patch deployment; faster workaround enabling rapid response; maintains most academic web services during remediation.
- Cons: Disables search functionality affecting some academic applications; requires manual configuration on each server; temporary workaround still requiring eventual patching.
- Type Effectiveness: Partially effective against Worm malmon type; removes attack surface but doesn’t eliminate existing infections; requires combination with server reboots for complete remediation.
Lunch & Learn Materials (75-90 min, 2 rounds)
Round 1: Discovery & Identification (30-35 min)
Investigation Clues:
- Clue 1 (Minute 5): Network Administrator David Kumar reports that faculty are seeing defacement messages on departmental websites. “The Computer Science homepage now says ‘HELLO! Welcome to http://www.worm.com! Hacked By Chinese!’ - and it’s spreading to other departments.”
- Clue 2 (Minute 10): Server forensics reveal exploitation of Microsoft IIS Indexing Service buffer overflow (MS01-033). The attack uses a malformed HTTP GET request that’s spreading automatically between Windows 2000 IIS servers without human intervention - it’s a worm.
- Clue 3 (Minute 15): Network monitoring shows 180 campus IIS servers generating massive scanning traffic to random internet IP addresses. The university is participating in a global internet-wide attack that’s overwhelming networks worldwide.
- Clue 4 (Minute 20): IT Director Robert Martinez reveals that Microsoft released security bulletin MS01-033 a month ago, but patching was delayed during summer semester to avoid disrupting faculty research web servers. “We couldn’t coordinate patch deployment across 65 autonomous departments during active research projects.”
Response Options:
- Option A: Emergency Server Reboot - Immediately reboot all affected IIS servers to clear the memory-resident worm, restore defaced websites from tape backups, delay vulnerability patching until coordinated maintenance window.
- Pros: Fastest path to website restoration; clears active worm infections; minimal summer semester disruption.
- Cons: Doesn’t patch the IIS vulnerability; servers will be reinfected within hours from internet scanning; requires physical access to 180 distributed servers.
- Type Effectiveness: Partially effective - temporarily eliminates worm but leaves systems vulnerable to immediate reinfection.
- Option B: Firewall Emergency Rules - Configure border firewalls to block all outbound port 80 traffic from academic network except approved destinations, stop university’s participation in global attacks.
- Pros: Immediately stops university from attacking internet; faster than manual server patching; protects university reputation.
- Cons: May break legitimate faculty research requiring outbound web access; doesn’t fix underlying IIS vulnerability; requires careful whitelist management.
- Type Effectiveness: Moderately effective - contains propagation but doesn’t eliminate worm or vulnerability.
- Option C: IIS Indexing Service Disable - Manually disable IIS Indexing Service on all campus web servers to remove attack vector, coordinate across academic departments for rapid deployment.
- Pros: Removes vulnerability without full patching; faster than MS01-033 deployment; maintains most web functionality.
- Cons: Disables search features on academic sites; requires manual server-by-server configuration; temporary workaround still needs patching eventually.
- Type Effectiveness: Partially effective - removes attack surface but doesn’t clear existing infections; requires reboot combo.
Round 2: Scope Assessment & Response (30-35 min)
Investigation Clues:
- Clue 5 (Minute 30): If Option A (reboot only) was chosen: Within 90 minutes, campus servers are reinfected from internet scanning. Security researchers report the university is among more than 359,000 compromised systems globally. “We’re back to attacking the internet again.”
- Clue 5 (Minute 30): If Option B or C was chosen: Faculty researchers report broken web applications due to firewall restrictions or missing search functionality. “Our genomics research portal needs to query external databases - the firewall is blocking critical research.”
- Clue 6 (Minute 40): CERT/CC advisory reveals Code Red will trigger mass DDoS attack against www.whitehouse.gov on July 19th. University’s 300+ infected servers will participate in coordinated attack against U.S. government website unless patched.
- Clue 7 (Minute 50): University President receives call from federal agencies about academic institution participation in attacks. “NSA and FBI are contacting universities nationwide. We need to demonstrate responsible internet citizenship.”
- Clue 8 (Minute 55): IT analysis reveals that manual MS01-033 patch deployment to 300+ servers across 50 autonomous departments will require 5-7 days of coordinated effort during summer research season. The July 19th DDoS trigger is less than a week away.
Response Options:
- Option A: Emergency Coordinated Patching - Mobilize all IT staff for 24/7 manual MS01-033 patch deployment across entire campus, coordinate with academic departments for emergency server access, reboot all systems after patching to clear worm.
- Pros: Completely eliminates vulnerability; prevents university participation in July 19th DDoS; demonstrates academic cybersecurity leadership to federal agencies.
- Cons: Requires extensive disruption to summer research; 24/7 IT staff mobilization; coordination complexity across autonomous academic departments.
- Type Effectiveness: Super effective against Worm type - eliminates both the vulnerability and the infection, preventing reinfection and DDoS participation.
- Option B: Phased Departmental Patching - Prioritize patching of high-visibility department servers (main websites, student services), maintain containment measures (firewall/indexing disable) for remaining systems, complete full patching post-DDoS date.
- Pros: Balances security with research continuity; protects highest-visibility systems; reduces coordination burden.
- Cons: University still participates in DDoS with some servers; differential treatment creates vulnerability gaps; extended remediation timeline.
- Type Effectiveness: Moderately effective - progressive improvement but partial DDoS participation remains.
- Option C: External Academic Consortium Support - Coordinate with Internet2 and other research universities for shared response, request federal assistance through EDUCAUSE, collaborate on academic sector patching strategies and technical resources.
- Pros: Leverages academic community resources; federal expertise accelerates response; builds higher education cybersecurity collaboration.
- Cons: Coordination complexity across institutions; potential delays in external resource availability; admission that single institution lacks sufficient capability.
- Type Effectiveness: Moderately effective - improves response quality through collaboration but extends timeline.
Round Transition Narrative
After Round 1 → Round 2:
The team’s initial response determines whether the university quickly returns to vulnerable operation (reboot approach) or maintains containment with research impact (firewall/indexing disable). Either way, the situation escalates dramatically when CERT/CC reveals that Code Red will trigger a coordinated DDoS attack against www.whitehouse.gov on July 19th - just days away. Federal agencies are contacting universities nationwide about their participation in this upcoming attack on U.S. government infrastructure. The team must now balance comprehensive security remediation with summer research continuity, while facing the reality that manual patch deployment to 300+ distributed servers may not be achievable before the DDoS trigger date. The incident transforms from a local website defacement problem into a national security issue requiring inter-agency coordination and academic community collaboration.
Debrief Focus:
- Recognition of first major automated worm vs manual hacking
- Balance between academic openness and security requirements
- Manual patch management challenges in distributed infrastructure
- Brief discussion of modern equivalents (ransomworms, IoT botnets)
Full Game Materials (120-140 min, 3 rounds)
Round 1: Initial Discovery & Assessment (35-40 min)
Opening Scenario:
Dr. Patricia Williams enters the Network Operations Center on a summer Friday afternoon to find Kevin Zhang staring at network monitoring dashboards with obvious concern. “We’re seeing massive spikes in outbound traffic on port 80,” Kevin says. “Multiple servers are scanning random internet addresses - but nobody’s running vulnerability assessments today.”
Within minutes, phone calls start flooding in. The Computer Science department website displays “HELLO! Welcome to http://www.worm.com! Hacked By Chinese!” instead of summer course information. The Engineering school’s research project pages show the same defacement. Student Services reports their online registration system is experiencing connectivity issues.
Patricia quickly assembles the available IT staff. “It’s Friday, July 13th, 2001. We’re managing hundreds of Windows IIS servers across 50 autonomous academic departments. And something is very wrong.”
Team Action: Each player takes 2 actions to investigate the incident using their role’s capabilities. The IM should track what the team discovers based on their investigation choices.
Investigation Discoveries (based on role and approach):
Detective-focused investigations:
- IIS web server logs reveal malformed HTTP GET requests exploiting buffer overflow in indexing service (idq.dll)
- Forensic analysis shows identical exploit code across multiple infected servers - automated rather than manual
- Timeline reconstruction indicates near-simultaneous compromise of campus infrastructure within hours
- Memory analysis reveals worm code running entirely in RAM without disk files
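For IMs who want a tangible prop for the log-analysis clue, the malformed GET requests can be illustrated with a short detector. This is a minimal sketch, not period tooling: the regex targets the well-documented `/default.ida` request with a long run of filler characters (N’s in the original worm, X’s in a variant) followed by `%u`-encoded shellcode; the log line format is invented for the example.

```python
import re

# Code Red exploit requests hit GET /default.ida with a long query of
# filler characters followed by %u-encoded shellcode. This heuristic
# flags matching lines in a simplified web access log.
CODE_RED_RE = re.compile(r"GET\s+/default\.ida\?[NX]{20,}.*%u", re.IGNORECASE)

def find_code_red_hits(log_lines):
    """Return the log lines matching the Code Red request signature."""
    return [line for line in log_lines if CODE_RED_RE.search(line)]

# Hypothetical log lines for demonstration only:
sample = [
    "2001-07-13 14:02:11 10.1.2.3 GET /default.ida?" + "N" * 224
    + "%u9090%u6858%ucbd3 404 -",
    "2001-07-13 14:02:12 10.1.2.9 GET /courses/summer.html 200 -",
]
hits = find_code_red_hits(sample)
print(len(hits))  # 1 - only the exploit line matches
```

In play, the IM can hand players a log excerpt and let them spot the signature before revealing the clue text.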
Protector-focused investigations:
- Vulnerability assessment shows unpatched Microsoft IIS Indexing Service buffer overflow (MS01-033)
- Security review discovers that Microsoft released the MS01-033 patch on June 18 - nearly a month earlier - but it was never deployed on campus
- Network architecture analysis reveals flat campus network enabling rapid worm propagation
- Server configuration audit shows most IIS systems running with default settings and full internet exposure
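The server configuration audit above can be dramatized with a response-classification heuristic. The behavioral detail is an assumption based on period reports - when the .ida script mapping is enabled, IIS hands the request to idq.dll, which returns a handler-specific error rather than a plain 404 from the static-file handler - and the response strings below are illustrative, not captured from a real server.

```python
def ida_mapping_enabled(status_code, body):
    """
    Classify a server's response to a request for a nonexistent .ida file.

    Assumed heuristic: with the .ida mapping removed, the static handler
    answers with an ordinary 404; with the mapping enabled, the error page
    comes from idq.dll and mentions the IDQ/.ida handler.
    """
    if status_code == 404 and "ida" not in body.lower():
        return False  # plain 404 from static handler: mapping likely removed
    return "idq" in body.lower() or "ida file" in body.lower()

# Illustrative responses for demonstration:
print(ida_mapping_enabled(200, "The IDQ file nosuch.ida could not be found."))  # True
print(ida_mapping_enabled(404, "The page cannot be found"))                     # False
```

This keeps the audit concrete without requiring players to know IIS internals: vulnerable servers are the ones whose error pages betray the Indexing Service handler.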
Tracker-focused investigations:
- Network flow analysis shows outbound scanning traffic to random Class A, B, and C internet addresses
- External communication logs reveal university servers are attacking MIT, Stanford, Berkeley, and other academic institutions
- Internet traffic patterns indicate participation in global scanning activity affecting hundreds of thousands of systems
- CERT/CC security advisories confirm university is part of worldwide Code Red worm outbreak
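The outbound-scanning signal Trackers uncover boils down to a distinct-destination count per source - the same logic a 2001 NOC might have run over flow exports by hand. The flow tuples and threshold below are illustrative, not from the scenario.

```python
from collections import defaultdict

def flag_scanners(flows, threshold=50):
    """
    flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    Flag internal sources contacting an unusually large number of distinct
    destinations on port 80 - the worm's scanning footprint.
    """
    targets = defaultdict(set)
    for src, dst, port in flows:
        if port == 80:
            targets[src].add(dst)
    return sorted(src for src, dsts in targets.items() if len(dsts) >= threshold)

# Synthetic flows: one host scanning 60 random addresses, one browsing normally.
flows = [("10.1.2.3", f"198.51.{i}.{i % 250 + 1}", 80) for i in range(60)]
flows += [("10.1.2.9", "192.0.2.10", 80)] * 5
print(flag_scanners(flows))  # ['10.1.2.3']
```

A legitimate web user revisits a handful of sites; a worm-infected host touches hundreds of distinct addresses, which is why even primitive flow counting spots it.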
Communicator-focused investigations:
- Faculty interviews reveal growing frustration with defaced research websites and lost academic credibility
- Student Services reports increasing complaints about unavailable online registration and course materials
- University administration demands status updates as national media begins reporting internet-wide attack
- Academic peer institutions share similar experiences through EDUCAUSE emergency communications
Key NPCs and Interactions:
Dr. Patricia Williams (IT Director):
- Former Bell Labs engineer with deep networking knowledge but limited worm attack experience
- Balancing security response with academic culture valuing openness and minimal restrictions
- Under pressure from university administration to explain security failures
- Available for technical guidance: “At Bell Labs, we dealt with telephone network attacks - but this automated internet worm is unprecedented.”
Kevin Zhang (Network Administrator):
- Recent Computer Science graduate experiencing first major security incident
- Discovering that automated attacks spread faster than manual response capabilities
- Struggling with manual patch deployment across distributed academic infrastructure
- Reality check: “I’m supposed to manually patch 300+ servers across 50 departments that won’t even return my voicemails during summer research season?”
Professor Michael Johnson (Computer Science Faculty):
- Research web server was defaced, questioning IT security competency
- Insisting on maintaining open internet access for academic research without firewall restrictions
- Represents academic culture prioritizing accessibility over security
- Conflict point: “I need my genomics server to query external databases freely - your ‘security measures’ are blocking critical research!”
Lisa Rodriguez (Student Services Manager):
- Fielding increasing student complaints about unavailable online services
- Fall registration deadline approaching with systems unreliable
- Non-technical perspective on IT security failures
- Pressure point: “Students are calling asking if they can register for fall classes - what am I supposed to tell them?”
Round 1 Pressure Events:
These occur during the 35-40 minute investigation period, building tension:
- 15 minutes in: Lisa Rodriguez calls reporting that student online registration system is experiencing severe slowdowns. “The fall registration deadline is next week - we can’t have system outages.”
- 25 minutes in: External CERT/CC contacts university reporting that campus servers are attacking critical internet infrastructure. “Your institution is participating in attacks against government and academic networks worldwide.”
- 30 minutes in: Professor Johnson storms into IT demanding to know why his research server is defaced. “This makes our entire Computer Science department look incompetent! How did this happen?”
Round 1 Conclusion:
After investigations, the team should understand they’re facing the first major automated worm attack in internet history, affecting university infrastructure through unpatched IIS buffer overflow vulnerability, with campus servers now participating in global internet attacks. Patricia asks: “Based on what you’ve discovered, what’s your initial response strategy?”
Round 2: Response & Escalation (35-40 min)
Situation Development:
The team’s initial response strategy meets immediate reality challenges. If they chose to simply reboot servers, the worm reinfects within hours from continued internet scanning. If they implemented firewall blocking, faculty research requiring outbound web access breaks. If they disabled IIS Indexing Service, search functionality disappears from academic websites.
More critically, new intelligence emerges that transforms the incident from local university problem to national security concern.
Opening:
CERT/CC issues emergency advisory: Code Red worm contains hardcoded DDoS trigger date of July 19th targeting www.whitehouse.gov. Every infected system worldwide - including university’s 300+ compromised servers - will launch coordinated attack against U.S. government website at predetermined time. Federal agencies are contacting academic institutions about their participation.
Patricia receives call from NSA: “We’re tracking internet-wide attack preparations. Your university has significant infected infrastructure. What’s your remediation timeline?”
Kevin reports sobering analysis: Manual MS01-033 patch deployment to 300+ servers distributed across 50 autonomous academic departments during active summer research season will require 5-7 days of coordinated effort. The DDoS trigger date is less than a week away.
Team Action: Each player takes 2 actions to develop and implement response strategy, considering:
- Technical remediation (patch deployment, containment, recovery)
- Academic continuity (summer research, student services, faculty relations)
- Federal coordination (NSA/FBI expectations, internet citizenship responsibility)
- Resource constraints (manual patch deployment, distributed infrastructure, timeline pressure)
Response Options and Consequences:
Emergency 24/7 Coordinated Patching:
- Implementation: Mobilize all IT staff for around-the-clock manual patch deployment, coordinate emergency server access with all 50 academic departments, prioritize critical systems first but aim for complete coverage before July 19th DDoS date
- Immediate Effects: Requires significant disruption to summer research as servers need rebooting, extensive coordination overhead, 24/7 staff mobilization with overtime costs
- Outcome: Successfully patches 80-90% of servers before DDoS trigger, prevents majority of university participation in White House attack, demonstrates academic cybersecurity leadership to federal agencies
- Learning: Shows importance of emergency response mobilization and inter-departmental coordination under crisis timeline
Phased Departmental Approach:
- Implementation: Prioritize patching high-visibility systems (main websites, student services, critical research) first, maintain containment measures for remaining infrastructure, complete full remediation after DDoS date passes
- Immediate Effects: Reduces research disruption through selective patching, balances security with academic continuity, manages coordination complexity
- Outcome: University still participates in DDoS with 30-40% of servers, creates differential security posture with some departments protected and others vulnerable, extended remediation timeline
- Learning: Demonstrates tradeoffs between comprehensive security and operational continuity, risk of partial remediation
Academic Consortium Collaboration:
- Implementation: Coordinate with Internet2 and peer research universities for shared response resources, request federal technical assistance through EDUCAUSE, pool IT staff across institutions for collective patch deployment support
- Immediate Effects: Builds higher education cybersecurity community collaboration, accesses federal expertise and resources, admits individual institution limitations
- Outcome: Improves patch deployment efficiency through shared resources, establishes academic security coordination precedent, extends response timeline through coordination overhead
- Learning: Shows value of inter-institutional cooperation and federal partnership in major incidents
Network Isolation Strategy:
- Implementation: Completely isolate campus academic network from internet until patching complete, establish temporary remote access through secure gateway for critical research needs, accept research disruption for comprehensive security
- Immediate Effects: Immediately stops worm propagation and prevents DDoS participation, causes significant summer research disruption, requires substantial faculty communication and justification
- Outcome: Guarantees zero university participation in White House attack, creates academic community backlash against restrictive security measures, demonstrates absolute prioritization of security over research continuity
- Learning: Illustrates extreme containment approach and resulting academic culture conflicts
Hybrid Technical + Political Strategy:
- Implementation: Deploy maximum feasible patching effort while simultaneously engaging with federal agencies to provide real-time remediation status, coordinate with CERT/CC on internet service provider level blocking as backup, maintain transparent communication with university administration
- Immediate Effects: Balances technical remediation with external stakeholder management, demonstrates good-faith effort even if incomplete, builds federal relationships
- Outcome: Achieves 70-80% patch coverage with federal awareness of ongoing effort, potential ISP-level containment as fallback, preserves academic reputation through transparency
- Learning: Shows integration of technical response with strategic communication and external coordination
Round 2 Pressure Events:
Building tension during response implementation:
- 15 minutes in: Professor Johnson escalates to Dean of Engineering complaining about IT security restrictions blocking research. Dean calls Patricia demanding explanation.
- 25 minutes in: Student newspaper runs story about university cybersecurity failures and participation in global internet attack. Public affairs office requests detailed statement.
- 30 minutes in: Federal agencies provide updated intelligence showing Code Red variant may have additional capabilities beyond current understanding. Uncertainty increases.
- 35 minutes in: Kevin reports that 3 departments are refusing emergency server access during active research projects. “Computer Science, Engineering, and Physics won’t grant access until after their critical experiments complete.”
Round 2 Conclusion:
Regardless of chosen approach, the team should be managing complex tradeoffs between security, research continuity, federal expectations, and resource constraints. The incident has grown from technical problem to organizational crisis requiring leadership decisions about priorities and acceptable risks. Patricia says: “We need final decisions - July 19th is approaching and we’ll be judged on our choices.”
Round 3: Resolution & Modernization (35-40 min)
Final Situation:
July 19th, 2001 arrives. The Code Red worm’s hardcoded DDoS trigger activates worldwide. Depending on the team’s Round 2 response strategy:
If comprehensive patching achieved (80%+ coverage): University infrastructure is largely protected. Only a handful of resistant departments’ servers participate in White House attack. Federal agencies acknowledge university’s exceptional response effort. Local news runs positive story about academic cybersecurity leadership. Patricia receives commendation from university president.
However, 5-7 days of intensive patch deployment revealed serious infrastructure management gaps. The incident demonstrated that manual security operations don’t scale across distributed academic environments. Summer research was significantly disrupted. Faculty trust in IT requires rebuilding.
If partial/phased approach taken (40-70% coverage): Significant portion of university servers participate in DDoS attack. Federal investigation confirms university made good-faith effort but lacked capability for complete remediation. Mixed public perception - responsible attempt but incomplete execution. Some academic departments remained vulnerable throughout.
The experience shows limitations of resource-constrained response and organizational coordination challenges. University administration questions IT capability and funding. Academic community debates appropriate balance between openness and security.
If isolation/extreme measures used: University successfully avoided all DDoS participation but caused major summer research disruption. Faculty backlash against “excessive” security restrictions. Academic culture conflict between IT security and research freedom intensifies. Federal agencies note successful containment but question sustainability of approach.
The incident creates lasting tension between security and academic values, requiring careful relationship rebuilding and policy development.
Team Action - Part 1: Immediate Aftermath (15-20 min):
Each player takes 1-2 actions to:
- Complete any remaining technical remediation
- Address stakeholder concerns (faculty, students, administration, federal agencies)
- Document lessons learned from the 2001 worm response
- Assess organizational changes needed for future security
Team Action - Part 2: Collaborative Modernization (15-20 min):
The IM facilitates group discussion to modernize this 2001 historical scenario to contemporary threat landscape:
Facilitation Questions:
- “How would this attack work in today’s cloud infrastructure?”
- Guide toward: Container vulnerabilities, serverless security, multi-cloud complexity, API exploitation, infrastructure-as-code risks
- “What would be the modern equivalent of ‘website defacement’?”
- Guide toward: Data manipulation, service disruption, customer-facing application compromise, cloud resource hijacking for cryptomining
- “How has automated scanning and exploitation evolved since 2001?”
- Guide toward: Shodan and internet scanning platforms, automated exploit frameworks, vulnerability disclosure timelines, zero-day markets, nation-state capabilities
- “What would university IT infrastructure look like today?”
- Guide toward: Cloud services (AWS/Azure/GCP for research computing), SaaS applications (Canvas, Google Workspace), mobile applications, remote learning platforms, IoT research devices, bring-your-own-device
- “How would incident response be different with modern tools and practices?”
- Guide toward: Automated patching and vulnerability management, centralized logging and SIEM, threat intelligence feeds, incident response platforms, cloud security posture management, academic sector ISACs
- “What would the equivalent ‘DDoS trigger’ scenario be in contemporary context?”
- Guide toward: Ransomworm propagation, cloud resource cryptocurrency mining, AI training resource theft, research data exfiltration, supply chain compromise through academic software repositories
Collaborative Modernization Output:
Team works together to develop contemporary version of Code Red scenario:
- Modern university infrastructure context (cloud, SaaS, mobile, IoT)
- Updated attack vector (container vulnerability, API exploitation, supply chain)
- Contemporary pressure points (research data integrity, cloud cost explosion, compliance)
- Current response capabilities (automated tools, threat intelligence, coordination)
Victory Conditions Assessment:
Technical Success:
Business Success:
Learning Success:
Final Debrief Topics:
Historical Context Lessons:
- Code Red (July 2001) represented paradigm shift from manual hacking to automated worm propagation
- Buffer overflow vulnerabilities were poorly understood outside expert security community
- Manual patch management and lack of automated tools created significant response challenges
- Academic culture valuing openness conflicted with emerging security requirements
- Federal government concern about critical infrastructure protection was intensifying
Modern Parallels:
- IoT botnets (Mirai) follow similar automated exploitation and DDoS patterns
- Ransomworms (WannaCry, NotPetya) combine worm propagation with business impact
- Cloud misconfigurations enable automated scanning and exploitation
- Academic research infrastructure remains attractive target for resource theft
- Coordination between education sector and federal cybersecurity agencies has matured
Incident Response Evolution:
- 2001: Manual patching, limited coordination, reactive response, resource constraints
- 2025: Automated vulnerability management, threat intelligence, proactive hunting, orchestrated response
- Persistent challenges: Distributed infrastructure, organizational coordination, resource prioritization
- New challenges: Cloud complexity, supply chain risks, nation-state threats, AI/ML attack surfaces
Organizational Lessons:
- Security cannot be deprioritized during busy operational periods (summer research)
- Patch management must be systematic rather than ad-hoc
- Academic culture requires security approaches respecting research mission
- Incident response requires organizational support beyond IT capabilities
- Federal partnership and sector coordination are force multipliers
Round 3 Conclusion:
Patricia addresses the team: “We’ve navigated the first major automated worm attack in internet history. More importantly, we’ve learned how cybersecurity threats evolve and how our response capabilities must advance to meet them. The Code Red worm of 2001 taught the entire internet community that automated attacks change everything - and those lessons still guide us today.”
Advanced Challenge Materials (150-170 min, 3 rounds)
Additional Complexity Layers
For experienced teams seeking maximum challenge, add these complexity elements:
1. Incomplete Information & Uncertainty
Initial Phase Ambiguities:
- Microsoft Security Bulletin MS01-033 patch deployment guidance is unclear about production environment impacts
- Early CERT/CC advisories contain conflicting information about worm capabilities and propagation mechanisms
- Network monitoring tools show suspicious traffic but can’t definitively distinguish worm scanning from legitimate academic research activities
- Forensic analysis reveals worm code but reverse engineering takes time to understand full functionality
Implementation: Remove or delay access to clear “Guided Investigation Clues.” Make players work with ambiguous early reporting, conflicting intelligence, and incomplete technical understanding. They must make decisions with uncertainty about patch impacts, worm capabilities, and appropriate response scope.
2. Red Herrings & False Leads
Misleading Evidence:
- Legitimate Research Traffic: Computer Science department is running authorized vulnerability scanner for research project, creating false positives in network monitoring alongside actual worm traffic
- Unrelated Website Issues: Physics department website was legitimately being redesigned during incident timeframe - defacement reports may be confused with planned downtime
- Administrative Access Logs: Routine system administrator remote access from home appears suspicious in log analysis without proper context
- Faculty Complaints: Engineering professor complains about “computer acting strange” but investigation reveals unrelated hardware failure, consuming investigation time
Implementation: Seed investigation with 2-3 red herrings that consume player time and actions. Require careful analysis to distinguish legitimate activities from actual worm indicators. Penalize hasty conclusions with false positive responses.
3. Resource Constraints & Tough Choices
Limited IT Staff:
- Only 3 IT staff available during summer Friday afternoon when attack detected
- Weekend coverage minimal - must choose between calling in vacation staff or delaying response
- Manual patch deployment to 300+ servers exceeds available staff capacity
- Must prioritize which systems to remediate first with insufficient resources for complete coverage
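If the IM wants players to make their triage reasoning explicit, a toy scoring function can structure the discussion. The attributes and weights are illustrative, not drawn from the scenario; the point is to force players to name and rank the criteria.

```python
def triage_score(server):
    """
    Toy prioritization: weight internet exposure, public visibility, and
    student-facing services above active research. Weights are illustrative.
    """
    return (3 * server.get("internet_facing", 0)
            + 2 * server.get("high_visibility", 0)
            + 2 * server.get("student_services", 0)
            + 1 * server.get("active_research", 0))

# Hypothetical campus servers for demonstration:
servers = [
    {"name": "registration-portal", "internet_facing": 1, "student_services": 1},
    {"name": "physics-lab-box", "active_research": 1},
    {"name": "main-www", "internet_facing": 1, "high_visibility": 1},
]
queue = sorted(servers, key=triage_score, reverse=True)
print([s["name"] for s in queue])
# ['registration-portal', 'main-www', 'physics-lab-box']
```

Players can argue about the weights themselves - which is exactly the tradeoff conversation this constraint layer is meant to provoke.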
Technical Limitations:
- No automated patch deployment tools in 2001 - every server requires manual access
- Tape backup restoration for defaced websites takes 6-8 hours per server
- Network monitoring tools primitive compared to modern capabilities - limited visibility
- No centralized logging or SIEM - must manually access each server for forensics
Budget Pressures:
- Emergency weekend overtime will exhaust quarterly IT budget
- University administration questions security spending after incident occurs
- Requesting additional resources requires justification to non-technical leadership
- Faculty departments bill IT for research disruption during emergency patch deployment
Implementation: Enforce realistic resource constraints. Make players explicitly choose which systems to protect with limited staff/time/budget. Require justification for resource requests. Create tension between comprehensive security and practical limitations.
4. Organizational Politics & Conflicts
Academic Culture Resistance:
- Computer Science Department: “We’re security researchers - we don’t need IT telling us how to secure our systems. This is embarrassing.”
- Research Computing: “Our grant-funded high-performance computing cluster can’t be taken offline during active NSF-funded research - that’s $2M in jeopardy.”
- Faculty Senate: “This heavy-handed security response threatens academic freedom and open research principles that define our university.”
Administrative Conflicts:
- University President: “How did this happen and who’s responsible? The Board of Trustees is demanding accountability.”
- Public Affairs: “Media is running stories about our security failures - we need messaging that protects institutional reputation.”
- General Counsel: “Federal agencies investigating our participation in attacks creates legal liability - what’s our exposure?”
Departmental Autonomy:
- Multiple departments refuse IT emergency access to their servers during active research
- Some departments have their own IT staff who don’t report to central IT
- Academic culture values departmental autonomy over centralized security control
- Political relationships matter - forcing compliance has career consequences for IT leadership
Implementation: Introduce 2-3 explicit organizational conflicts requiring non-technical resolution. Make players navigate academic politics, justify decisions to non-technical stakeholders, and manage competing organizational priorities. Success requires both technical competency and organizational leadership.
5. Cascading Complications
Round 1 Complications:
- Initial server reboots to clear worm cause research data loss for faculty who didn’t follow backup procedures
- Emergency firewall rules break legitimate academic collaborations with peer institutions
- Media reports create parent concerns about student data security despite no actual student data compromise
Round 2 Complications:
- Patch deployment causes unexpected compatibility issues with custom academic applications
- Federal investigation creates additional reporting requirements consuming IT staff time
- Student newspaper investigation reveals that IT delayed patching due to operational concerns - public criticism intensifies
Round 3 Complications:
- Some patched servers experience stability issues requiring troubleshooting during critical remediation window
- Academic peer institutions share intelligence about Code Red variant with additional capabilities not yet seen at your university
- University administration announces mandatory security review with external consultants - IT leadership credibility questioned
Implementation: Introduce 1-2 unexpected complications per round that weren’t predictable from initial analysis. Require adaptive response as situation evolves beyond initial scope. Test ability to manage cascading effects and maintain strategic focus despite tactical distractions.
Advanced Challenge Round Structure
Round 1: Discovery Under Uncertainty (45-50 min)
Players must investigate the Code Red worm with:
- Limited/conflicting early intelligence about worm capabilities
- Red herrings mixed with genuine attack indicators
- Ambiguous network traffic requiring careful analysis
- Pressure to respond quickly despite incomplete information
Success requires: Distinguishing signal from noise, making reasoned judgments with uncertainty, avoiding false positive responses while not missing actual threats.
Round 2: Response Under Constraints (45-50 min)
Players must develop response strategy while managing:
- Insufficient IT staff for comprehensive manual patch deployment
- Academic departments refusing emergency access during research
- Federal pressure for rapid remediation before DDoS trigger date
- Budget limitations and organizational politics
Success requires: Strategic prioritization, stakeholder management, creative resource utilization, explicit tradeoff decision-making with justification.
Round 3: Resolution & Modernization Under Complexity (45-50 min)
Players must complete incident response while handling:
- Cascading complications from earlier decisions
- Organizational accountability and external review
- Incomplete remediation requiring risk acceptance
- Collaborative modernization discussion translating lessons to contemporary context
Success requires: Adaptive problem-solving, organizational leadership, learning extraction despite imperfect outcomes, strategic thinking about threat evolution.
Advanced Challenge Debriefing
Focus Areas:
1. Decision-Making Under Uncertainty:
- How did the team handle conflicting information and ambiguous evidence?
- What frameworks did they use to make decisions without complete information?
- Were they able to avoid analysis paralysis despite uncertainty?
- How did they distinguish between reasonable caution and excessive hesitation?
2. Resource Allocation & Prioritization:
- How did the team prioritize limited IT staff across 300+ vulnerable servers?
- What criteria did they use to make triage decisions?
- Were they able to explicitly acknowledge and justify tradeoffs?
- How did they balance comprehensive security with practical constraints?
3. Organizational Leadership:
- How effectively did the team navigate academic culture and departmental politics?
- Were they able to communicate security needs to non-technical stakeholders?
- How did they handle conflicts between security requirements and research continuity?
- What strategies worked for managing organizational resistance?
4. Adaptive Response:
- How well did the team respond to unexpected complications and cascading effects?
- Were they able to adjust strategy as situation evolved beyond initial scope?
- How did they maintain strategic focus despite tactical distractions?
- What did they learn about incident response resilience?
5. Historical Learning & Modernization:
- What specific lessons from 2001 Code Red apply to contemporary threats?
- How have automated attacks evolved from simple worms to modern sophisticated campaigns?
- What parallels exist between historical buffer overflow exploitation and modern vulnerability landscape?
- How should incident response practices evolve to address emerging threats while learning from history?
Victory Conditions (Advanced Challenge):