Structured Debrief Protocol
The debrief is where Malware & Monsters sessions become more than games. Players leave a session having experienced incident response under pressure – the debrief is what converts that experience into lasting behavioral change.
This protocol gives you a structured methodology for the learning extraction phase of any M&M session. It works alongside the Closing Script, which handles the emotional check-in, MalDex documentation, and community wrap-up that follow the debrief.
- This protocol = structured reflection and forward planning (the learning core).
- Closing Script = emotional check-in, MalDex entry, community connection, farewell.
Run this protocol first, then transition to the Closing Script.
Why This Protocol Works
This protocol isn’t improvised – it’s built on decades of research into how people learn from experience. Understanding the theory helps you adapt the protocol on the fly while preserving what makes it effective.
The core problem: Experience alone does not produce learning. A team can play through an entire M&M scenario, make critical decisions under pressure, and walk away having learned nothing – because learning requires structured reflection on the experience, not just the experience itself.
Three frameworks converge on the same structure:
| Framework | Key Insight | How It Shapes This Protocol |
|---|---|---|
| Kolb’s Experiential Learning Cycle (1984) | Learning is a four-stage cycle: experience → reflect → conceptualize → experiment. Skipping any stage produces incomplete learning. | The protocol’s five phases map directly: gameplay is the experience; Event Recall and Role Reflection are reflection; Gap Analysis is conceptualization; Real-World Transfer and Commitment are experimentation. |
| Gibbs’ Reflective Cycle (1988) | Emotions must be surfaced before analytical evaluation begins. Feelings that aren’t acknowledged (frustration, embarrassment, feeling exposed) block cognitive learning. | Phase 2 (Role Reflection) deliberately creates space for individual perspective and emotional processing before Phase 3 (Gap Analysis) introduces evaluation. |
| The After-Action Review (US Army, 1981) | Four questions – “What was supposed to happen? What actually happened? Why the difference? What do we sustain or improve?” – with simplicity as a design constraint. Meta-analyses show AARs produce medium-to-large effects on training retention (effect size d = 0.67-0.79 across thousands of participants). | The full format’s Phase 1 and Phase 3 are direct adaptations of the AAR’s four questions, contextualized for M&M’s game mechanics. |
On written commitments: A meta-analysis of 94 studies (Gollwitzer & Sheeran, 2006) found that written implementation intentions – specific “when [situation], I will [action]” plans – produce a medium-to-large effect (d = 0.65) on goal attainment. Verbal intentions alone lose roughly half their effect before reaching behavior. This is why Phase 5 requires writing, not just discussion.
Timing Guide
When to Begin Winding Down
The debrief requires protected time – do not let gameplay consume it. Use these signals to begin transitioning:
| Session Length | Signal to Wind Down | Debrief Format |
|---|---|---|
| 60 minutes | At the 40-minute mark, begin final round | Short (5 min) |
| 90 minutes | At the 60-minute mark, begin final round | Full (20-30 min) |
| 120 minutes | At the 85-minute mark, begin final round | Full (20-30 min) |
| Half-day workshop | At 80% elapsed, begin final round | Full + extended discussion |
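For facilitators who script a session timer, the table above can be reduced to a small lookup plus the 80%-elapsed rule for longer formats. This is an illustrative sketch, not part of the M&M toolkit; the function name and structure are assumptions.

```python
# Hypothetical helper: when to announce the final round, per the
# Timing Guide table. Listed session lengths use the table's fixed
# marks; anything longer falls back to the 80%-elapsed rule.

WIND_DOWN = {
    60: 40,    # 60-minute session: final round at the 40-minute mark
    90: 60,    # 90-minute session: final round at the 60-minute mark
    120: 85,   # 120-minute session: final round at the 85-minute mark
}

def wind_down_mark(session_minutes: int) -> int:
    """Return the minute mark at which to begin the final round."""
    if session_minutes in WIND_DOWN:
        return WIND_DOWN[session_minutes]
    # Half-day workshops and other long formats: 80% elapsed.
    return int(session_minutes * 0.8)

print(wind_down_mark(90))   # 60
print(wind_down_mark(240))  # 192 (80% of a 4-hour block)
```

Set an alarm for the returned mark so gameplay never eats the protected debrief time.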
Transition phrase: “Team, we’re entering the final phase of this incident. You have about [X] minutes to wrap up your response before we step out of character and debrief.”
If you’re running behind, shorten the final gameplay round rather than cutting the debrief. A session without a debrief is an experience without learning transfer.
Short Format (5 Minutes)
Use this when time is tight. The short format focuses on the irreducible core: name the moment, name the gap, name the action.
Setup: Have each player grab something to write with (phone notes app, index card, or the Commitment Card at the end of this guide).
Step 1: Name the Moment (1 min)
“Think back through the session. What was the one decision point where you felt most uncertain – where you weren’t sure what to do next?”
- Go around the table. Each player names their moment in one sentence.
- Do not discuss or analyze yet – just collect the moments.
Step 2: Name the Gap (1.5 min)
“Now – knowing what you know after seeing how the scenario played out – what would the ideal response have been at that moment?”
- Each player states what they’d do differently.
- The gap between “what I did” and “what I should have done” is the learning.
Step 3: Name the Action (2.5 min)
“Based on that gap, what’s one specific thing you’ll do differently in your real work this week? Not ‘communicate better’ – something concrete. A tool you’ll set up, a process you’ll document, a conversation you’ll have.”
- Each player writes their commitment on their card or notes.
- Read commitments aloud to the group – public commitment increases follow-through.
A debrief where everyone talks brilliantly but leaves without a written action produces zero behavioral change. Research consistently shows that written commitments outperform verbal ones – and the most effective format is the implementation intention: “When [situation], I will [action].” Across 94 studies, this specific format produced significantly better follow-through than vague written goals. Prioritize the written commitment over extended discussion.
Full Format (20-30 Minutes)
The full format uses the After-Action Review (AAR) methodology adapted for M&M’s game structure. Five phases, each building on the last.
Phase 1: Event Recall (5 min)
Goal: Reconstruct what happened before analyzing it.
“Let’s walk through what just happened. I’ll guide us through the key moments.”
Walk through the session chronologically, hitting these anchor points:
- Round 1: “What was your first indicator that something was wrong? What did [Malmon name] do that tipped you off?”
- Round 2: “Where did things escalate? What new information changed your approach?”
- Round 3: “How did you decide on your final response? What trade-offs did you make?”
- Network Security Status: “We started at [X] and ended at [Y]. What were the turning points that moved the needle?”
Facilitator note: Keep this factual, not evaluative. “What happened?” not “What went wrong?” You’re building a shared understanding of the timeline before analyzing it.
Theory note: This is Kolb’s “concrete experience” stage and the AAR’s first two questions (“What was supposed to happen? What actually happened?”). Establishing shared facts before analysis prevents premature judgment.
Phase 2: Role Reflection (5 min)
Goal: Surface perspectives that might otherwise go unheard.
Use a structured rotation to prevent any single voice from dominating:
Individual reflection (1 min silent): “Take a moment to think about this from your role’s perspective. What did you see that others might have missed?”
Round-robin sharing (4 min): Each player shares for 30-60 seconds from their role’s viewpoint:
- Technical roles (analyst, detective): “What technical evidence was most and least useful? What would you investigate first next time?”
- Leadership roles (crisis manager, executive): “What information did you need that you didn’t have? Where did coordination help or hinder?”
- Communication roles (media, legal): “What stakeholder pressures affected technical decisions? Where did communication gaps create risk?”
Facilitator note: Use the “speak once before twice” principle – everyone shares once before anyone speaks a second time.
Theory note: Gibbs’ reflective cycle separates feelings from evaluation – emotions surfaced here won’t contaminate the analytical work in Phase 3. Boud’s model warns that unacknowledged negative feelings (embarrassment, frustration) actively block cognitive learning.
Phase 3: Gap Analysis (5-8 min)
Goal: Identify where game decisions diverged from ideal incident response.
“Now let’s dig into the difference between what happened and what would have been ideal.”
Key questions (pick 2-3 based on what emerged):
- “At what point did the team have enough information to act but hesitated? What caused the hesitation?”
- “Were there any moments where the team split on what to do? What drove the disagreement?”
- “What assumptions did the team make early on that turned out to be wrong? When did you realize it?”
- “If the [Malmon name] had behaved differently – say, a Cryptor instead of a Controllor – how would your response have changed?”
Facilitator note: Steer toward process gaps, not outcome evaluation. “We should have isolated the machine sooner” is an outcome. “We didn’t have a clear escalation trigger for when to isolate” is a process gap – and it’s transferable to real incidents.
Theory note: This is Kolb’s “abstract conceptualization” and the AAR’s third question (“Why was there a difference?”). The shift from specific events to transferable principles is where learning generalizes beyond the scenario.
Phase 4: Real-World Transfer (5-7 min)
Goal: Bridge from the game world to players’ actual workplaces.
“Everything we just discussed happened in the game. Now let’s make it real.”
Adaptation by session level (pick the questions that match your group):
Beginner:
- “Before today, how would you have described ‘incident response’ to a colleague? Has that changed?”
- “What’s one thing from this session you’d want your team to know about?”
- “If a real phishing email landed in your inbox tomorrow, what would you do differently than you would have yesterday?”
Intermediate:
- “How does your organization’s actual IR process compare to what we just practiced?”
- “What gaps did this scenario expose in your team’s current playbooks or procedures?”
- “Which role in the game maps closest to your real job? What did playing it teach you?”
Advanced:
- “Where did the scenario’s constraints (time, information, team size) mirror your real operating environment?”
- “What systemic issues – not individual mistakes – would need to change in your organization to handle this threat better?”
- “If you had to brief your CISO on one insight from this session, what would it be?”
Phase 5: Commitment (5 min)
Goal: Every player leaves with a named, specific, written action.
“We’ve identified gaps, connected them to real work, and discussed what should change. Now make it personal.”
Distribute or display the Commitment Card template (below). Each player fills in:
- STOP: One thing to stop doing (an ineffective habit, a false assumption, a skipped step)
- START: One thing to start doing (a new process, a conversation, a tool)
- CONTINUE: One thing that worked well and should be reinforced
“Write these down. Then share your START item with the group – that’s your public commitment.”
The strongest commitments use the implementation intention format: “When [trigger situation], I will [specific action].” For example: “When I receive an email with an attachment from an unknown sender, I will check the sender domain before opening it.” This if-then structure ties the new behavior to a concrete trigger, which research shows significantly outperforms vague intentions like “I’ll be more careful with email.” Add a timeline and an accountability partner to complete the commitment.
Session-Level Adaptations
The protocol structure stays the same across all session types. What changes is the depth and vocabulary:
| Element | Beginner | Intermediate | Advanced |
|---|---|---|---|
| Event Recall | Focus on story moments (“when the alert fired…”) | Focus on decision points and trade-offs | Focus on process breakdowns and systemic gaps |
| Role Reflection | “What surprised you about your role?” | “How did your role’s priorities conflict with others?” | “Where did role boundaries create blind spots?” |
| Gap Analysis | “What would you do differently?” | “What process would you change?” | “What systemic issue would you escalate?” |
| Real-World Transfer | Awareness-level (“now I know what phishing looks like”) | Procedural (“our runbook is missing this step”) | Strategic (“our detection capability has this blind spot”) |
| Commitment | Personal habit (“I’ll verify URLs before clicking”) | Team process (“I’ll update our escalation checklist”) | Organizational (“I’ll propose a tabletop exercise cadence to leadership”) |
Facilitator Toolkit: Managing Difficult Debrief Moments
Time-Pressured Groups
Signal: Players checking watches, fidgeting, mentioning next meetings.
Tactics:
- Switch to the Short Format immediately – a 5-minute debrief with commitments beats a rushed 20-minute one.
- “I can see we’re tight on time. Let’s do the most important part: each person, one sentence – what’s the one thing you’re taking away from this?”
- Skip Phases 2-3 and go straight from Event Recall to Commitment.
Disengaged or Silent Groups
Signal: One-word answers, no eye contact, phone checking.
Tactics:
- Use pair-and-share instead of group discussion: “Turn to the person next to you and spend 60 seconds each on what surprised you.” Pairs are less intimidating than full-group sharing.
- Ask about the game mechanics, not learning: “What was the most interesting move someone made?” People engage with narrative before they engage with reflection.
- Use the Commitment Card as a silent writing exercise: “Take 2 minutes to fill this out. No need to share unless you want to.” Writing removes social pressure.
Groups That Want to Argue Technical Details
Signal: Debate over whether a specific tool would work, what the “right” forensic approach is.
Tactics:
- Acknowledge the expertise: “You clearly know this domain well.”
- Redirect to process: “Assume the technical approach works – what happens next in the response? Who needs to know, and when?”
- Use the executive brief redirect: “If you had 30 seconds to brief someone non-technical on what just happened, what would you say?”
Groups That Treat It as “Just a Game”
Signal: “Well, in real life we’d have better tools” or “This wouldn’t happen at our company.”
Tactics:
- Validate, then redirect: “You’re right that real environments differ. But the coordination challenges – unclear escalation paths, incomplete information, time pressure – those are universal. Where have you seen those in your actual work?”
- Ask about surprises: “Was there anything in the scenario that was more realistic than you expected?”
- Focus on the team dynamics, not the technical scenario: “Forget the Malmon for a second. How did your team handle disagreement under time pressure? Does that happen in real incidents too?”
One Person Dominating
Tactics:
- Enforce the rotation structure from Phase 2 – everyone speaks once before anyone speaks twice.
- Direct questions to specific people: “[Name], we haven’t heard your perspective yet – what did you notice from your role?”
- Use the written Commitment Card to ensure every voice is captured, even if not spoken.
Commitment Card Template
Print or display this for each player during Phase 5 (full format) or Step 3 (short format):
- STOP: One thing I will stop doing: ____________________
- START: One thing I will start doing: ____________________
- CONTINUE: One thing that worked that I will keep doing: ____________________
- My implementation intention: When ____________________, I will ____________________.
- Timeline: ______________ | Accountability partner: ______________
Measuring Debrief Effectiveness
Over multiple sessions, track whether the debrief is working:
| Signal | What It Means |
|---|---|
| Players name specific moments, not vague feelings | Event Recall is working |
| Different roles surface different perspectives | Role Reflection is working |
| Discussion focuses on process gaps, not blame | Gap Analysis is working |
| Commitments are specific and time-bound | The protocol is producing behavioral change |
| Players reference previous session commitments | Learning is persisting between sessions |
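If you track these signals across sessions, a lightweight record makes the "specific and time-bound" check concrete: implementation intentions follow the "When [situation], I will [action]" shape, which a rough string check can flag. This is a minimal sketch under stated assumptions; the class and field names are hypothetical, not part of any M&M tooling.

```python
from dataclasses import dataclass, field

@dataclass
class DebriefRecord:
    """Illustrative per-session record of Phase 5 commitments."""
    session_date: str
    commitments: list[str] = field(default_factory=list)

    def specific_commitments(self) -> list[str]:
        # Crude proxy for the implementation-intention format:
        # "When <situation>, I will <action>".
        return [c for c in self.commitments
                if c.lower().startswith("when ") and "i will" in c.lower()]

rec = DebriefRecord("2025-01-15", [
    "When an unknown sender attaches a file, I will check the domain first.",
    "Communicate better.",
])
print(len(rec.specific_commitments()))  # 1 of 2 commitments is specific
```

Reviewing the ratio of specific to vague commitments over several sessions gives you a rough trend line for whether the protocol is producing behavioral change.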
Cross-References
- Closing Script – Run after this protocol for emotional wrap-up, MalDex documentation, and community connection
- IM Quick Start Guide – Debrief timing fits within the session structure outlined here
- Workshop Facilitator Guide – Role-based facilitation techniques that complement the debrief rotation
- Emergency Protocols – Session Debrief Enhancement section for sessions that had significant challenges