This version includes scenarios addressing AI voice cloning, deepfake impersonation, seasonal attack patterns, and platform-specific attack vectors emerging in 2025.
Social engineers succeed not by exploiting technical vulnerabilities, but by manipulating our natural human emotions and instincts.
This assessment will help you recognize Emotional Indicators of Compromise (EIOCs)—the psychological states that attackers create to bypass your rational defenses.
You'll learn about six critical emotional vulnerabilities and develop practical stewardship skills to protect yourself and your organization.
AI voice cloning can now replicate tone, cadence, and verbal habits from just a few seconds of sample audio. Attackers deliberately impersonate the colleague who is usually unflappable, because the lure exploits trust in hyper-competence rather than signs of exhaustion.
You receive a voice call from "Maya," your famously composed project manager. Her voice is flawless—tone, cadence, even her dry humor. She says:
"I'm in a client meeting and can't access the shared drive. Can you send me the updated contract PDF? Just email it to this address—it's the client's secure portal."
Tax season scams exploit guilt, urgency, and fear of penalties. Modern variants use correct employer branding scraped from LinkedIn and AI-generated "proof" documents.
During tax season, you receive an email from "Payroll Support" saying:
"We accidentally over-refunded your tax adjustment. Please return the excess amount today to avoid IRS penalties."
The email includes a professional PDF with your name, employee ID, and what appears to be your W-2 information.
Healthcare coverage scams exploit fear of losing protection and responsibility for dependents. Modern variants use cloned benefits portals and reference real enrollment deadlines.
During open enrollment, you receive a text message:
"URGENT: Your dependent coverage is incomplete. Submit your SSN and dependent info to finalize enrollment before the deadline. Failure to complete may result in coverage gaps for your family."
The link leads to a benefits portal that looks almost identical to your company's.
Tech industry attacks exploit professional responsibility and fear of being the bottleneck. Attackers use correct internal jargon and create artificial urgency around security reviews.
You receive a Slack DM from someone appearing to be from "Security Engineering":
"We detected anomalous model behavior. Need your API logs ASAP for the safety review. Upload here: [external link]. This is time-sensitive—the safety team is standing by."
The message uses your team's exact terminology and references a real project name.
Attackers exploit curiosity and fear of missing out (FOMO) by posing as the sender of AI-generated meeting summaries. File names mimic legitimate productivity tools, and the casual tone bypasses suspicion.
You receive a Teams message from a colleague:
"Here's the AI-generated summary of yesterday's leadership meeting. Let me know if anything looks off."
Attachment: MeetingSummary_AutoGen.docx
The message uses correct internal terminology—but you realize this colleague wasn't in that meeting.
AI-generated "parent voice" emails exploit teacher empathy and duty of care. Attackers scrape class events from newsletters to build context and bypass parent portal verification.
A teacher receives an email from a Gmail address:
"I'm worried about my daughter Sarah's behavior lately, especially after that field trip last week. Can you send me her last two assignments and any notes you have? I want to understand what's going on before our conference."
The email references a real class event and uses an emotionally intimate tone.
This assessment measures your awareness of emotional manipulation patterns that social engineers commonly exploit. Your readiness level reflects how well you recognize and respond to these tactics.