
Security metrics and KPIs: making security measurable

Security metrics are proof that investments in IT security are effective. This article explains which KPIs are relevant for operations (MTTD, MTTR, FP rate), vulnerability management (patch compliance, MTTR), awareness training (phishing click-through rate), compliance (audit compliance rate), and strategic board reports—complete with specific target values and calculation formulas.


"You can't manage what you can't measure" is especially true for IT security. Without metrics, no one knows whether the security program is effective, whether investments are justified, or whether risk is increasing or decreasing. This article provides a comprehensive KPI framework for various security areas.

Why Metrics Fail (and How to Do It Better)

Common mistakes with security metrics:

Mistake 1: Activity metrics instead of outcome metrics
  Bad:  "We patched 150 vulnerabilities this month"
  Good: "Critical patch compliance rose from 73% to 91%"

Mistake 2: Too many metrics
  → 50 KPIs → no one understands what’s important
  → Solution: 5–7 strategic KPIs + 15–20 operational metrics

Mistake 3: Metrics without a baseline and target
  Bad:  "MTTD: 48 hours"
  Good: "MTTD: 48h (Baseline: 96h, Target: <24h, Trend: ↓)"

Mistake 4: Wrong audience
  → Showing technical metrics to the board → Eye-rolling
  → Board metrics: Risk, Business Impact, EUR
  → Ops metrics: MTTD, FP rate, incidents per week

Good metrics framework:
  → Define the goal (what needs to improve?)
  → Establish a baseline (measure the initial state)
  → Set a target value (realistic and ambitious)
  → Determine measurement method (how is it measured, by whom, how often?)
  → Reporting cadence: daily/weekly/monthly/quarterly
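
The goal → baseline → target → method pattern above can be sketched as a small data structure. This is an illustrative Python sketch, not any specific tool's API; all names and values are made up.

```python
from dataclasses import dataclass

@dataclass
class SecurityKpi:
    """One KPI following the goal -> baseline -> target -> method pattern."""
    name: str
    goal: str        # what should improve
    baseline: float  # measured initial state
    target: float    # realistic but ambitious target value
    unit: str        # e.g. "h", "%"
    cadence: str     # daily / weekly / monthly / quarterly
    current: float

    def status(self) -> str:
        """Report the current value with baseline, target, and trend direction."""
        # "Improving" if the current value is closer to the target than the baseline was
        trend = ("improving"
                 if abs(self.current - self.target) < abs(self.baseline - self.target)
                 else "flat/worse")
        return (f"{self.name}: {self.current}{self.unit} "
                f"(baseline: {self.baseline}{self.unit}, "
                f"target: {self.target}{self.unit}, trend: {trend})")

mttd = SecurityKpi("MTTD", "detect faster", baseline=96, target=24,
                   unit="h", cadence="weekly", current=48)
print(mttd.status())
```

This produces exactly the "good" reporting style from Mistake 3: value plus baseline, target, and trend in one line.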

Security Operations KPIs

Detection & Response Metrics:

MTTD (Mean Time to Detect):
  Definition: Time from the start of the attack to detection
  Formula:     Σ(Detection time - Start of attack) / Number of incidents
  Benchmark:  IBM Cost of a Data Breach Report 2023: 204 days (!)
  SOC Target:   < 24 hours for critical incidents
  Best-in-class: < 1 hour
  Measurement:    SIEM detection timestamp vs. attack start from forensic analysis

MTTR (Mean Time to Respond/Contain):
  Definition: Time from detection to containment
  Formula:     Σ(Containment time - Detection time) / Number of incidents
  Benchmark:  Industry: 73 days to containment (IBM Cost of a Data Breach Report 2023)
  SOC Goal:   < 4 hours for critical incidents
  Measurement:    Ticket system: "Incident detected" → "Containment confirmed"

MTTF (Mean Time to Fully Remediate):
  Definition: Time from detection to full remediation
  Formula:     Σ(Close time - Detection time) / Number of incidents
  Target:       Critical < 24h, High < 7 days, Medium < 30 days
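
The MTTD/MTTR/MTTF formulas above can be computed directly from incident timestamps. A minimal Python sketch; the incident data is illustrative and stands in for the ticket-system milestones described above:

```python
from datetime import datetime
from statistics import mean

# One tuple per incident: (attack start, detected, contained, closed).
# Illustrative data, not real incidents.
incidents = [
    (datetime(2026, 1, 3, 8, 0), datetime(2026, 1, 3, 20, 0),
     datetime(2026, 1, 3, 23, 0), datetime(2026, 1, 4, 18, 0)),
    (datetime(2026, 1, 10, 2, 0), datetime(2026, 1, 11, 2, 0),
     datetime(2026, 1, 11, 6, 0), datetime(2026, 1, 12, 2, 0)),
]

def hours(delta):
    return delta.total_seconds() / 3600

# MTTD = mean(detection - attack start), MTTR = mean(containment - detection),
# MTTF = mean(close - detection), matching the formulas above
mttd = mean(hours(det - start) for start, det, cont, close in incidents)
mttr = mean(hours(cont - det) for start, det, cont, close in incidents)
mttf = mean(hours(close - det) for start, det, cont, close in incidents)

print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h, MTTF: {mttf:.1f}h")
```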

Alert Quality Score:
  Definition: Percentage of true positives among all alerts
  Formula:     True Positives / (True Positives + False Positives) × 100
  Target:       > 70% (< 70%: Alert Fatigue!)
  Why:      Too many FPs → Analysts ignore alerts → blind
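
The alert quality formula is simply precision over triaged alerts. A short sketch with illustrative counts:

```python
def alert_quality(true_positives: int, false_positives: int) -> float:
    """Share of true positives among all triaged alerts, in percent."""
    total = true_positives + false_positives
    return 100.0 * true_positives / total if total else 0.0

# 68 true positives out of 100 triaged alerts -> below the 70% target
score = alert_quality(68, 32)
print(f"Alert quality: {score:.0f}% ({'OK' if score > 70 else 'alert fatigue risk'})")
```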

Incident Volume:
  Definition: Number of incidents per period by severity
  Trend is more important than absolute number!
  Critical: < 5/month (Target)
  High:     < 20/month
  Medium:   Baseline × 1.2 (sharp increase = problem)

---

Table for SOC Reporting:

KPI                 Current   Prev. Month   Target   Trend
MTTD (critical)     18h       24h           <12h     ↓ improving
MTTR (critical)     3.5h      4.8h          <4h      ↓ improving
Alert Quality       68%       61%           >70%     ↑ improving
FP Rate             32%       39%           <30%     ↓ improving
Incidents (Crit.)   2         4             <5       ↓ improving
SIEM Coverage       87%       82%           >95%     ↑ improving

Vulnerability Management KPIs

Patch & Vulnerability Metrics:

Patch Compliance Rate:
  Definition: % of systems with patch installed within SLA
  Formula:     Patched systems / All affected systems × 100
  Target:       Critical: >99%, High: >95%, Medium: >90%
  SLA:        Critical patches: 24h (external-facing), 72h (internal systems)
  Measurement:    Vulnerability scanner post-scan comparison

Mean Time to Remediate (Vulnerability):
  Definition: Average time from discovery to patch
  Formula:     Σ(Patch date - Discovery date) / Patches
  Target:       Critical: < 3 days, High: < 14 days, Medium: < 45 days

Vulnerability Discovery Rate:
  Definition: Time from CVE publication to detection in the system
  Target:       < 48 hours (for Critical CVEs)
  Measurement:     NVD publication date vs. scanner result

Overdue Findings:
  Definition: Number of findings outside SLA
  Target:       0 Critical/High outside SLA
  Escalation: Any violation → automatic CISO notification
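
Patch compliance and the overdue-findings check follow directly from the formulas and SLAs above. A Python sketch; the findings data is illustrative and the SLA table mirrors the targets stated above:

```python
from datetime import date

# Remediation SLA in days per severity (from the targets above)
SLA_DAYS = {"critical": 3, "high": 14, "medium": 45}

# Illustrative findings; "patched": None means still open
findings = [
    {"id": "F-101", "severity": "critical",
     "discovered": date(2026, 2, 1), "patched": date(2026, 2, 2)},
    {"id": "F-102", "severity": "critical",
     "discovered": date(2026, 2, 1), "patched": None},
    {"id": "F-103", "severity": "high",
     "discovered": date(2026, 1, 20), "patched": date(2026, 1, 30)},
]

def patch_compliance(findings, severity):
    """% of findings of one severity patched within SLA."""
    relevant = [f for f in findings if f["severity"] == severity]
    within = [f for f in relevant
              if f["patched"]
              and (f["patched"] - f["discovered"]).days <= SLA_DAYS[severity]]
    return 100.0 * len(within) / len(relevant) if relevant else 100.0

def overdue(findings, today):
    """Findings still open past their SLA -> candidates for CISO escalation."""
    return [f["id"] for f in findings
            if f["patched"] is None
            and (today - f["discovered"]).days > SLA_DAYS[f["severity"]]]

print(patch_compliance(findings, "critical"))  # 50.0
print(overdue(findings, date(2026, 2, 10)))    # ['F-102']
```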

Attack Surface Trend:
  Definition: Trend of open findings over time
  Visualization: Burn-down chart by severity
  Interpretation:
    Rising:  New vulnerabilities > fixes (alarming)
    Stable: Steady state (acceptable)
    Declining:   Active reduction (optimal)

Scanner Coverage:
  Definition: % of known assets that are scanned
  Formula:     Scanned IPs / Total IPs × 100
  Target:       >95%
  Risk at <90%: Blind spots in the vulnerability program

---

Dashboard Example (Monthly):

Critical Vulnerabilities:
  Newly discovered:    12
  Resolved:         15
  Open:            8 (of which 2 have an approved exception)
  Compliance Rate: 97% (Target: >99%)

High Vulnerabilities:
  New:  45 | Fixed: 51 | Open: 23 | Compliance: 93%

Medium:
  New: 180 | Fixed: 145 | Open: 234 | Compliance: 88%

Security Awareness Metrics

Phishing Simulation KPIs:

Click-through rate:
  Definition: % of recipients who click on a phishing link
  Industry average: 17–25% (without training)
  Target after training: < 5%
  Best-in-class: < 2%
  Warning Sign: > 15% after 6 months of training

Report Rate:
  Definition: % of recipients who correctly report phishing
  Target: > 30% (a report counts even if the user also clicked)
  Why it matters: Reporting is more important than not clicking!
  Measurement: only genuine reports via the official channel count (not informal "what was that?" questions)

Time to Report:
  Definition: Time from phishing receipt to report
  Target: < 30 minutes
  Best-in-Class: < 5 minutes (real-time defense!)

Repeat Clicker Rate:
  Definition: % of users who click on phishing multiple times
  Target: < 3% after training
  Action: Repeat clickers → intensive training (1:1)
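
The phishing simulation KPIs above can all be derived from per-recipient results. A Python sketch with made-up data; each row records one simulation email (user, clicked, reported):

```python
from collections import Counter

# Illustrative simulation results across two campaigns
results = [
    ("alice", False, True), ("bob", True, False), ("carol", False, True),
    ("dave", False, False), ("bob", True, False), ("erin", False, True),
    ("alice", False, False), ("frank", True, False),
]

emails_sent = len(results)
click_rate = 100.0 * sum(clicked for _, clicked, _ in results) / emails_sent
report_rate = 100.0 * sum(reported for _, _, reported in results) / emails_sent

# Repeat clickers: users who clicked in more than one simulation
clicks_per_user = Counter(user for user, clicked, _ in results if clicked)
users = {user for user, _, _ in results}
repeat_rate = 100.0 * sum(n > 1 for n in clicks_per_user.values()) / len(users)

print(f"click {click_rate:.1f}%, report {report_rate:.1f}%, "
      f"repeat clickers {repeat_rate:.1f}%")
```

In this toy dataset, "bob" is a repeat clicker and would be flagged for 1:1 intensive training.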

---

Training Completion Metrics:

Mandatory Training Completion:
  Definition: % of employees who have completed the mandatory course
  Target: 100% (Compliance requirement!)
  Monthly report to management

Knowledge Retention:
  Definition: Test score 30/60/90 days after training
  Measurement: Follow-up quiz (automatically in LMS)
  Target: >80% retention after 30 days

Training Effectiveness (Phishing):
  Definition: Click-through rate before training vs. 3 months afterward
  Calculation: (Before - After) / Before × 100
  Example: 23% → 7% = 70% reduction → Training works!
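
The effectiveness calculation is a plain relative reduction, sketched below; the example numbers are the ones from above:

```python
def training_effectiveness(before_pct: float, after_pct: float) -> float:
    """Relative reduction of the click-through rate, in percent."""
    return (before_pct - after_pct) / before_pct * 100.0

# 23% click rate before training, 7% three months after -> ~70% reduction
print(round(training_effectiveness(23.0, 7.0)))  # 70
```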

Compliance and Audit Metrics

Compliance KPIs:

Control Effectiveness Rate:
  Definition: % of controls implemented in compliance with audit requirements
  Calculation: Implemented Controls / Total Controls × 100
  Target: >95% for critical controls

Open Audit Findings:
  Definition: Number of unresolved audit findings by severity
  Critical:   0 open for over 30 days (Mandatory!)
  High:       0 open for over 90 days
  Medium:     < 10 open

Remediation Rate:
  Definition: % of audit findings resolved on time
  Target: >90%

Risk Register Health:
  Definition: % of risks with a recent review (< 6 months old)
  Target: 100% (ISO 27001 requires regular review)

Policy Exceptions:
  Definition: Number of active policy exceptions
  Target: Declining trend
  Maximum: Exceptions > 6 months old → Escalation

Management and Board Reporting

Board-Level Security Metrics (non-technical!):

1. Security Risk Rating (traffic light system):
   RED:    Critical unpatched risk OR active incident
   YELLOW:   SLA violation OR compliance gap
   GREEN:   All KPIs within target range
   → Board members want: trend, not details

2. Cyber Insurance Readiness:
   → Are insurance requirements met?
   → Impact on premium: which measures reduce it?
   → Benchmark: where do we stand compared to the industry?

3. Cost of Security vs. Cost of Breach:
   Security budget:   150,000 EUR/year
   Average breach cost for SMEs: 3.5 million EUR (IBM Cost of a Data Breach Report 2024)
   ROI = (3,500,000 × probability reduction) / 150,000
   → Concrete business case for security investments

4. Regulatory Compliance Status:
   NIS2:    In implementation (75% of requirements met)
   ISO 27001: Certified, next audit: Q3 2026
   GDPR:   Reportable data breaches: none in the last 18 months

5. Incident Summary (Board-level language):
   "This month: 2 security incidents. Both contained within 4 hours.
    No data loss. No customer impact. Cause resolved."
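
The ROI calculation from point 3 can be sketched as follows; the 10% probability reduction used in the example is an illustrative assumption, not a source figure:

```python
def security_roi(avg_breach_cost: float, prob_reduction: float,
                 budget: float) -> float:
    """Expected-loss reduction divided by the security budget
    (the ratio from point 3 above)."""
    return (avg_breach_cost * prob_reduction) / budget

# 3.5M EUR average breach cost, 150k EUR/year budget,
# assumed 10% reduction in breach probability (illustrative)
print(round(security_roi(3_500_000, 0.10, 150_000), 2))  # 2.33
```

A ratio above 1 means the expected loss avoided exceeds the security budget, which is the business case the board cares about.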

---

Reporting Frequency:

Daily (Ops):      Alert Queue, open incidents
Weekly (CISO): Vulnerability trends, patch SLA
Monthly (IT Management): KPI dashboard, trend charts
Quarterly (Board): Risk rating, compliance, ROI


About the Author

Oskar Braun

Head of Information Security Consulting


Dipl.-Math. (WWU Münster) and doctoral candidate at the Promotionskolleg NRW (Hochschule Rhein-Waal), researching phishing awareness, behavioral security, and nudging in IT security. Responsible for building and maintaining ISMS, leads internal audits according to ISO/IEC 27001:2022, and serves as an external information security officer (ISB) in KRITIS sectors. Lecturer in Communication Security at Hochschule Rhein-Waal and NIS2 training lead at isits AG.

ISO 27001 Lead Auditor (IRCA), ISB (TÜV)
This article was last edited on 04.03.2026. Responsible: Oskar Braun, Head of Information Security Consulting at AWARE7 GmbH. License: CC BY 4.0 - free use with attribution: "AWARE7 GmbH, https://a7.de"
