Episode 32: Developing and Using Information Security Program Metrics

Welcome to The Bare Metal Cyber CISM Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
Security programs depend on measurement. Without metrics, decisions are based on assumptions, control performance remains anecdotal, and improvements are difficult to justify. Effective metrics provide measurable evidence that shows how well the security program is functioning. They demonstrate whether controls are working, whether risks are decreasing, and whether security initiatives are aligned with organizational priorities. Metrics bridge the gap between policy and action, between security outcomes and business expectations.
For leadership and governance bodies, metrics support informed decision-making. They provide visibility into how security supports enterprise goals and help translate complex risks into understandable and actionable data. Metrics are also critical for tracking trends, identifying anomalies, and driving continuous improvement. A spike in incidents, a drop in patch rates, or a trend in control failures may reveal emerging risks or resource constraints. With the right metrics, security leaders can speak the language of executives—quantitative, evidence-based, and results-focused.
Effective metrics have five core characteristics. First, they must be relevant. Every metric should align with a business objective, regulatory requirement, or risk management goal. Irrelevant metrics consume time and attention without delivering value. Second, metrics must be quantifiable. If you can’t measure it in numbers or on a defined scale, it can’t be tracked, compared, or analyzed reliably.
Third, metrics must be actionable. Good metrics drive decisions. They show where action is needed and help prioritize that action based on urgency or impact. Fourth, they must be repeatable. A metric that changes in definition every month cannot support trend analysis. Finally, they must be understandable. Stakeholders must be able to interpret what the metric means and why it matters. This clarity is especially important when communicating with non-technical audiences like boards and executives.
Security metrics fall into several categories, each serving different purposes. Operational metrics track routine activities and control tasks. Examples include incident volume, patch completion rates, or response time to alerts. These metrics are useful for day-to-day management and process improvement. Risk metrics reflect exposure. They may include the number of open critical vulnerabilities, residual risk scores, or the percentage of high-risk assets with outdated controls.
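To make one of these concrete: a metric like patch completion rate is just a simple, repeatable calculation. Here is a minimal Python sketch, with hypothetical asset records standing in for real scanner data:

    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        patched: bool  # True if all required patches are applied

    def patch_completion_rate(assets: list[Asset]) -> float:
        """Return the percentage of assets fully patched (0-100)."""
        if not assets:
            return 100.0  # nothing in scope means nothing outstanding
        patched = sum(1 for a in assets if a.patched)
        return 100.0 * patched / len(assets)

    # Example: three of four servers patched -> 75.0
    fleet = [Asset("web-01", True), Asset("web-02", True),
             Asset("db-01", False), Asset("app-01", True)]
    print(f"Patch completion: {patch_completion_rate(fleet):.1f}%")

The point is not the arithmetic; it is that the calculation is explicit and repeatable, so the same number means the same thing every reporting period.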
Compliance metrics monitor adherence to policies, standards, and regulatory obligations. These may include audit pass rates, control coverage levels, or training completion percentages. Maturity metrics assess how deeply and broadly controls are implemented across systems or business units. They support long-term planning and benchmark progress. Business impact metrics show how security affects operational outcomes—such as downtime avoided, fraud reduced, or regulatory penalties prevented. These metrics connect security to core organizational performance.
Metrics must align with the program’s strategic goals. Start by mapping each metric to a specific objective, risk, or compliance need. If the business values uptime, track how security supports availability. If leadership is focused on regulatory compliance, prioritize metrics that demonstrate control assurance. Use a top-down approach. Begin with enterprise goals, then define supporting risk and control objectives, and finally identify metrics that reflect progress.
Each metric must have an owner—someone responsible for data collection, validation, and reporting. This ownership ensures accountability. Avoid collecting metrics that do not support decisions or reporting obligations. If no one uses the data, or if it doesn’t prompt action, the metric should be reevaluated. Every metric must justify its place in the portfolio by supporting visibility, communication, or strategic alignment.
Designing a metrics framework involves more than selecting a few numbers. Define exactly what will be measured, how it will be measured, and how often. Identify the data source for each metric—whether it’s a SIEM, a ticketing system, or a GRC platform. Set thresholds, baselines, and targets. What value is acceptable? What value triggers escalation? What value represents success?
Document the data definitions, calculation methods, and assumptions behind each metric. This documentation supports consistency, especially across business units. For example, define whether a “closed incident” means the ticket is closed or the root cause is resolved. Define roles for collection, analysis, and reporting. Governance teams should review the metrics portfolio regularly to ensure it remains current, complete, and strategically aligned.
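If the program captures those definitions in a machine-readable form, a single metric’s documentation might look something like this sketch. Every field name and value here is illustrative, not a prescribed schema:

    from dataclasses import dataclass

    @dataclass
    class MetricDefinition:
        name: str            # what is measured
        data_source: str     # where the raw data comes from
        calculation: str     # documented method, so results are repeatable
        frequency: str       # how often it is collected
        baseline: float      # current or historical reference value
        target: float        # value that represents success
        escalation: float    # value that triggers escalation
        owner: str           # who collects, validates, and reports it

    patch_metric = MetricDefinition(
        name="Critical patch completion rate",
        data_source="Vulnerability scanner export",
        calculation="Patched critical assets / total critical assets * 100",
        frequency="Weekly",
        baseline=88.0,
        target=95.0,
        escalation=80.0,  # below this, escalate to the steering group
        owner="Vulnerability Management Lead",
    )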
Tool integration is essential for efficient and accurate metric collection. Use automation where possible. SIEM systems, IT service management platforms, and vulnerability scanners can generate real-time metrics. GRC platforms can consolidate data, apply thresholds, and feed dashboards. Validate data for accuracy. Incomplete or incorrect data undermines credibility and wastes effort.
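As a hedged sketch of automated collection, the fragment below pulls one metric from a hypothetical GRC endpoint and validates it before reporting. The URL and response format are assumptions, since every platform exposes its own API:

    import json
    from urllib.request import urlopen

    # Hypothetical endpoint; real SIEM or GRC platforms expose their own APIs.
    METRICS_URL = "https://grc.example.com/api/metrics/open-critical-vulns"

    def fetch_metric(url: str) -> int | None:
        """Pull a metric value and validate it before it reaches a dashboard."""
        with urlopen(url) as resp:
            payload = json.load(resp)
        value = payload.get("value")
        # Validate: reject missing, non-numeric, or impossible values
        if not isinstance(value, (int, float)) or value < 0:
            return None  # flag for manual review instead of reporting bad data
        return int(value)

Returning None rather than a guess is the design choice that matters: incomplete or incorrect data should be caught at collection time, not discovered in front of the board.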
Establish a central repository for metrics. Dashboards provide visualization and support regular reviews. Normalize data across departments. This allows for apples-to-apples comparisons between units or systems. Protect metric data. Some metrics—such as user behavior analytics or audit logs—may be sensitive. Apply access controls based on roles and reporting needs.
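Normalization is easy to illustrate. In the sketch below, the department figures are invented, but they show why a raw incident count misleads when units differ in size:

    # Raw incident counts mislead when departments differ in size.
    # Normalizing per 100 endpoints gives an apples-to-apples rate.
    departments = {
        "Finance":     {"incidents": 12, "endpoints": 300},
        "Engineering": {"incidents": 25, "endpoints": 1200},
    }

    for name, d in departments.items():
        rate = 100 * d["incidents"] / d["endpoints"]
        print(f"{name}: {rate:.1f} incidents per 100 endpoints")

Engineering logs more incidents in absolute terms, but at 2.1 per 100 endpoints it is actually doing better than Finance at 4.0 per 100.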
Reporting must be tailored to the audience. Operational teams need detail. They benefit from metrics that show trends in alerts, patch status, or failed logins. Reports should include action steps, status updates, and deadlines. Use charts, graphs, and trend lines to make reports easy to interpret quickly. Executive and board audiences require summaries. Focus on strategic risks, business impact, and assurance.
Use high-level visuals such as heat maps or scorecards. Highlight whether performance is within risk appetite thresholds. Explain spikes or variances. Was the increase in alerts due to a threat actor or a new detection rule? Always provide recommended actions. Reports should lead to decisions, not just display data. Set the right cadence for reporting. Some metrics are reviewed monthly, others quarterly, and some only during annual board meetings.
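Mapping a raw value onto a scorecard is a simple thresholding step. This sketch assumes higher values are better and uses illustrative risk-appetite thresholds:

    def scorecard_status(value: float, target: float, escalation: float) -> str:
        """Map a metric value to a scorecard color against appetite thresholds.
        Assumes higher is better, e.g., patch completion percentage."""
        if value >= target:
            return "GREEN"   # within risk appetite
        if value >= escalation:
            return "AMBER"   # drifting; watch and act
        return "RED"         # outside appetite; escalate with recommended actions

    print(scorecard_status(91.0, target=95.0, escalation=80.0))  # AMBER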
Evaluating your metrics portfolio is just as important as using it. Ask whether the metrics actually support better decisions. Do they improve visibility? Do they reduce uncertainty? Retire metrics that are no longer useful. If they don’t drive action, they don’t belong in the dashboard. Solicit feedback from users and stakeholders. Are the metrics clear? Are they used?
Benchmarking is another tool. Compare your metrics to industry standards or peer organizations. This helps validate whether your thresholds are reasonable or whether your metrics portfolio is missing key indicators. Use audits, incident reviews, and root cause analyses to validate the relevance of your metrics. If controls failed but no metrics flagged the issue, that gap needs to be closed.
Security metrics programs also face challenges. Over-collection is common. Many organizations track more data than they need, creating noise. This clutters dashboards and distracts from key signals. Another challenge is inconsistent definitions. Different teams may define “incident,” “resolution,” or “critical” differently, making comparisons meaningless.
Data quality is a perennial issue. Metrics built on flawed data mislead rather than inform. Difficulty in connecting technical metrics to business outcomes is another barrier. Security leaders must learn to tell the story. They must explain how a drop in phishing resilience affects fraud risk or how a delay in patching increases exposure. Lastly, metrics may misalign with strategic goals. If leadership is focused on resilience and risk reduction, technical metrics must connect to those themes.
To sustain a metrics-driven program, treat metrics as part of the governance cycle. Review your metrics portfolio annually or after significant business or regulatory changes. Embed metrics into program reviews, strategy sessions, and budgeting processes. Train staff to interpret metrics. Help them connect the numbers to decisions.
Link metrics to improvement plans. If a metric reveals weak control performance, document how it will be addressed. Promote a metrics culture. Use transparency, visibility, and leadership engagement to create a culture where metrics are not feared or ignored but valued and used. Metrics must be woven into how the security program operates, communicates, and evolves.
Strong metrics programs allow security leaders to move from reactive firefighting to strategic risk management. They empower data-driven planning, strengthen governance, and build credibility across the organization. For CISM professionals, mastering the development, selection, and communication of metrics is essential to leading a successful, modern security program.
Thanks for joining us for this episode of The Bare Metal Cyber CISM Prepcast. For more episodes, tools, and study support, visit us at Bare Metal Cyber dot com.
