Episode 33: Designing and Selecting Effective Information Security Metrics

Welcome to The Bare Metal Cyber CISM Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
Security metrics exist to provide measurable evidence that controls are functioning and the overall program is delivering on its intended goals. They give leadership and governance bodies a clear, data-backed foundation for decision-making, moving beyond assumptions or anecdotes. When designed properly, metrics can demonstrate how security efforts are aligned with broader business objectives, showing that risk mitigation is directly supporting operational resilience and regulatory compliance. They also help track patterns over time, identifying anomalies or areas of improvement that support continuous program evolution. By translating complex risks into clear, quantitative indicators, metrics make it easier to communicate challenges and achievements to stakeholders who may not have a technical background.
To be useful, a security metric must be relevant to the organization’s strategic priorities or regulatory obligations, ensuring that the data collected actually supports oversight and progress. It must also be quantifiable, expressed in numeric terms or on a clearly defined scale so that it can be consistently interpreted. A good metric leads to action—whether that means adjusting a control, reallocating resources, or launching an investigation. For metrics to show meaningful trends, they must be measured the same way over time, using a repeatable process that removes subjectivity or variability. Finally, stakeholders at all levels must be able to understand what each metric represents and how it relates to their responsibilities, without requiring expert interpretation.
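For study purposes, the five quality criteria above can be sketched as a simple checklist. This is an illustrative aid only; the metric names and the `evaluate_metric` helper are assumptions, not anything defined in the episode.

```python
# Hypothetical sketch: scoring a candidate metric against the five
# quality criteria described above. All names are illustrative.
CRITERIA = ["relevant", "quantifiable", "actionable", "repeatable", "understandable"]

def evaluate_metric(candidate: dict) -> list[str]:
    """Return the list of criteria a candidate metric fails to meet."""
    return [c for c in CRITERIA if not candidate.get(c, False)]

patch_latency = {
    "name": "Mean days to apply critical patches",
    "relevant": True, "quantifiable": True, "actionable": True,
    "repeatable": True, "understandable": True,
}
vague_metric = {"name": "Overall security goodness", "quantifiable": False}

print(evaluate_metric(patch_latency))  # [] -- passes every criterion
print(evaluate_metric(vague_metric))   # fails all five criteria
```

A metric that fails any criterion is a candidate for redesign or retirement before it enters the reporting portfolio.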
Different types of metrics are used to reflect different aspects of a security program’s operation and maturity. Operational metrics monitor daily activity, such as the number of incidents handled, how quickly patches are applied, or how many users completed training. Risk metrics provide insight into the organization’s exposure, the current treatment status of identified risks, and the presence of any control deficiencies. Compliance metrics measure adherence to internal policy requirements or external regulatory standards, such as audit completion rates or control testing results. Maturity metrics focus on how deeply and broadly controls have been implemented, helping assess whether key areas are fully operational or still under development. Business impact metrics go a step further by connecting security outcomes—such as system uptime or breach frequency—to operational performance indicators.
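The taxonomy above can be captured as a small lookup structure for review. The example metrics under each category are drawn from the episode where possible, but the exact entries are illustrative assumptions.

```python
# Hedged sketch: the five metric categories described above, each with
# a few example metrics (examples are illustrative, not a canonical list).
METRIC_TYPES = {
    "operational": ["incidents handled", "patch application speed", "training completion rate"],
    "risk": ["current exposure", "risk treatment status", "control deficiencies"],
    "compliance": ["audit completion rate", "control testing results"],
    "maturity": ["percentage of controls fully operational"],
    "business_impact": ["system uptime", "breach frequency"],
}

def examples_for(metric_type: str) -> list[str]:
    """Look up example metrics for a given category; empty if unknown."""
    return METRIC_TYPES.get(metric_type, [])

print(examples_for("compliance"))  # ['audit completion rate', 'control testing results']
```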
Metrics must be tightly aligned with the overall objectives of the information security program. This alignment begins by mapping each metric to specific strategic goals, risk tolerance levels, and compliance mandates, ensuring the data gathered serves a defined purpose. Priority should be given to metrics that support decision-making at the domain level, whether in governance, risk, program management, or incident response. Collecting metrics simply because the data is available leads to wasted effort and distracts from meaningful insights. Using a top-down approach ensures that metrics are selected based on business alignment rather than ease of collection. For each metric, there should be a clearly identified owner who is responsible for maintaining its integrity and reporting its status at appropriate intervals.
Designing a metrics framework begins with defining exactly what will be measured, how frequently the data will be collected, and which systems or platforms will provide the source information. Once those elements are determined, thresholds must be established to define acceptable performance, and baselines must be set for comparison. Responsibility for gathering, validating, and reporting each metric must be assigned to specific roles or teams to ensure accountability. Every metric should include documentation outlining the definitions used, the methods of calculation, and any assumptions or limitations that affect its interpretation. To keep the framework effective over time, a formal governance process must be in place to review and update the metric set in response to evolving risks, new controls, or changes in business objectives.
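The framework elements just described (what is measured, how often, from which source, against which threshold and baseline, owned by whom) can be sketched as a single metric-definition record. This is a minimal illustration; the field names, the example owner role, and the threshold values are all assumptions.

```python
from dataclasses import dataclass

# Illustrative metric definition combining the framework elements above:
# source system, collection frequency, owner, baseline, and threshold.
@dataclass
class MetricDefinition:
    name: str
    source: str            # system or platform providing the raw data
    frequency: str         # e.g. "monthly"
    owner: str             # role accountable for integrity and reporting
    baseline: float        # reference value for comparison
    threshold: float       # boundary of acceptable performance
    higher_is_better: bool = False

    def status(self, observed: float) -> str:
        """Compare an observed value against the acceptable threshold."""
        ok = (observed >= self.threshold if self.higher_is_better
              else observed <= self.threshold)
        return "within tolerance" if ok else "breach - investigate"

patch_metric = MetricDefinition(
    name="Mean days to patch critical vulnerabilities",
    source="vulnerability scanner", frequency="monthly",
    owner="Vulnerability Management Lead",
    baseline=12.0, threshold=14.0)

print(patch_metric.status(9.5))   # within tolerance
print(patch_metric.status(21.0))  # breach - investigate
```

Documenting each metric this way makes the definition, calculation assumptions, and accountable owner explicit, which is what the governance review process then audits and updates.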
Automation plays a critical role in the collection of security metrics. Integrations with security information and event management tools, ticketing systems, or governance platforms can reduce manual labor and improve accuracy. However, every data source must be validated to ensure that the data it provides is complete, accurate, and collected with integrity. Once gathered, metric data should be stored in a centralized system or dashboard that allows for visualization and analysis across the organization. Where data is collected from different departments or business units, normalization is required to allow fair comparisons and trend analysis. Because some metrics may involve sensitive operational or risk-related data, access controls must be in place to ensure only authorized individuals can view or manipulate the information.
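The normalization step mentioned above can be illustrated with a quick example: raw incident counts from business units of different sizes are converted to a common rate so comparisons are fair. The unit names and figures are invented for illustration.

```python
# Hedged sketch: normalizing incident counts from differently sized
# business units into incidents per 100 employees, so that trend
# comparisons across units are fair. All figures are hypothetical.
units = [
    {"unit": "Retail",    "incidents": 40, "headcount": 2000},
    {"unit": "Corporate", "incidents": 12, "headcount": 300},
]

def normalized_rate(u: dict) -> float:
    """Incidents per 100 employees, rounded for reporting."""
    return round(u["incidents"] / u["headcount"] * 100, 2)

for u in units:
    print(u["unit"], normalized_rate(u))
# Retail 2.0, Corporate 4.0 -- raw counts alone would have ranked Retail worse.
```

Note how normalization reverses the naive ranking: the unit with fewer total incidents actually has the higher rate of incidents per employee.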
How metrics are reported depends on who the audience is. Reports intended for operational teams will likely focus on raw counts, trends, and exceptions that inform daily decisions. Management-level reports may highlight performance against targets, emerging risks, and areas that require additional attention. Executive-level summaries focus on strategic alignment, program impact, and key risk indicators. Visuals like bar charts, heat maps, or trend lines are useful tools to simplify complex data and highlight key findings. Each report should call out major insights, anomalies, and any recommended follow-up actions. The frequency of reporting must also match the needs of the audience—some metrics may require monthly updates, others quarterly, and some only in response to specific events or audits.
Metric programs must be regularly evaluated to ensure they continue to serve their intended purpose. A good starting point is determining whether the data actually supports decision-making and enhances program visibility. Metrics that are no longer used or fail to drive action should be retired to reduce reporting noise. Stakeholder feedback is an important input in this process—if users find certain metrics confusing or irrelevant, adjustments should be made. Benchmarking against industry standards or similar organizations helps validate the organization’s metric portfolio and identify areas for enhancement. Metrics should also be tested against real incidents or audit results to confirm that they accurately reflect security posture and operational performance under real-world conditions.
Security metric programs face a range of common challenges. One of the most significant is over-collection—gathering excessive data without a clear plan for its use, which burdens staff and dilutes focus. Another is the lack of standardized definitions, leading to inconsistent interpretation across different parts of the organization. Poor data quality, whether due to incomplete records, system errors, or human input mistakes, can undermine the credibility of the entire metric program. It is also often difficult to connect technical metrics—such as system uptime or patching delays—to broader business value in a way that leadership understands. Finally, if metric outputs are not directly tied to organizational priorities or do not support specific decisions, they risk becoming irrelevant or ignored.
To keep a metrics-driven program effective over time, it must be sustained through intentional governance and cultural reinforcement. An annual review of the entire metric portfolio should be conducted to confirm alignment with evolving risks, regulatory demands, and business strategies. Metrics must be embedded into routine program reviews, performance evaluations, and planning discussions so that they remain top of mind for stakeholders. Training should be provided to help staff understand how to interpret metrics and apply them in their daily work. Where possible, metrics should be connected to ongoing improvement projects, budget planning, or resource allocation decisions. Above all, leadership should promote a culture that values transparency, takes action based on data, and treats metrics not just as reports, but as operational tools for managing and improving security outcomes.
Thanks for joining us for this episode of The Bare Metal Cyber CISM Prepcast. For more episodes, tools, and study support, visit us at Bare Metal Cyber dot com.
