Episode 36: Developing Engaging Information Security Awareness and Training Programs

Welcome to The Bare Metal Cyber CISM Prepcast. This series helps you prepare for the exam with focused explanations and practical context.
The primary purpose of control testing and evaluation is to confirm that security controls are implemented properly and function as intended. This verification process provides assurance to stakeholders, including auditors, that the organization’s security posture is sound and operating within expected boundaries. It also helps uncover gaps in coverage, misconfigured tools, or ineffective implementations that may have gone unnoticed during deployment. Regular testing supports continuous improvement efforts by informing updates, removals, or redesigns of controls throughout their lifecycle. Just as important, control testing ensures alignment with the organization’s compliance mandates, risk tolerance, and operational requirements, making it a core function of effective governance.
Control testing can be conducted in several forms, each serving a different aspect of evaluation. Design effectiveness testing asks whether the control, in its intended form, addresses the identified risk based on its structure and objective. Operational effectiveness testing, on the other hand, focuses on whether the control is functioning in actual environments and delivering the expected results in practice. Preventive controls are tested to determine whether they successfully block unauthorized actions before they occur, while detective controls are reviewed to verify their ability to detect incidents promptly and generate alerts as needed. Depending on the complexity and nature of the control, testing may be fully automated using scripts and tools, or manual, relying on human interaction, observation, or documentation.
Several established methods are commonly used to test and evaluate security controls. Walkthroughs involve reviewing the steps of a control with those responsible for executing it, allowing for direct confirmation and discussion. Document reviews verify the existence and accuracy of procedures, logs, system configurations, or other related artifacts. Technical tests simulate real-world scenarios, either by triggering expected control responses or attempting to bypass them. Interviews provide valuable insights into how controls are used in practice and whether responsible personnel understand their roles and obligations. Sampling techniques involve selecting a subset of records or actions for examination, helping assess whether the control functions consistently over time or across cases.
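The sampling technique mentioned above can be sketched in a few lines. This is a minimal illustration, not a prescribed audit procedure: the record names are hypothetical, and the key idea is that recording the random seed makes the sample reproducible so an auditor can regenerate exactly the same selection later.

```python
import random

def select_sample(records, sample_size, seed=None):
    """Select a reproducible random sample of records for control review.

    Recording the seed lets a reviewer or auditor regenerate the same
    sample later to verify the selection was unbiased.
    """
    rng = random.Random(seed)
    if sample_size >= len(records):
        return list(records)
    return rng.sample(records, sample_size)

# Hypothetical access-approval records from one quarter
records = [f"access-approval-{i:04d}" for i in range(1, 501)]
sample = select_sample(records, sample_size=25, seed=42)
print(len(sample))  # 25 records selected for detailed examination
```

Using a fixed seed is a design choice: it trades a small amount of unpredictability for auditability, which usually matters more in a compliance context.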
Establishing a formal control testing plan ensures that evaluation efforts are structured, targeted, and repeatable. The plan should define the scope of testing activities, the specific goals for each test, and how often each control will be reviewed. Controls should be prioritized based on risk levels, operational criticality, or external compliance demands. Responsibilities for designing the test plan, conducting evaluations, and managing oversight must be assigned clearly. Using standardized templates or documentation formats improves consistency across testing cycles. Finally, the testing calendar should be aligned with other key activities such as audits, regulatory assessments, and enterprise risk management timelines to maximize coverage and efficiency.
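A formal testing plan like the one described can be represented as structured data rather than free-form documents, which makes scheduling and prioritization easier to automate. The sketch below is illustrative only; the control IDs, owners, and frequencies are invented examples, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ControlTest:
    control_id: str       # identifier for the control under test
    objective: str        # the specific goal of this evaluation
    frequency_days: int   # how often the control is re-tested
    owner: str            # who conducts the evaluation
    priority: str         # driven by risk level or compliance demands

# Hypothetical entries in a testing calendar
plan = [
    ControlTest("AC-01", "Verify quarterly access reviews are completed", 90, "IAM team", "high"),
    ControlTest("LG-03", "Confirm log retention meets policy", 180, "SecOps", "medium"),
]

# Highest-priority controls surface first in the testing calendar
for test in sorted(plan, key=lambda t: t.priority != "high"):
    print(test.control_id, test.frequency_days)
```

Keeping the plan in a structured form like this also makes it straightforward to align testing dates with audit and risk-management timelines programmatically.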
Evidence collection is a critical component of the control testing process. The type of evidence gathered may include system logs, screenshots, configuration files, access records, or observable user actions. These results must be compared against the stated objectives of each control and evaluated against any defined thresholds or performance indicators. To ensure credibility, all evidence must be authenticated, comprehensive, and clearly associated with a specific date and time. Any deviations, anomalies, or operational errors observed during testing must be fully documented and explained. Collected evidence should be stored securely and organized so that it can be retrieved for future reference, internal analysis, or external audit review when necessary.
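The requirement that evidence be authenticated and tied to a specific date and time can be illustrated with a simple tamper-evident record: a cryptographic hash of the artifact plus a UTC timestamp. This is a minimal sketch, and the file name and content shown are assumptions for demonstration.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_evidence(name, content: bytes):
    """Create a tamper-evident evidence record.

    The SHA-256 digest lets anyone later verify the artifact is
    unchanged; the UTC timestamp associates it with a collection time.
    """
    return {
        "name": name,
        "sha256": hashlib.sha256(content).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical artifact captured during a firewall control test
entry = record_evidence("firewall-config.txt", b"deny all inbound by default\n")
print(json.dumps(entry, indent=2))
```

If the stored artifact is ever altered, recomputing its hash will no longer match the recorded digest, which is what makes the evidence credible under later audit review.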
Testing automated controls requires a specialized approach that accounts for the underlying technologies. Reviewers must examine the scripts, configurations, and embedded logic used by the automated system to confirm that it performs its intended function. They must also verify that integrations with other systems—such as identity management platforms, logging mechanisms, or alerting services—are functioning properly. Simulation techniques may be used to introduce failure conditions or deliberate bypass attempts to test the control’s resilience. It's also essential to confirm that the control behaves consistently across different system environments and operational contexts. Throughout testing, teams should monitor for false positives, missed detections, or performance degradation that could affect business operations or user experience.
Manual controls require a different approach that emphasizes observation and evaluation of human behavior. Testers must observe staff members performing their assigned control steps during regular operations to see how consistently and accurately the control is being applied. The evaluation must account for any dependence on individual judgment, as well as how procedural consistency is maintained when tasks are performed manually. Interviews and discussions help assess whether personnel understand their roles, responsibilities, and the intended outcomes of the control. Documentation must be checked to confirm it is current, accurate, and actually followed in practice. Any gaps in training, unclear instructions, or misunderstandings that hinder control performance should be identified and addressed.
Once testing is complete, the results must be documented and reported in a structured, accessible format. Each report should clearly summarize the scope of the test, the methods used, and the personnel responsible for execution and oversight. Successes—where controls performed as expected—must be noted alongside any nonconformities or failures. Findings should be prioritized based on their potential impact, risk exposure, or compliance relevance, helping management understand which issues demand urgent attention. Recommendations should be made for how to remediate any deficiencies or improve the control’s design or implementation. Final reports must be distributed to appropriate risk owners, audit staff, and governance stakeholders to inform oversight and decision-making.
Responding to control failures begins with launching a corrective action plan that defines exactly what will be done, who will do it, and when it will be completed. The nature of the failure must be evaluated to determine whether it stems from a flaw in the control’s design, a problem with its operation, or an external factor such as user behavior. In some cases, temporary compensating controls may be necessary to manage the risk while long-term fixes are implemented. It is essential to monitor follow-up activities to ensure the issue does not recur or point to deeper systemic weaknesses. If a failure is particularly severe or exposes significant risk, it must be escalated promptly to leadership and compliance functions for visibility, support, and documentation.
Effective control testing is not a one-time event—it is an ongoing governance responsibility that contributes to a culture of accountability and improvement. Organizations should establish metrics that track control effectiveness and the outcomes of control tests, providing data for performance dashboards and risk reports. Lessons learned from each test cycle should be analyzed and used to refine how controls are selected, implemented, and communicated throughout the organization. Testing approaches themselves must evolve over time, incorporating new tools and methods that reflect changes in threat landscapes, technology, and business practices. Above all, control evaluation must be sustained as an integral part of the security program, supporting strategic goals and regulatory readiness while strengthening the organization’s resilience against risk.
Thanks for joining us for this episode of The Bare Metal Cyber CISM Prepcast. For more episodes, tools, and study support, visit us at Bare Metal Cyber dot com.