I’m continuing my long series of posts that describe how to implement an information security program. Currently, we’re in the section called “How to Measure Cyber Risks.”
Last week, I described how you can set up a single place to record the scores you’ll gather.
Now, your next major decision is to choose the level of data quality you need. This will be determined by several factors:
- Your organizational risk appetite,
- Your management style,
- Your available budget and timeline,
- And your internal culture.
Make sure to discuss this choice thoroughly with your boss, as it's crucial to have their support when the data is analyzed and you bring back the top risks. In my experience, there are three levels of data quality to choose from.
With the “Good Quality” option, you briefly train your experts, and they submit their scores using an online survey tool, such as Survey Monkey or an internal equivalent.
When you have dozens or hundreds of experts to work with, this is the most scalable and least expensive data gathering option. It will generate actionable data assuming your experts participate fully and sincerely without fear of retaliation for pointing out problems.
Some people have doubts about the usefulness of self-reported scores, and there is some cause for concern.
Self-reported answers may be exaggerated. Some experts may be too embarrassed to tell the truth, thinking it will reflect poorly on them and their work. And experts are inherently biased by their feelings at the time they fill out the questionnaire.
You can address these concerns with additional training, by arranging for their supervisors to encourage honesty, or by choosing one of the higher data quality levels.
With the “Better Quality” option, you briefly train your experts and then interview them, either in person or by voice or video call. This approach requires two data collection hours per expert.
This choice is good when you suspect the experts are unable to participate fully and sincerely using the online response method. The interview allows you to watch for signs during your interaction that your expert is giving you biased responses. If you suspect that’s happening, take a curious tone and ask the expert to explain more fully why they are giving you that particular score. The more they talk, the closer you’ll get to reality.
With the “Best Quality” approach, you briefly train your experts and then interview them, either in person or by voice or video call. Afterward, they provide evidence to justify their scores, which you should gather as soon as possible.
This approach requires 50% to 100% more data collection hours than the “Better Quality” choice, mostly for the follow-up to collect the evidence. You might also choose this option as a readiness assessment for an upcoming audit or standards-based certification.
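To see how these choices compare, here's a minimal sketch that turns the figures above into a total-hours estimate: the “Better” option takes 2 data collection hours per expert, and the “Best” option takes 50% to 100% more. The post doesn't give a per-expert time for the “Good” (online survey) option, so that number is an assumption I've left as a parameter.

```python
def collection_hours(num_experts, good_hours_per_expert=0.5):
    """Estimate total data-collection hours for each data quality level.

    good_hours_per_expert is an assumption (survey admin overhead only);
    the post does not specify a figure for the online survey option.
    """
    better = 2.0 * num_experts  # stated: 2 data collection hours per expert
    return {
        "Good": good_hours_per_expert * num_experts,
        "Better": better,
        "Best (low)": better * 1.5,   # +50% for evidence follow-up
        "Best (high)": better * 2.0,  # +100% for evidence follow-up
    }

# For a 50-expert program:
print(collection_hours(50))
```

For 50 experts, the interview-based “Better” option already costs 100 hours, and “Best” lands between 150 and 200 hours, which is worth knowing before you promise your boss a timeline.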
Next week, I’ll describe my workflow for collecting scores from experts.
Cyber Risk Opportunities provides middle market companies with cost-effective Cyber Risk Managed Programs to prioritize and reduce your top cyber risks, including the specific requirements of PCI, HIPAA, SOC2, ISO 27001, DFARS, and more.