Why this article?
We are always told that we must read books, that reading improves everything from vocabulary to emotional intelligence. For some, me included, that pressure can result in reading fatigue and in books being read for the wrong reasons. So I tried to dig deep and figure out what actually motivates my reading, and I landed on two things: (1) enjoyment and relaxation, and (2) personal development in something I want to learn more about. What I wanted to learn more about, I concluded, was cybersecurity.
If you are interested in cybersecurity, chances are you already have your favorite go-to sources, whether for related news (my favorite source of cybersecurity news is the daily CyberWire podcast) or in-depth articles. My problem with this kind of learning is that it rarely explores a topic more thoroughly than what you can absorb in under 30 minutes. Let's face it, that's a bit like reading the summary of the book you should've read for that seminar but didn't. The benefit of learning from books is that books are not constrained by these time limits. They are a bigger time investment spread over a longer period, but also, at least for me, a much more rewarding one.
In an effort primarily aimed at motivating myself to really understand what I'm reading, I'll try to give at least one recommendation per month for a book related to cybersecurity and/or information security. I hope someone might find these recommendations useful.
How To Measure Anything In Cybersecurity Risk
By: Douglas W. Hubbard & Richard Seiersen
Hubbard is renowned as the author of How To Measure Anything: Finding the Value of “Intangibles” in Business and The Failure of Risk Management: Why It’s Broken and How to Fix It. The latest installment in the How To Measure Anything series could be summarized as a book explaining what is wrong with measuring cybersecurity risk using the infamous risk matrices, and how to use an alternative method based on statistics instead.
So why is this relevant? For me, the answer is that I have always found risk matrices quite meaningless and confusing. When I studied cybersecurity at university, we were shown examples of these risk matrices, along with a summary of how they are supposed to be used and why. We were then divided into groups and tasked with using the matrices to assign risk levels to a set of risks we were given.
Total confusion ensued. We found the idea of an arbitrary 1-5 scale, where 1 represents 'insignificant' and 5 'catastrophic', very confusing. I'm not ashamed to say that we took the road we considered safe, namely claiming that 90% of the risks belonged at 3 (Moderate risk). The risks that seemed extra severe to us were given a 5, just to be 'safe'.
Now you might say it's hardly strange that a group of students with no real cybersecurity experience would mess up badly when assigning risk levels. That's fair criticism, but as Hubbard & Seiersen argue, this poor use of risk matrices is not limited to students or junior employees. In fact, the authors present studies showing that no matter how experienced the professional, risk matrices generally perform poorly. Even when cybersecurity professionals feel they know the appropriate risk level for a certain type of risk based on previous experience, the authors argue that their memory is quite selective and does not represent an objective view of the risk. Instead, they argue that by using simple statistical models we can eliminate as much bias as possible when measuring cybersecurity risk, and they make a convincing case for their methods.
In short, the book could be divided into three major parts:
1. What's wrong with current best practices for measuring risk?
2. Why you should use a simple quantitative solution instead
3. How to use that solution
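The "simple quantitative solution" the authors advocate replaces the 1-5 scale with two estimates per risk: the annual probability that the event occurs, and a 90% confidence interval for the loss if it does, which are then combined in a Monte Carlo simulation. A minimal sketch of that idea in Python follows; the risk figures and the $250k threshold are made up for illustration and are not taken from the book:

```python
import math
import random

def simulate_annual_loss(risks, trials=10_000, seed=42):
    """Monte Carlo simulation of total annual loss.

    Each risk is a tuple (annual_probability, ci_low, ci_high), where
    [ci_low, ci_high] is a 90% confidence interval for the loss *if*
    the event occurs, modeled here with a lognormal distribution.
    Returns one simulated total loss per trial.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for prob, low, high in risks:
            if rng.random() < prob:  # did the event occur this year?
                # Back out lognormal parameters from the 90% CI:
                # 3.29 standard deviations span a 90% interval.
                mu = (math.log(low) + math.log(high)) / 2
                sigma = (math.log(high) - math.log(low)) / 3.29
                total += rng.lognormvariate(mu, sigma)
        totals.append(total)
    return totals

# Hypothetical risks: (annual probability, 90% CI for loss in dollars)
risks = [
    (0.10, 50_000, 500_000),  # e.g. a data breach
    (0.30, 10_000, 100_000),  # e.g. ransomware on a workstation
]
losses = simulate_annual_loss(risks)
exceedance = sum(1 for x in losses if x > 250_000) / len(losses)
print(f"P(annual loss > $250k) ~ {exceedance:.1%}")
```

The output, the chance that yearly losses exceed a chosen threshold, is something a stakeholder can actually act on, unlike a cell in a 5x5 matrix.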
Any book arguing for statistical models is sure to scare some people off, which is hardly surprising, as we tend to view statistics as difficult. But the solution the authors present is no harder to use than today's risk matrices, and arguably far more useful for measuring risk. The book also spends considerable time debunking the usual arguments against using statistics to measure cybersecurity risk, and the authors make a good case here as well.
The book is marketed with the question "What if your single biggest cybersecurity risk was the risk assessment method itself?", and based on the arguments presented in the book, this is a valid question. The authors argue that standard industry best practices might, due to bias, even be worse than assigning random risk levels. I would recommend this book to anyone who is a stakeholder in an organization's risk assessment and risk management process. We need better methods for measuring risk, we need them fast, and we need stakeholder support to use them.
TL;DR: A book about measuring risk, arguing that standard industry best practices might be worse than randomly assigning risk levels. It debunks the arguments against quantitative methods, teaches you how to use statistics in risk measurement, and provides the tools you need. Read it if you are a stakeholder in an organization's risk assessment and risk management process. Don't fear statistics.