Every child who has ever played a board game understands that rolling dice produces unpredictable results. In fact, this is why children’s board games use dice in the first place: to ensure a random outcome, with each face having roughly the same probability of coming up on every throw.
Consider for a moment what would happen if someone replaced the dice used in those board games with weighted dice – say, dice that were 10 percent more likely to come up “6” than any other number.
Would you notice? The realistic answer is probably not. You’d likely need hundreds of rolls before you suspected anything fishy about the results – and thousands of rolls before you could prove it.
Chance plays a large role in the outcome, and because the result is expected to be uncertain, it is almost impossible to tell a fair die from a biased one at a glance.
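The dice claim can be checked with a small simulation. The sketch below (illustrative, not from the article; the sample sizes and trial counts are my own choices) rolls a die whose “6” is 10 percent more likely than each other face, then applies a standard chi-square goodness-of-fit test against the fair-die hypothesis. At 100 rolls the bias is almost never detectable; at 20,000 rolls it almost always is.

```python
import random
from collections import Counter

CRITICAL_5DF_95 = 11.07  # chi-square critical value: df=5, alpha=0.05

# "6" is 10% more likely than each of the other five faces.
p_other = 1 / 6.1
WEIGHTS = [p_other] * 5 + [1.1 * p_other]

def chi_square_stat(n_rolls: int, rng: random.Random) -> float:
    """Pearson goodness-of-fit statistic vs. the fair-die hypothesis."""
    rolls = rng.choices(range(6), weights=WEIGHTS, k=n_rolls)
    counts = Counter(rolls)
    expected = n_rolls / 6
    return sum((counts[f] - expected) ** 2 / expected for f in range(6))

def detection_rate(n_rolls: int, trials: int, rng: random.Random) -> float:
    """Fraction of experiments in which the bias is flagged at the 5% level."""
    hits = sum(chi_square_stat(n_rolls, rng) > CRITICAL_5DF_95
               for _ in range(trials))
    return hits / trials

rng = random.Random(42)
print(f"   100 rolls: bias detected in {detection_rate(100, 100, rng):.0%} of experiments")
print(f"20,000 rolls: bias detected in {detection_rate(20000, 100, rng):.0%} of experiments")
```

At 100 rolls the detection rate hovers near the 5 percent false-positive floor – you are no better off than guessing – which is exactly the point: a meaningful bias can hide in plain sight for a very long time.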
The same is true in security. Security outcomes are not always purely deterministic or directly causal. This means, for example, that you can do everything right and still be hacked – or do nothing right and, through sheer luck, avoid it.
The business of security, then, lies in increasing the odds of desirable outcomes while reducing the odds of undesirable ones. It is more like playing poker than following a recipe.
This has two implications. The first is the truism that every security practitioner learns quickly – that it is difficult to calculate the return on security investment.
A second and more subtle implication is that slow, non-obvious erosion of safeguards is particularly dangerous. It is hard to detect, hard to correct, and can undermine your efforts without your being aware of it. Unless you have planned for it and baked in mechanisms to monitor for it, you probably won’t see it – let alone have the ability to correct for it.
Now, if this loss of control or countermeasure efficacy sounds far-fetched to you, I’d argue that there are actually many ways efficacy can slowly erode over time.
Consider first that staff allocations are not static and team members are not fungible. A reduction in staffing may mean fewer touchpoints with a given tool or control, reducing that tool’s usefulness in your program. Likewise, a reallocation of responsibilities can affect effectiveness when one engineer is less skilled or less experienced than another.
Changes in technology can also affect effectiveness. Remember the impact the move to virtualization had on intrusion detection systems a few years ago? In that case, a technology change (virtualization) reduced the ability of an existing control (IDS) to perform as expected.
This happens regularly and is an issue right now as we adopt machine learning, increase our use of cloud services, move to serverless computing, and adopt containers.
There is also a natural erosion that is part and parcel of human nature. Consider budget allocation. An organization that has not fallen victim to a breach may look to trim technology spending – or fail to invest in a way that keeps pace with the expansion of the technology footprint.
Its management may conclude that since the cuts in earlier years had no adverse effects, it should be able to cut further. Because the overall outcome is probabilistic, this conclusion may appear correct – even as the organization gradually increases the probability of something catastrophic happening.
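A bit of arithmetic shows why “nothing bad happened, so cut more” is a trap. In this hypothetical sketch (all numbers are my own assumptions, not from the article), each year of cuts nudges the annual breach probability up by one point. No single year looks alarming, yet the cumulative odds over a decade climb well past those of a steady-state program.

```python
def cumulative_breach_prob(annual_probs):
    """P(at least one breach) = 1 - P(no breach in any year)."""
    p_no_breach = 1.0
    for p in annual_probs:
        p_no_breach *= (1 - p)
    return 1 - p_no_breach

# Hypothetical: annual breach probability creeps from 5% up one point per
# year of cuts, versus holding steady at 5%.
eroding = [0.05 + 0.01 * y for y in range(10)]   # 5%, 6%, ..., 14%
steady = [0.05] * 10

print(f"Steady program,  decade risk: {cumulative_breach_prob(steady):.0%}")
print(f"Eroding program, decade risk: {cumulative_breach_prob(eroding):.0%}")
```

The eroding program’s decade-long risk ends up roughly 23 points higher (about 63 percent versus 40 percent under these assumed numbers), even though in any single year the increase is small enough to escape notice.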
The overall point here is that these shifts should be expected over time. Anticipating the changes – and building in instrumentation to learn about them – is what separates the best programs from the merely adequate. So how can we build this level of understanding and future-proofing into our programs?
To begin with, there is no shortage of risk models and measurement approaches: systems security engineering capability models (like NIST SP 800-160 and ISO/IEC 21827), maturity models, and the like. But the one thing they all have in common is establishing some mechanism to measure the overall impact that specific controls within the system have on the organization.
The lens you take – risk, efficiency/cost, capability, etc. – is up to you, but at a minimum the approach should give you the information to understand how well specific elements are working as you evaluate your program over time.
There are two subcomponents here: first, the value that an individual control provides to the overall program; and second, the extent to which changes in a given control affect that value.
The first set of data is essentially risk management – building an understanding of the value of each control so that you know what it contributes to your program overall.
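One common quantitative lens for the “value of each control” idea is the classic annualized loss expectancy (ALE) model: ALE = annual rate of occurrence (ARO) × single loss expectancy (SLE), with a control’s value approximated as the ALE it removes, net of its own cost. The sketch below is hypothetical – the control names, costs, and reduction figures are invented for illustration, and it treats each control’s effect independently, which real programs should refine.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    annual_cost: float
    aro_reduction: float  # fraction of incident frequency the control removes

BASELINE_ARO = 4.0   # incidents per year with no controls (assumed)
SLE = 250_000.0      # average cost per incident (assumed)

def control_value(c: Control) -> float:
    """ALE avoided by the control, net of its annual cost."""
    ale_avoided = BASELINE_ARO * c.aro_reduction * SLE
    return ale_avoided - c.annual_cost

# Hypothetical portfolio of controls.
controls = [
    Control("email filtering", annual_cost=40_000, aro_reduction=0.30),
    Control("EDR agents", annual_cost=120_000, aro_reduction=0.25),
    Control("awareness training", annual_cost=15_000, aro_reduction=0.10),
]

for c in sorted(controls, key=control_value, reverse=True):
    print(f"{c.name:20s} net value ${control_value(c):>10,.0f}/yr")
```

Ranking controls this way gives you a baseline: when a later measurement shows a control’s effectiveness slipping, you can quantify how much program value the erosion represents rather than arguing about it qualitatively.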