
Risk | Commentary
Jack Freund | 11/13/2019 10:00 AM

Unreasonable Security Best Practices vs. Good Risk Management

Perfection is impossible, and pretending otherwise just makes things worse. Instead, make risk-based decisions.

Years ago, I spoke with the risk management leader at a bank where I was consulting. This person was new in the role and was outlining plans for implementing an IT risk management program. The program was to be based on the NIST 800 series, which predates the creation of the NIST Cybersecurity Framework, and they had worked out their own proprietary risk rating system based on the control catalog in SP 800-53. It was well thought out, and the leader had had some success with the same approach in a previous role.

Ultimately, the risk ratings assigned through this process came down to the personal opinions of the assessors. The real trouble with the approach, though, was the security leader's conviction that the process would eventually result in every control in NIST SP 800-53 being implemented. As a result, the model they developed was designed to give good risk ratings when more controls were implemented and bad ratings when those controls were missing.

This person is not alone in the belief that more controls equal less risk. Far too many risk registers are truly just lists of broken or missing things. We are so convinced that we need more security that we tend to believe only perfection will do. Security conferences are rife with axioms such as "we need to get it right every time; hackers only need to get it right once." Such views are pessimistic and dissuade business leaders from taking the actions they need to properly secure themselves. Why should they bother if they can't get it perfect?

I often say that we need cybersecurity professionals doing blocking and tackling who believe they can stop 100% of the things trying to break in. I think that mindset is important for high-quality threat management and security operations. However, I know they will eventually fail. This doesn't mean that their efforts are pointless. Indeed, what we must celebrate are the small wins and consistent behaviors, not perfection.

Control frameworks aren't to blame; they are simply cataloging the world of possibilities. The blame falls to broken risk models that leverage a "gotta catch 'em all" approach to security controls. This approach pretends there is a linear relationship between security controls and loss exposure. This ignores critical variables such as frequency of attack, attacker capability, and an organization's tolerance for loss.

Such "collector" approaches to risk management find their way into auditing frameworks that so often purport to be risk-based but instead treat every missing or deficient thing as the risk itself. This approach has allowed risk statements expressing zero appetite to make their way to senior executives and corporate boards. Well-meaning risk appetite statements such as "we don't accept any cyber-related risk" are virtually impossible to put into action in organizations with limited budgets (and all are limited). Accepting zero risk means that you would spend every dollar an organization has to avoid a loss, and even then, no one can guarantee a future with zero incidents.

A mature way to talk about cyber-risk appetite is to use some non-zero loss amount as a guide. Statements about risk and loss should focus on the range of amounts that could be lost and the timelines over which such a loss could occur. These ranges are necessary because we're discussing future events that may or may not come to pass, and, as such, any single point estimate of appetite is going to be wrong.
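To make that concrete, here is a minimal Python sketch of what checking an estimated loss range against a stated, non-zero appetite might look like. The within_appetite helper and all dollar figures are hypothetical illustrations for this column, not anything prescribed by a standard.

# Hypothetical sketch: a risk appetite stated as a tolerable, non-zero loss
# amount over a time horizon, checked against an estimated loss range.
# The estimate and the tolerance both cover the same 12-month horizon.

def within_appetite(estimated_loss_range, tolerance_usd):
    """Return True if even the high end of a (low, high) dollar estimate
    stays at or under the stated tolerance."""
    _low, high = estimated_loss_range
    return high <= tolerance_usd

# "Over the next 12 months, we can tolerate up to $5M of cyber-related loss."
print(within_appetite((500_000, 3_200_000), tolerance_usd=5_000_000))    # True
print(within_appetite((2_000_000, 9_000_000), tolerance_usd=5_000_000))  # False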

The Goal of Effective Risk Management
Effective risk management enables an organization to attain an acceptable amount of loss over time with the least amount of capital expenditure. In other words, we're trying to balance money spent today to reduce risk against the probability of some amount of loss at a future time. Nowhere in good risk management is the notion of perfect risk avoidance. Such a focus on risk would choke off innovation and good business management.

First, every dollar spent on risk reduction cannot be spent on the mission of the organization. As a result, risk reduction investments necessarily mean mission curtailment. Second, without the right amount of freedom to operate without safeguards in place, business innovation is also curtailed.

Navigating Risk
Having a good model that accurately represents the nature of risk is important if you intend to navigate risk rather than chase risk elimination through a security controls process. Further, such a model should support the modern needs of organizations, such as purchasing cyber insurance and/or setting aside money for risk allocation (risk-based capital). The FAIR Institute was established to promote the open FAIR standard for cyber-risk quantification. The FAIR model lets you scope and model risk scenarios in a way that is meaningful to an organization's leaders. It ties things like missing controls and audit findings to statements of loss, allowing decision-makers to make well-informed, risk-aware decisions.
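As a rough illustration of the frequency-times-magnitude thinking behind FAIR-style quantification, a Monte Carlo sketch of one scenario's annualized loss exposure might look like the following. The scenario, distributions, and parameters are invented for this example; an actual FAIR analysis decomposes these factors much further.

# Simplified Monte Carlo sketch of one risk scenario: loss event frequency
# times loss magnitude yields an annualized loss exposure range.
# Parameters are invented for illustration only.
import numpy as np

def simulate_annual_loss(freq_per_year, loss_low, loss_high, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    event_counts = rng.poisson(freq_per_year, size=trials)   # loss events per simulated year
    yearly_losses = np.array([
        rng.uniform(loss_low, loss_high, size=n).sum() for n in event_counts
    ])
    return np.percentile(yearly_losses, 10), np.percentile(yearly_losses, 90)

# Scenario: credential-theft incidents, roughly 2 loss events/year, $50k-$400k each.
low, high = simulate_annual_loss(freq_per_year=2, loss_low=50_000, loss_high=400_000)
print(f"Annualized loss exposure roughly ${low:,.0f} to ${high:,.0f} (10th-90th percentile)")

The resulting 10th-90th percentile range is exactly the kind of loss range, tied to a time horizon, that the appetite statements above call for.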

Further, it gives companies the opportunity to express the cyber loss scenarios to which they are exposed in terms that are meaningful and actionable: economic impact. For example, FAIR lets an organization express why a control from that voluminous catalog matters by linking it to the company's potential for loss, the impact on customers, and/or the implications for insurance and risk-based capital. In other words, it links technology failures to business impacts. FAIR also enables practitioners to demonstrate how implementing a solution will reduce risk by expressing it as a risk efficacy ratio: a dollar invested in this solution reduces future loss potential by "x" amount.
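A minimal sketch of that efficacy ratio, again with hypothetical figures and a made-up helper name, could be as simple as:

# Hypothetical sketch of a risk efficacy ratio: dollars of future loss
# potential avoided per dollar invested in a control. Figures are illustrative.

def risk_efficacy_ratio(ale_before, ale_after, control_cost):
    return (ale_before - ale_after) / control_cost

# A $150k control that cuts annualized loss exposure from $900k to $450k:
ratio = risk_efficacy_ratio(ale_before=900_000, ale_after=450_000, control_cost=150_000)
print(f"Every $1 invested reduces future loss potential by about ${ratio:.2f}")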

Beware the allure of "best practice" models when assessing your organization's risk posture. If a model encourages you to get an A+ on a controls implementation test, you're signing your company up for an overcontrolled environment that chokes off innovation and leeches off its business plan. Instead, focus on risk navigation: Provide decision-makers with the information they need to make truly risk-informed decisions, and accept that the perfect solution to an organization's cybersecurity problems may be imperfectly implemented security.

Dr. Jack Freund is the Risk Science Director for RiskLens, a cyber-risk quantification platform built on FAIR. Over the course of his 20-year career in technology and risk, Freund has become a leading voice in cyber-risk measurement and management. He previously worked ...