Vulnerability Management
5/15/2018 03:45 PM
Don't Roll the Dice When Prioritizing Vulnerability Fixes

CVSS scores alone are ineffective risk predictors - modeling for likelihood of exploitation also needs to be taken into account.

The way that organizations today decide which software vulnerabilities to fix and which to ignore reduces risk no better than if they rolled dice to choose, according to a new study released today by Kenna Security and Cyentia Institute. The report's authors argue that enterprises need to get smarter about how they prioritize flaws for remediation if they want to really make a dent in their risk exposure.

The fact is that organizations today are drowning in software vulnerabilities. A separate report out today from Risk Based Security highlights this reality: last quarter alone, nearly 60 new vulnerabilities were disclosed every single day. Among the 5,375 flaws published in the first 90 days of the year, approximately 18% had CVSS scores of 9.0 or higher.

Those numbers in part demonstrate why some organizations can't fix every vulnerability in their environment - which means they must prioritize their efforts. The question is, what makes for a good prioritization system? 

Techniques like using CVSS vulnerability severity scores to guide vulnerability management have long been the stand-in methodologies. But those scores can't necessarily predict how likely attackers are to actually exploit any given flaw. And that's the real fly in the ointment, because according to the Kenna and Cyentia report, just 2% of published vulnerabilities have observed exploits in the wild.

So, say an organization had the resources to miraculously fix 98% of the flaws in its environment; if it chose the wrong 2% to miss, it could still be wide open to the full brunt of the vulnerabilities attackers are actually targeting. And given the breach statistics against mature organizations that presumably use some standardized method of prioritization, one must question the efficacy of the usual way flaws are picked for remediation.

"Security people know intuitively that what they've been doing historically is wrong, but they have no data-driven way to justify a change internally," says Michael Roytman, chief data scientist for Kenna. "That's what we hope this report provides people." 

Cyentia examined prioritization techniques statistically in terms of two big variables that were measured in light of whether exploits exist: coverage and efficiency.

Coverage measures how thoroughly organizations were able to fix flaws in their environment for which an exploit exists. If there are 100 vulnerabilities in an environment that have exploits and the organization fixes only 15 of them, then the coverage of that prioritization is 15%. The leftover 85% is the organization's unremediated risk.

On the flip side, efficiency measures how effective the organization is in choosing vulnerabilities that are being exploited in practice by the bad guys. If the organization fixes 100 flaws but only 15 of them are being exploited, then that's a prioritization efficiency rating of 15%. The other 85% are those for which time might have been better spent doing something other than fixing them.  
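The two metrics can be sketched in a few lines of Python. This is a toy illustration of the report's definitions, not Kenna's or Cyentia's actual tooling; the vulnerability IDs are made up.

```python
def coverage_and_efficiency(fixed, exploited):
    """Coverage: share of exploited vulns that got fixed.
    Efficiency: share of fixed vulns that are actually exploited."""
    fixed, exploited = set(fixed), set(exploited)
    hits = fixed & exploited
    coverage = len(hits) / len(exploited) if exploited else 1.0
    efficiency = len(hits) / len(fixed) if fixed else 1.0
    return coverage, efficiency

# The report's worked example: 100 exploited flaws, 15 of them fixed
cov, eff = coverage_and_efficiency(fixed=range(15), exploited=range(100))
print(cov, eff)  # 0.15 coverage, 1.0 efficiency on this toy data
```

Note that the same two sets drive both numbers: fixing more flaws can only raise coverage, but it dilutes efficiency if the extra fixes were never exploited.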

"Ideally, we’d love a remediation strategy that achieves 100% coverage and 100% efficiency," the report explains. "But in reality, a direct trade-off exists between the two." 

So a strategy that goes after only the worst vulnerabilities, those carrying the maximum CVSS score of 10, would have a good efficiency rating but terrible coverage. The opposite mode of going after everything CVSS 6 and above means efficiency goes through the floor, because many of those flaws will never be exploited.
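The trade-off can be demonstrated on synthetic data. In the sketch below, exploitation is assumed to be rare overall but somewhat more likely at higher CVSS scores; that correlation is an assumption for illustration only, not a figure from the report.

```python
import random

# Synthetic backlog: 10,000 vulns with random CVSS scores; exploitation
# is rare but (by assumption) severity-correlated.
random.seed(7)
vulns = []
for _ in range(10_000):
    cvss = random.uniform(1, 10)
    exploited = random.random() < 0.005 * cvss
    vulns.append((cvss, exploited))

def evaluate(threshold):
    """Fix everything at or above the CVSS threshold; score the choice."""
    fixed = [v for v in vulns if v[0] >= threshold]
    exploited_total = sum(1 for v in vulns if v[1])
    hits = sum(1 for v in fixed if v[1])
    return hits / exploited_total, hits / len(fixed)

cov_strict, eff_strict = evaluate(9.0)  # only the worst-scored flaws
cov_broad, eff_broad = evaluate(6.0)    # everything CVSS 6 and up
print(f"CVSS>=9: coverage {cov_strict:.0%}, efficiency {eff_strict:.1%}")
print(f"CVSS>=6: coverage {cov_broad:.0%}, efficiency {eff_broad:.1%}")
```

Because the broad strategy fixes a strict superset of what the strict strategy fixes, its coverage can never be lower; the cost shows up in the much larger pile of fixed-but-never-exploited flaws.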

When measuring coverage and efficiency of prioritization using simplistic remediation rules such as CVSS scores, Cyentia found that the various choices tended to be no better than choosing at random. It then analyzed coverage and efficiency using a more complex model that tries to predict which vulnerabilities are most likely to be exploited, using variables like whether the flaw's description includes key phrases like "remote code execution," a predictive weighting of the vendor, the CVSS score, and the volume of community chatter around a given flaw in reference lists like Bugtraq. This kind of modeling outperformed the historical rules, with better coverage, twice the efficiency, and half the effort.
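A feature-based scorer of this general shape can be sketched as follows. The features loosely mirror the variables the report mentions, but the weights, field names, and example records here are entirely illustrative assumptions, not the report's actual model.

```python
def exploit_likelihood_score(vuln):
    """Toy exploit-likelihood score combining several weak signals.
    Weights are illustrative, not fitted to real data."""
    score = 0.0
    if "remote code execution" in vuln.get("description", "").lower():
        score += 3.0                            # high-signal keyword
    score += vuln.get("vendor_weight", 0.0)     # historically targeted vendor?
    score += vuln.get("cvss", 0.0) * 0.3        # severity still contributes
    score += min(vuln.get("reference_count", 0), 10) * 0.2  # list chatter, capped
    return score

backlog = [
    {"id": "A", "description": "remote code execution in parser",
     "vendor_weight": 2.0, "cvss": 9.8, "reference_count": 14},
    {"id": "B", "description": "information disclosure",
     "vendor_weight": 0.5, "cvss": 9.9, "reference_count": 1},
]
# Remediate highest-scoring first, not merely highest-CVSS first
backlog.sort(key=exploit_likelihood_score, reverse=True)
print([v["id"] for v in backlog])  # ['A', 'B']: A outranks B despite lower CVSS
```

The point of the example is the ordering: vuln B has the higher CVSS score, but vuln A's exploit-correlated signals push it to the front of the queue.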

"We'll never, of course, have perfection in vulnerability remediation. What we have to do is figure out where we are and then figure out how to get better," says Jay Jacobs, chief data scientist and founder of Cyentia. "Being exploit-driven, I think, is one of the better approaches."


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Comments
Ericka Chickowski, User Rank: Moderator, 5/16/2018 | 1:04:08 PM
Re: CVE, CVSS, Public Exploits
Thanks for your input, Michael. I updated the story to clarify CVSS scores of 10 or higher and invited the Kenna Security folks to discuss the issue of the percentage of vulns with public exploits.
danmellinger, User Rank: Apprentice, 5/16/2018 | 12:34:13 PM
Re: CVE, CVSS, Public Exploits
Appreciate the comment. The 2% refers to "observed exploitation events" related to those vulnerabilities. It appears that you're referring to "has an exploit published." (additional background can be found in the Data Sources section of the full report)

The report found that 22.4% of all 94,597 CVEs we looked at (1999-2017) have published exploit code. The difference between that percentage and the Risk Based Security report's (31.4%) is possibly due to our timeframe versus looking only at vulns from 2017. That is itself an interesting insight: the percentage of published exploits in 2017 is higher than the historical average. In any case, both reports reached the same conclusion that the majority of vulnerabilities do not have published exploit code.

For the second part of your comment, it appears that it may be a typo, referring to CVE instead of CVSS, as CVSS is correctly called out further down in that paragraph. 
MichaelM17101, User Rank: Strategist, 5/16/2018 | 1:55:10 AM
CVE, CVSS, Public Exploits
Just to correct some facts: it is not true that, as Kenna and Cyentia report, just 2% of published vulnerabilities have observed exploits in the wild. If we look at 2017, from January 1st to December 31st, there were 21,691 vulnerabilities discovered. Public exploits were available for 6,818 of them = 31.4% (source: Risk Based Security's VulnDB).

Also, it is mentioned that: "So a strategy that goes after really bad CVEs of 10 or higher would have a good efficiency rating but is going to have terrible coverage." CVE is not used to measure criticality. That is CVSS, and the CVSS scale ends at 10. Every CVSS score between 7 and 10 is considered critical.