5/15/2018 03:45 PM
Don't Roll the Dice When Prioritizing Vulnerability Fixes

CVSS scores alone are ineffective risk predictors; modeling the likelihood of exploitation also needs to be taken into account.

The way that organizations today decide which software vulnerabilities to fix and which to ignore reduces risk no better than if they rolled dice to choose, according to a new study today from Kenna Security and Cyentia Institute. The report's authors argue that enterprises need to get smarter about how they prioritize flaws for remediation if they want to really make a dent in their risk exposure. 

The fact is that organizations today are drowning in software vulnerabilities. A different report out today from Risk Based Security highlights this reality. It found that last quarter alone there were nearly 60 new vulnerabilities disclosed every single day. Among the 5,375 flaws published in the first 90 days of the year, approximately 18% had CVSS scores of 9.0 or higher. 

Those numbers in part demonstrate why some organizations can't fix every vulnerability in their environment - which means they must prioritize their efforts. The question is, what makes for a good prioritization system? 

Techniques like using CVSS vulnerability severity scores to guide vulnerability management activities have long been the stand-in methodologies. But those can't necessarily predict how likely attackers will be to actually exploit any given flaw in order to carry out an attack. And that's the real fly in the ointment, because according to the Kenna and Cyentia report, just 2% of published vulnerabilities have observed exploits in the wild. 

So, say an organization had the resources to miraculously fix 98% of the flaws in its environment; if it chose the wrong 2% to miss, it could still be wide open to the full brunt of the vulnerabilities attackers are actually targeting. And given the breach statistics against mature organizations that presumably use some standardized method of prioritization, one must question the efficacy of the usual way flaws are picked for remediation.

"Security people know intuitively that what they've been doing historically is wrong, but they have no data-driven way to justify a change internally," says Michael Roytman, chief data scientist for Kenna. "That's what we hope this report provides people." 

Cyentia examined prioritization techniques statistically in terms of two big variables that were measured in light of whether exploits exist: coverage and efficiency.

Coverage measures how thoroughly organizations were able to fix flaws in their environment for which an exploit exists. If there are 100 vulnerabilities in an environment that have exploits and the organization fixes only 15 of them, then the coverage of that prioritization is 15%. The leftover 85% is the organization's unremediated risk.

On the flip side, efficiency measures how effective the organization is in choosing vulnerabilities that are being exploited in practice by the bad guys. If the organization fixes 100 flaws but only 15 of them are being exploited, then that's a prioritization efficiency rating of 15%. The other 85% are those for which time might have been better spent doing something other than fixing them.  
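Expressed as code, the two metrics are simple set arithmetic over the flaws an organization fixed and the flaws that actually have observed exploits. A minimal sketch in Python (the function name and vulnerability IDs are illustrative, not from the report):

```python
def coverage_and_efficiency(remediated, exploited):
    """Compute remediation coverage and efficiency.

    remediated: set of vulnerability IDs the organization fixed.
    exploited:  set of vulnerability IDs with observed exploits.
    """
    hits = remediated & exploited  # fixed vulns that actually had exploits
    coverage = len(hits) / len(exploited) if exploited else 1.0
    efficiency = len(hits) / len(remediated) if remediated else 1.0
    return coverage, efficiency

# Example mirroring the article's numbers: 100 exploited vulns, 15 of them fixed.
exploited = {f"CVE-{i}" for i in range(100)}
remediated = {f"CVE-{i}" for i in range(15)}
cov, eff = coverage_and_efficiency(remediated, exploited)
print(cov, eff)  # 0.15 coverage; efficiency 1.0, since every fix here was exploited
```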

"Ideally, we’d love a remediation strategy that achieves 100% coverage and 100% efficiency," the report explains. "But in reality, a direct trade-off exists between the two." 

So a strategy that goes after only the very worst vulnerabilities, those with a CVSS score of 10, would have a good efficiency rating but terrible coverage. The opposite approach of fixing everything rated CVSS 6 and above means efficiency goes through the floor, because many of those flaws will never be exploited. 
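That trade-off can be seen in a toy simulation. The population size and exploitation rates below are invented for illustration and are not the report's data; the assumption that critical flaws are exploited more often is likewise hypothetical:

```python
import random

random.seed(0)

# Toy population of 10,000 vulnerabilities. For illustration only, assume
# flaws rated CVSS >= 9 are exploited 10% of the time and the rest 1%.
vulns = []
for _ in range(10_000):
    cvss = random.uniform(1, 10)
    exploited = random.random() < (0.10 if cvss >= 9 else 0.01)
    vulns.append((cvss, exploited))

exploited_total = sum(1 for _, e in vulns if e)

def evaluate(threshold):
    """Coverage and efficiency of 'fix everything at or above this CVSS'."""
    fixed = [(c, e) for c, e in vulns if c >= threshold]
    hits = sum(1 for _, e in fixed if e)
    coverage = hits / exploited_total
    efficiency = hits / len(fixed)
    return coverage, efficiency

for t in (9, 6):
    cov, eff = evaluate(t)
    print(f"CVSS >= {t}: coverage {cov:.0%}, efficiency {eff:.0%}")
```

Raising the threshold improves efficiency but sheds coverage, and lowering it does the reverse, which is the trade-off the report describes.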

When Cyentia measured the coverage and efficiency of prioritization using simplistic remediation rules such as CVSS score thresholds, it found the various choices tended to be no better than choosing at random. It then analyzed coverage and efficiency using a more complex model that tries to predict which vulnerabilities are most likely to be exploited, using variables like whether the flaw description includes keywords such as "remote code execution," predictive weighting of the vendor, the CVSS score, and the volume of community chatter around a given flaw in reference lists like Bugtraq. This kind of modeling outperformed the historical rules with better coverage, twice the efficiency, and half the effort. 
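A sketch of what such feature-based scoring might look like. The weights, field names, and sample records below are entirely invented for illustration; the report does not publish its model:

```python
# Hypothetical exploit-likelihood score combining the kinds of features the
# report describes: description keywords, a per-vendor prior, CVSS, and
# community chatter. All weights are assumptions, not the report's values.
def exploit_score(vuln):
    score = 0.0
    if "remote code execution" in vuln["description"].lower():
        score += 3.0                       # high-risk keyword
    score += vuln["vendor_weight"]         # assumed per-vendor prior
    score += vuln["cvss"] / 2.0            # severity still matters, just less
    score += min(vuln["chatter"], 10) / 2  # capped community-chatter volume
    return score

vulns = [
    {"id": "A", "description": "Remote code execution in parser",
     "vendor_weight": 2.0, "cvss": 8.1, "chatter": 40},
    {"id": "B", "description": "Information disclosure",
     "vendor_weight": 0.5, "cvss": 9.8, "chatter": 1},
    {"id": "C", "description": "Denial of service",
     "vendor_weight": 1.0, "cvss": 6.5, "chatter": 3},
]

# Fix the highest-scoring flaws first rather than sorting by CVSS alone.
ranked = sorted(vulns, key=exploit_score, reverse=True)
print([v["id"] for v in ranked])  # ['A', 'B', 'C']
```

Note that sorting by CVSS alone would put B first; the combined score promotes A, the flaw with an exploit-friendly keyword and heavy chatter despite a lower severity rating.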

"We'll never, of course, have perfection in vulnerability remediation. What we have to do is figure out where we are and then figure out how to get better," says Jay Jacobs, chief data scientist and founder of Cyentia. "Being exploit-driven, I think, is one of the better approaches."


Ericka Chickowski specializes in coverage of information technology and business innovation. She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading.

Comments
Ericka Chickowski, User Rank: Moderator
5/16/2018 | 1:04:08 PM
Re: CVE, CVSS, Public Exploits
Thanks for your input, Michael. I updated the story to clarify CVSS scores of 10 or higher and invited the Kenna Security folks to discuss the issue of the percentage of vulns with public exploits.
danmellinger, User Rank: Apprentice
5/16/2018 | 12:34:13 PM
Re: CVE, CVSS, Public Exploits
Appreciate the comment. The 2% refers to "observed exploitation events" related to those vulnerabilities. It appears that you're referring to "has an exploit published." (Additional background can be found in the Data Sources section of the full report.)

The report found that 22.4% of all 94,597 CVEs we looked at (1999-2017) have published exploit code. The difference between that percentage and the Risk Based Security report's (31.4%) is possibly due to our timeframe versus looking only at vulns in 2017, which is an interesting insight: there is an increased percentage of published exploits in 2017 versus the historical average. In any case, both reports came to the same conclusion that the majority of vulnerabilities do not have published exploit code. 

For the second part of your comment, it appears that it may be a typo, referring to CVE instead of CVSS, as CVSS is correctly called out further down in that paragraph. 
MichaelM17101, User Rank: Strategist
5/16/2018 | 1:55:10 AM
CVE, CVSS, Public Exploits
Just to correct some facts. It is not true, as Kenna and Cyentia report, that just 2% of published vulnerabilities have observed exploits in the wild. If we look at 2017 from January 1st to December 31st, there were 21,691 vulnerabilities discovered. Of those, public exploits were available for 6,818 = 31.4% - source: Risk Based Security's VulnDB. 

Also, it is mentioned that: "So a strategy that goes after really bad CVEs of 10 or higher would have a good efficiency rating but is going to have terrible coverage." CVE is not used to measure criticality. That is CVSS, and CVSS ends at 10. Every CVSS score between 7 and 10 is considered critical. 