
Endpoint | Commentary
Simon Crosby | 7/21/2015 10:30 AM

Time’s Running Out For The $76 Billion Detection Industry

The one strategy that can deliver the needle to the security team without the haystack is prevention.

Enterprises spend a mind-boggling $76 billion each year to “protect” themselves from cyber-attacks, but the bad guys keep winning because most protection solutions are based on detection instead of prevention. The 2015 Verizon Data Breach Investigations Report highlighted over 2,100 confirmed breaches, and the FBI claims that every major U.S. company has been compromised by the Chinese – whether it realized it or not.

What’s wrong? The answer is the same today as it was in ancient Troy when the Greek army suddenly disappeared, leaving behind an innocent-looking horse that the Trojans willingly brought inside the gates. The enemy had changed shape, avoiding detection. And so it is today: Verizon found that 70 to 90 percent of the malware used in successful breaches last year was unique to the attacked organization. Today’s detection-centric tools mistakenly assume that the malware or techniques used in an attack will have been seen somewhere else first. We read the results in the press, and it isn’t pretty.

Detection is a flawed protection strategy
Detection will fail – with certainty. The proof dates back to Turing’s 1936 work on the Halting Problem and Alonzo Church’s work on undecidable problems: no program can determine, with 100 percent certainty, whether arbitrary code is malicious, because a perfect malware decider could be turned into a decider for the Halting Problem.
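
To make that reduction concrete, here is a minimal sketch in Python, assuming a hypothetical perfect classifier is_malicious(); every name in it is illustrative, not a real API:

import textwrap

def is_malicious(source: str) -> bool:
    """Hypothetical perfect detector: returns True iff running `source` would
    ever do something malicious. Turing and Church imply no such total,
    always-correct function can exist."""
    raise NotImplementedError("no perfect detector exists")

def build_candidate(program_source: str) -> str:
    """Wrap an arbitrary program so that a 'malicious' action happens if and
    only if the wrapped program halts."""
    body = textwrap.indent(program_source, "    ")
    return (
        "def suspect():\n"
        f"{body}\n"
        "suspect()  # if this never halts, the next line never runs\n"
        "steal_all_the_data()  # stand-in for an unambiguously malicious act\n"
    )

def halts(program_source: str) -> bool:
    """If is_malicious() were real, this would decide the Halting Problem,
    which Turing proved impossible. Hence perfect detection is impossible."""
    return is_malicious(build_candidate(program_source))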

Some security vendors claim to have developed “advanced threat detection” or “new math” but this is deliberately misleading; they are secretly delighted with the status quo. That’s because detection serves their commercial goals to advance a narrative that organizations are pitted against sophisticated foes whose subterfuge demands continued diligence and adaptation. They use this to absolve themselves of responsibility when detection fails, and to bolster the marketing appeal of their “next gen” products. There is “always a way in” and “no silver bullet.” Homilies don’t help.

Absurdly enough, these same vendors debase the language of security, promising to stop breaches and secure the enterprise – when they cannot. Others, focused on remediation and forensics, sell the equivalent of cyber indulgences to absolve these victims of the sin of poor security practices.

Detection fails in two ways – with unexpected consequences:

  • We all understand the obvious (and inevitable) consequence of failing to detect an actual attack – a “false negative” – that lets the bad guy in. An example is an IPS that cannot see inside encrypted TLS web traffic, given that more than 70 percent of attacks use TLS – as close an analogy to the Trojan Horse as one could want.
  • Another, more prevalent failure mode is just as bad: state-of-the-art IPS systems bury “true positives” in a haystack of (up to 1,000 times as many) false alarms. A recent Ponemon study found that security teams investigate only 4 percent of alerts. Security teams scurry about remediating non-attacked systems, losing focus and wasting enormous time and money, and in the fuss may fail to notice signs of an actual attack. The 2013 Target breach is a good example: the alerts fired, but no one responded to them. (The arithmetic is sketched below.)
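
To see how lopsided those numbers are, here is a rough back-of-the-envelope sketch; the per-day figures are illustrative assumptions, not measurements:

true_alerts = 10            # hypothetical real attacks hitting the sensors per day
false_per_true = 1_000      # "up to 1,000 times as many" false alarms
triage_fraction = 0.04      # Ponemon: teams investigate only ~4 percent of alerts

total_alerts = true_alerts * (1 + false_per_true)        # 10,010 alerts per day
investigated = total_alerts * triage_fraction            # ~400 alerts triaged
real_attacks_reviewed = true_alerts * triage_fraction    # ~0.4 real attacks seen

print(f"{total_alerts:,} alerts/day, {investigated:,.0f} investigated,"
      f" {real_attacks_reviewed:.1f} real attacks reviewed")

Under those assumptions the team burns roughly 400 investigations a day and still, in expectation, reviews less than one of the ten real attacks.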

Detection is a failed detection strategy
Building a good detector requires careful tuning with real-world attacks. But in today’s threat landscape, polymorphic and crypter-packed malware changes shape hourly, and it is impossible to adapt a detector at the same speed. Stated mathematically:

“[For malware of size n bytes] …The challenge … is to model a space on the order of 2^(8n) to catch attacks hidden by polymorphism. To cover 30-byte [malware] decoders requires 2^240 potential signatures. For comparison there exist an estimated 2^80 atoms in the universe.”
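
For a sense of that scale, here is the arithmetic with the exponents restored, taking the quote’s 2^80 comparison figure as given:

decoder_bytes = 30
signature_space = 2 ** (8 * decoder_bytes)   # 2^(8n) with n = 30, i.e. 2^240
quoted_atom_estimate = 2 ** 80               # the comparison figure in the quote

print(f"2^240 ≈ {signature_space:.2e} candidate signatures")
print(f"2^80  ≈ {quoted_atom_estimate:.2e}")
print(f"ratio  = 2^{8 * decoder_bytes - 80}")

Either way the point stands: no signature database can enumerate a space of that size, so a detector tuned on yesterday’s samples cannot cover tomorrow’s mutations.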

Vendors that recast detection as a tool for finding already-compromised systems in order to “reduce dwell time” find that their tools are as poor at identifying successful attacks as they are at stopping them.

Detection is a failed strategy
The only viable alternative to detection is to make systems “secure by design.” Network micro-segmentation would have easily defeated the Target attack. Micro-virtualization enables endpoints to hardware-isolate each task that processes untrusted content, defeating each attack automatically. The value of an architecture that rigorously enforces the principle of least privilege is widely recognized in human security – for example in intelligence work, and more broadly in society.
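
As a toy illustration of what least-privilege segmentation means in practice – not any particular product’s policy language, and with hypothetical segment and port names – a default-deny flow table might look like this:

# Default-deny micro-segmentation sketch: traffic is blocked unless an
# explicit (source, destination, port) rule allows it.
ALLOWED_FLOWS = {
    ("pos-terminal", "payment-gateway", 443),   # the only flow POS devices need
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is blocked unless a rule explicitly allows it."""
    return (src, dst, port) in ALLOWED_FLOWS

# The kind of lateral movement used against Target – a vendor's network
# segment reaching point-of-sale systems – would simply have no matching rule:
assert not flow_permitted("hvac-vendor", "pos-terminal", 445)
assert flow_permitted("pos-terminal", "payment-gateway", 443)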

The only way to survive in an untrusted world is to enforce least privilege and never trust the untrustworthy. Hardware isolation transforms security: server hypervisors and clouds, and micro-virtualized endpoints, can both secure themselves and ensure that there is never any need to trust a detector.

As it turns out, in the context of resilient, self-remediating endpoints, it is possible to eliminate false positives, identifying actual attacks with uncanny precision – in other words, to deliver the needle to the security team without the haystack.
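
A conceptual sketch of why isolation changes the alerting math – this is not Bromium’s implementation or API, and every name below is hypothetical:

from dataclasses import dataclass, field

@dataclass
class MicroVM:
    task: str
    events: list = field(default_factory=list)   # behavior observed inside the VM

    def unexpected(self, allowed: set) -> list:
        """Anything outside the task's narrow allowed behavior is, by
        construction, a real attack rather than noise from the host."""
        return [e for e in self.events if e not in allowed]

# Open a downloaded document in its own throwaway micro-VM.
vm = MicroVM(task="open untrusted.pdf")
vm.events = ["read untrusted.pdf", "spawn powershell.exe"]   # observed behavior
print(vm.unexpected(allowed={"read untrusted.pdf"}))
# ['spawn powershell.exe'] – a needle, delivered without the haystack.
# The VM is discarded when the task ends, so any compromise dies with it.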

[Read an opposing view favoring detection over prevention by Josh Goldfarb in Detection: A Balanced Approach For Mitigating Risk.]

Simon Crosby is co-founder and CTO at Bromium. He was founder and CTO of XenSource prior to the acquisition of XenSource by Citrix, and then served as CTO of the Virtualization & Management Division at Citrix. Previously, Simon was a principal engineer at Intel where he led ...
Comments

KevinF351 (Apprentice), 8/6/2015 8:35:57 AM
Good apart from the advertising at the end
Some great ideas and commentary, shame it ended with such a blatant plug for his company's solution!

RyanSepe (Ninja), 7/31/2015 10:53:06 AM
Re: Superscript?
Yes agreed. Slight oversight. More people than atoms doesn't make sense.

B_SeeMore (Apprentice), 7/23/2015 9:59:54 AM
Superscript?
Your intimidating statistic in the "atoms in the universe" quote is slightly less intimidating without the superscript to mark the exponents (280 atoms in the universe vs 2^80). ;P

suhasuseless (Apprentice), 7/22/2015 11:22:43 AM
good post
cool article..really cool