
Operational Security

8/29/2017 07:16 PM
Curtis Franklin

Automation Deserves Skepticism

Automation might be the next great tech wave, but before we ride it, let's take some time to consider the risks.

"Garbage in, garbage out" is a maxim nearly as old as computers themselves. As automation becomes a greater factor in security, is it possible that we need to add "garbage in, security out" to the list of variants?

From its first recorded use in 1963, "garbage in, garbage out" (GIGO) has been a critical reminder that processing power is only as useful as the data that goes into the process. The best algorithms and programs will return useless information if they're fed bad data.
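
To make that concrete, here's a deliberately simple, entirely hypothetical sketch in Python: the detection logic below is statistically reasonable, but feed it a baseline polluted by a single bad sensor reading and the threshold it produces is useless.

    # Hypothetical illustration: a sound algorithm fed bad data
    import statistics

    def alert_threshold(baseline):
        # Flag anything more than three standard deviations above the mean
        return statistics.mean(baseline) + 3 * statistics.stdev(baseline)

    clean = [40, 42, 38, 45, 41, 39, 44]    # representative logins per hour
    dirty = [40, 42, 38, 45, 9000, 39, 44]  # same data, one garbage reading

    print(alert_threshold(clean))  # ~49: a real spike would trip this
    print(alert_threshold(dirty))  # ~11,500: almost nothing ever will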

Bad data and the results that follow are hazardous enough when humans read the information and perform additional analysis before acting; humans can (though they often don't) serve as quality control agents for the process before things get wildly out of hand. In an automated system, though, the human QC agent is out of the loop, and bad data can lead very quickly to bad action.
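
If you do automate, one way to keep that QC agent in the loop is to gate the highest-impact actions behind human approval and let only low-stakes responses run unattended. Here's a minimal sketch, with invented action names and an approval callback standing in for whatever review channel a SOC actually uses:

    # Hypothetical sketch: human approval gate for high-impact automated actions
    RISKY_ACTIONS = {"block_subnet", "disable_account", "quarantine_host"}

    def respond(alert, action, confidence, approve):
        # 'approve' stands in for a real review channel (ticket, chat, console)
        if action in RISKY_ACTIONS or confidence < 0.9:
            if not approve(alert, action):
                return "held for human review"
        return "executed: " + action

    # Usage: with no approver on duty, the risky action is held, not taken
    print(respond("login burst from new ASN", "disable_account", 0.95,
                  lambda alert, action: False))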

When it comes to security, automation is seen by many as the only rational path to meeting future needs. The reasons are fairly straightforward: the number of attacks is going up, and the volume of data in each attack is going up with it. Add to that the rapid environmental changes that flow from virtualization, cloud computing and hybrid architectures, and you arrive at a situation where humans are simply too slow to keep up with all the activity.

The problem with relying on automation for enterprise security is that it means relying on massive amounts of data and complex algorithms to protect networks, compute assets and data. We rely on similar data sets and algorithms for many enterprise functions, but there is reason to be cautious when placing safety, economic health and corporate reputation in the hands of automated systems.

About a week ago, TED published a talk by mathematician Cathy O'Neil. O'Neil is a frequent columnist for news organizations like Bloomberg and is known for her skeptical view of the way many organizations rely on data (especially big data) and algorithms. The title of her new book, Weapons of Math Destruction, says a lot about her attitude toward these tools.

Whether you agree with O'Neil or not, one of her major points is indisputable: if you're going to put your trust in an algorithm, you should fully understand the algorithm and thoroughly test the software that implements it. Next, you must ensure that the data feeding the algorithm is meaningful and accurate. This is especially important when using big data as the foundation of security operations, because it's entirely too easy to collect data that represents noise more than information.
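
What "meaningful and accurate" looks like will vary by shop, but here's a minimal sketch of the kind of sanity checks worth running before telemetry reaches an automated pipeline. The field names and thresholds are invented for illustration:

    # Hypothetical pre-ingestion sanity checks for security telemetry
    import time

    def validate_batch(events, max_age_s=3600):
        now = time.time()
        problems = []
        if not events:
            problems.append("empty batch")
        for field in ("timestamp", "source_ip", "event_type"):
            if any(field not in e for e in events):
                problems.append("missing field: " + field)
        if events and all(now - e.get("timestamp", 0) > max_age_s for e in events):
            problems.append("every event is stale")
        if len(events) > 1 and len({e.get("source_ip") for e in events}) == 1:
            problems.append("single source -- possible stuck collector")
        return problems  # an empty list means the batch passed

    batch = [
        {"timestamp": time.time(), "source_ip": "10.0.0.5", "event_type": "login"},
        {"timestamp": time.time(), "source_ip": "10.0.0.9", "event_type": "login"},
    ]
    print(validate_batch(batch) or "batch looks sane")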



There's no reason to avoid automation completely, but like any new application of technology, it must be implemented with caution and care -- qualities that may or may not be abundant when cyber attacks are occurring all around you. Be careful out there -- whether fully automated or not.

— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.
