Operations

6/20/2018 10:00 AM
Kelly Sheridan

The Best and Worst Tasks for Security Automation

As with all new tech, there are good times and bad times to use it. Security experts share which tasks to prioritize for automation.

(Image: Zapp2Photo via Shutterstock)

Automation may change the nature of security jobs, but it won't be taking them away anytime soon. While it works well as a supplementary tool for some tasks, others are still best left entirely to people.

Where you decide to automate depends on where the benefits outweigh the risks, says Rob Boyce, managing director at Accenture Security. And the level of risk you encounter depends on how you approach the process and which tasks you choose to automate.

While automation tools have come a long way, there's still room for improvement, he adds. Decisions remain about how it should evolve and where it fits in the business. In its current form, the tech works well for simple tasks but hasn't advanced to address complex ones.

In addition, the machine-learning algorithms powering automation are still imperfect. "Machine learning isn't always a yes or a no," says Corey Nachreiner, CTO at WatchGuard Technologies. Oftentimes people still need to analyze results to determine what they mean.
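
As a rough illustration of that "not a yes or a no" point, here is a minimal sketch of confidence-based triage; the thresholds, the event format, and the idea of routing mid-confidence detections to an analyst are illustrative assumptions, not a description of any particular product:

```python
# Hypothetical illustration: routing detections by model confidence instead of
# treating machine-learning output as a simple yes/no. Thresholds are made up.

AUTO_BLOCK_THRESHOLD = 0.95   # confident enough to act automatically
REVIEW_THRESHOLD = 0.60       # ambiguous scores go to a human analyst


def triage(event, score):
    """Decide what to do with a detection given the model's confidence score."""
    if score >= AUTO_BLOCK_THRESHOLD:
        return ("auto_block", event)          # clear-cut: automation handles it
    if score >= REVIEW_THRESHOLD:
        return ("analyst_review", event)      # "maybe": a person interprets it
    return ("log_only", event)                # likely benign: keep for context


if __name__ == "__main__":
    print(triage({"src": "10.0.0.5", "type": "beaconing"}, 0.72))
    # -> ('analyst_review', ...): the model alone can't say yes or no
```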

Automation, Boyce says, requires a thoughtful approach. Here, Boyce and Nachreiner weigh in on which security tasks can be prioritized for automation, and those where it doesn't quite work. Where have you implemented automation so far, and where have you found it most effective? Feel free to share your story in the comments.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...

Comments
EdwardThirlwall
User Rank: Apprentice
7/16/2018 | 3:33:18 AM
Re: Don't Automate: Penetration Testing (this is where I disagree)
To ensure we have a good security automation setup in place, it is best to follow these guidelines from the experts. We can never be too sure when it comes to security, and it would not pay to suffer breaches down the road just because of a failure to keep up with the system. We need to be on our guard at all times, even with a good system already implemented. It is better to be extra careful now than to pay the price when problems arise in the future.
No SOPA
User Rank: Ninja
7/8/2018 | 9:53:21 PM
Re: Don't Automate: Penetration Testing (this is where I disagree)
"I think you missed this point in the overall online discussion. I do do believe that human intervention is needed but at some point, the synergistic nature of machine and man will help improve the overall process. You are still tied into in how things done now, what I am referencing is making a paradigm shift (this will take time and effort on both parts) to create an environment where we teach the machine to learn and address daily everyday issues (the prcoess is in its infancy stages, where there is a question about how to proceed, then the human should update or provide instructions as to how we need to handle the issue)." 

OK, I'm on the same page as you now, Todd. I'm still not sure we can ever get there, but I'd love to see some of your energy make its way into a white paper! I believe there is a need for humans in the pen-testing life-cycle that can never, and I mean never, be replaced by automation. A lot of your ideas as to what can someday be taken over by automation don't jibe for me, but I love to see attempts at it nonetheless. I have a certain belief in the place AI and machine learning have in tech, but certain areas of "hacking" as we humans know it remain a human exercise. Anything AI and machine learning accomplish in terms of pen-testing is not going to be the same; humans will continue to do it better, and do it in ways that predictive models and "intelligent" machines can't model. My uneducated opinion only, straight from the trenches.
tdsan
User Rank: Apprentice
7/8/2018 | 7:38:06 PM
Re: Don't Automate: Penetration Testing (this is where I disagree)
In addition, if it was a false positive, it would be up to us to update the process so as to address the anomaly in a more efficient manner (the end-user could identify whether this was an anomaly that needed to be reviewed expeditiously or whether the methods of remediation were correct).

I think you missed this point in the overall online discussion. I do believe that human intervention is needed, but at some point the synergistic nature of machine and man will help improve the overall process. You are still tied into how things are done now; what I am referencing is making a paradigm shift (this will take time and effort on both parts) to create an environment where we teach the machine to learn and address everyday issues (the process is in its infancy; where there is a question about how to proceed, the human should update or provide instructions as to how we need to handle the issue). So there is teaching that needs to take place. We need to look to the future and stop relying solely on current methods (again, it is going to take time), because the methods we are using are not working across the board, or at least that is what I have found (when a group wants something badly enough, there is nothing we can do to stop it; it is inevitable). There is, and should be, a better way.

In addition, all the steps you listed below can be automated. Just as I listed the steps for thwarting a potential attack, why can't the machine do it (again, I am not saying now, but in the near future)? The machine should baseline the environment and identify normal processing levels (there are times when intervention will be needed). If it can identify something that is considered an anomaly, report on that anomaly, identify the systems or applications being affected (even for a zero-day), test a possible solution in an isolated environment, and then provide that solution to the end-user, that would pay huge dividends (because the machine has learned and created a baseline map of the environment). It would then be easy to identify an attack because something new has been introduced to your environment, the same way the body identifies a problem: it communicates with all endpoints (nerves) and reports back to a centralized center (brain), and then sentries (white blood cells) are sent to analyze the problem, hence the term neural network.
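
A minimal sketch of that baseline-then-flag idea, assuming a single made-up metric (outbound traffic per hour) and a simple standard-deviation test; real tooling would use far richer features:

```python
# Hypothetical sketch of "baseline first, then flag what's new."
# One metric only; production systems baseline many signals at once.

from statistics import mean, stdev


def build_baseline(samples):
    """Learn what 'normal' looks like from historical observations."""
    return {"mean": mean(samples), "stdev": stdev(samples)}


def is_anomaly(baseline, value, threshold=3.0):
    """Report anything more than `threshold` standard deviations from normal."""
    if baseline["stdev"] == 0:
        return value != baseline["mean"]
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > threshold


if __name__ == "__main__":
    outbound_mb_per_hour = [12, 15, 11, 14, 13, 12, 16, 15]   # learned baseline
    baseline = build_baseline(outbound_mb_per_hour)
    print(is_anomaly(baseline, 14))    # False: within normal processing levels
    print(is_anomaly(baseline, 480))   # True: something new was introduced
```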

T
No SOPA
User Rank: Ninja
7/8/2018 | 3:36:02 AM
Re: Don't Automate: Penetration Testing (this is where I disagree)
While I appreciate your argument for why to automate pen-testing, Todd, I do have to say that it must still be partnered with manual pen-testing expertise. There is sometimes a perception, held by folks who either haven't done the grunt work or are looking for a reason to let go of the labor associated with it, of a computer humming away while a variety of automated intrusion functions poke at an app or website. However, a good example of why manual pen-testing will always be needed is 0-day vulnerabilities. While you could potentially have an automated pen-testing system that is tied into the many 0-day databases out there and monitors for new vulnerabilities, the level of sophistication and adaptive AI required to write new code and attempt to exploit those 0-days is something few companies could afford in a software package, let alone something an automation company could afford to develop for its automated pen-testing suite. You are talking Fortune 500 and military-grade software at a minimum, and even with today's tech, such a system is not going to be generally available and operating at a tolerable fault level for some time.


I feel much of what you note as the benefits of going with automation for pen-testing is either an approach to designing a future system, or a current benefit of pen-testing in areas of low-change intrusion: taking the results of manual pen-testing and building an automation suite to test areas that are not changing in ways that would require new code to be written, or some level of human analysis and creative thinking. You could almost see this as a sort of regression test suite: once the flaw is identified in one code base, you automate testing against future code bases to validate the flaw stays closed, or you turn the tests on other, similar code bases you expect to have the flaw in order to demonstrate it. It's hard for automated software to test resolutions when it first has to read and understand a 0-day report, write code to test the attack, and then write code to test a resolution, or patch, that fixes the flaw.
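
A minimal sketch of that regression-suite idea, assuming a hypothetical resolve_user_path() helper that once had a path-traversal flaw; the automated check just replays the known exploit input against each new build:

```python
# Hypothetical regression-style check: once a flaw has been found manually,
# replay the known exploit input against each new build to confirm it stays
# closed. resolve_user_path() is a placeholder for whatever code had the flaw.

import os


def resolve_user_path(base_dir, user_path):
    """Toy example of the patched code under test: reject path traversal."""
    full = os.path.realpath(os.path.join(base_dir, user_path))
    if not full.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("path traversal rejected")
    return full


def known_traversal_payload_stays_fixed():
    """Replay the payload the manual pen test originally found."""
    try:
        resolve_user_path("/var/app/uploads", "../../etc/passwd")
    except ValueError:
        return True     # flaw is still closed
    return False        # regression: the old exploit works again


if __name__ == "__main__":
    print("flaw still closed:", known_traversal_payload_stays_fixed())
```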

tdsan
User Rank: Apprentice
7/7/2018 | 4:16:40 PM
Don't Automate: Penetration Testing (this is where I disagree)
Pen testing should be automated because the purpose of a pen test is to identify the vulnerability and make it known to the end-user so some proactive measure can be taken (remediation). The problem I have with the comment is that everyone is aware there is no silver bullet to address all areas of security, but I do think automation would help reduce the threat matrix because systems would start learning from one another. Yes, I do believe that human intervention is needed (only in certain cases, one-offs) as part of the process, but this can be improved over time as the machine comes to understand that certain processes we kick off are not intended to cause harm; this is the learning process. As noted, machines will miss certain things, but that is where the learning comes into play, along with system updates. Remember, we (humans and machines) will miss some things, but by creating a mature model where systems can learn in the cybersecurity arena, we can reduce the number of false positives to a minimum.

To address Boyce's or the moderator's concern, we have something called "continuous monitoring," or SIEM, where the various sessions and potential threats are steadily being monitored and analyzed. The problem is what happens when that person gets tired or is not at the office (late night, when a skeleton crew is not as talented as the group that is there during the day). There needs to be some intelligence in the decision-making process where threats are prioritized based on their severity. In addition, the system needs to be able to adjust on the fly (a sliding scale): if a threat is not as nefarious as the prior or following threat, the system needs to adjust that sliding scale and move the worse threat up the ladder of discernment and priority. Then, from that classification, the system needs to be able to pull data from external sources where potential resolutions could be matched against the potential threat. Finally, the system needs to associate each resolution with a percentage based on its expected accuracy in resolving the issue (a 100%, 80%, 70% resolution scale), where the system is able to learn by recreating the vulnerability in a virtual mock environment.
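
A minimal sketch of that sliding-scale prioritization, with made-up scoring fields and weights; every incoming alert is rescored against what is already queued so the worst threat surfaces first:

```python
# Hypothetical sketch of sliding-scale alert prioritization. The fields and
# weights are illustrative only; real SIEMs score on many more dimensions.

import heapq


def score(alert):
    """Combine severity, asset criticality, and confidence into one priority."""
    return (alert["severity"] * 0.5
            + alert["asset_criticality"] * 0.3
            + alert["confidence"] * 0.2)


class AlertQueue:
    def __init__(self):
        self._heap = []

    def push(self, alert):
        # Negate the score: heapq is a min-heap, we want highest priority first.
        heapq.heappush(self._heap, (-score(alert), alert["name"], alert))

    def next_alert(self):
        return heapq.heappop(self._heap)[2] if self._heap else None


if __name__ == "__main__":
    q = AlertQueue()
    q.push({"name": "failed logins", "severity": 4, "asset_criticality": 3, "confidence": 6})
    q.push({"name": "possible exfil", "severity": 9, "asset_criticality": 9, "confidence": 7})
    print(q.next_alert()["name"])   # 'possible exfil' slides ahead of the older alert
```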

The solution could be validated in seconds by the machine-learning process, whereby the system actually learns how to mitigate the problem based on external repositories or on tactics it has learned on its own by breaking down, analyzing, resolving, and then reporting on the threat it found to be an anomaly. For example, if a legitimate file was written to by a variant that injected code into the registry, the kernel, or a system file, the system could identify what was written to the file and remove the injected code, or replace the system file with a baseline copy validated as having the correct MD5 checksum. This would be perceived as a win-win for all parties involved (except for the actor who was trying to access the environment in the first place).
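
A minimal sketch of that checksum-based repair step, with hypothetical paths and a placeholder baseline store; the comment mentions MD5, although modern tooling generally prefers SHA-256:

```python
# Hypothetical sketch: verify a system file against a recorded baseline hash
# and restore a known-good copy if it has been tampered with. Paths and the
# baseline digest are placeholders.

import hashlib
import shutil


def file_digest(path, algo="md5"):
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_and_restore(path, baseline_digest, known_good_copy):
    """Return True if the file was clean, False if it was restored from baseline."""
    if file_digest(path) == baseline_digest:
        return True                       # file matches the validated baseline
    shutil.copy2(known_good_copy, path)   # replace the tampered file
    return False


if __name__ == "__main__":
    # Illustrative call only; these paths and the digest are hypothetical.
    # verify_and_restore("/etc/hosts", "9f2c...", "/var/baselines/hosts")
    pass
```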

It would take a human days to figure out the type of attack, what was changed, what was affected, and the remediation techniques, and then report on the incident, where the machine could take minutes or seconds (there will be some tweaking, but we could reduce not only the number of possible security vulnerabilities but system crashes and application failures as well).

If we were able to reduce both the attack vectors and the threat landscape, this would improve the overall process fourfold, because the machines would be able to point out the contention, identify the resolution, test the resolution, implement the resolution, and report on the solution. This would prove extremely beneficial, allowing the end-user to concentrate on other tasks. In addition, if it was a false positive, it would be up to us to update the process so as to address the anomaly in a more efficient manner (the end-user could identify whether this was an anomaly that needed to be reviewed expeditiously or whether the methods of remediation were correct). This process could be shared with other machines so that the machines learn from one another, as sketched below. Moreover, we could also develop a process where machines provide health information (security or functional processing levels); then we could improve every aspect of the computing process.
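
A minimal sketch of that false-positive feedback loop, with a made-up alert fingerprint and an export step standing in for sharing what one machine has learned with others:

```python
# Hypothetical sketch of a false-positive feedback loop: an analyst's verdict
# is recorded, matching future alerts are suppressed, and the learned
# suppressions can be exported so other machines benefit from them.

import json


class FeedbackStore:
    def __init__(self):
        self._suppressed = set()   # fingerprints analysts marked as benign

    @staticmethod
    def fingerprint(alert):
        return (alert["rule"], alert["process"], alert["destination"])

    def record_verdict(self, alert, is_false_positive):
        if is_false_positive:
            self._suppressed.add(self.fingerprint(alert))

    def should_alert(self, alert):
        return self.fingerprint(alert) not in self._suppressed

    def export(self):
        """Serialize learned suppressions for sharing with other machines."""
        return json.dumps(sorted(self._suppressed))


if __name__ == "__main__":
    store = FeedbackStore()
    alert = {"rule": "new-service", "process": "backup.exe", "destination": "10.0.0.9"}
    store.record_verdict(alert, is_false_positive=True)   # analyst feedback
    print(store.should_alert(alert))                      # False: learned, suppressed
```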

Todd