
Cloud | Commentary
Chris Calvert
4/3/2020 10:00 AM

Want to Improve Cloud Security? It Starts with Logging

Remedying the "garbage in, garbage out" problem requires an understanding of what is causing the problem in the first place.

When using event logs to monitor for security violations and incidents, the quality of output is determined by the quality of the input. Much of the logging being used is subpar, and there has been little industry incentive to fix it. This, in turn, is preventing true cloud security because cloud platform logs don't contain useful information.

It doesn't have to be this way. Remedying the "garbage in, garbage out" problem is possible, but it requires an understanding of what is causing the problem in the first place.

The Hidden Problem
Enterprises collect and log data from a variety of IT and security devices. Most software applications and systems produce log files: automatically generated, time-stamped records of recent system activity. Enterprises use this data to monitor, protect, understand, and manage their technology-enabled businesses. These data sources can provide visibility across many different technical perspectives: network, security, application, host-based, cloud, and contextual telemetries, to start. Within each of these categories are both infrastructure devices that produce logs and sensors focused on monitoring, whether for IT operations or security.
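To make this concrete, here is a minimal sketch, in Python, of turning one raw, syslog-style line into the kind of structured, time-stamped record described above. The sample line, pattern, and field names are illustrative assumptions, not any specific product's format:

    import re
    from datetime import datetime

    # Hypothetical raw line in a common syslog-like shape (not from any specific product).
    RAW = "2020-04-03T10:00:00Z host01 sshd[4211]: Failed password for root from 203.0.113.7"

    # Pattern for: timestamp, host, process[pid]: free-text message.
    PATTERN = re.compile(r"(?P<ts>\S+) (?P<host>\S+) (?P<proc>[\w-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)")

    def parse(line):
        """Turn one raw log line into a dict of structured, typed fields."""
        m = PATTERN.match(line)
        if m is None:
            return None  # unparseable lines are the "garbage in" this article describes
        event = m.groupdict()
        event["ts"] = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
        return event

    print(parse(RAW))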

This log data is the unassuming superhero of IT data: it may not look like much at first, but it carries real power. Not all logging is created equal, though. Much of the data is inconsistent, hard to understand, and focused only on system or application break/fix, with little information that could indicate something potentially malicious.

And if the logging you're using is poor, it will have a negative effect on all of your other security measures. While there are common logging formats and methods, there is little agreement or commonality in how the information they contain should be used for specific purposes. As a result, logs can be hard to leverage effectively for any purpose.

Compounding this, the law of inertia is at play in how logging currently works. Essentially, there's a pervasive attitude of "if it ain't broke, don't fix it," even though, in reality, the way logging is done actually is broken. Security teams are motivated toward change, but how much leverage does a five-person security team have on the likes of Microsoft to produce better logs for Active Directory or Office 365?

This doesn't mean change is impossible. Rather, the leverage for change starts with the procurement department. In other words, market change is driven by purchasing behavior. So it's up to us to economically influence improvements in log quality. This should become a point of competitive selection.

Best Practices and What to Ask of Your Vendor
So how do you ensure you're getting the most out of your logging? There are a few best practices to follow. These include:

  • Focus on the logs that do contain judgment about potentially malicious activity, or find ways to inject that judgment into the logs with context and security expertise (see the sketch after this list).
  • Only log from sources that matter, and collect the information that can effectively support detection and remediation.
  • Dial up your security-specific sensors to 11. Make them as loud as possible to give you the most comprehensive visibility, and demand that your vendors produce better security visibility into what's going on with their products.
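As a rough illustration of the first point, the sketch below continues the hypothetical Python example above and injects a coarse security judgment onto a parsed event. The rule, verdict values, and context labels are assumptions for illustration, not a prescribed method:

    # Hypothetical enrichment: attach judgment and context to a parsed event.
    def enrich(event):
        """Add a coarse security verdict plus context to a structured log event."""
        event["verdict"] = "benign"
        if event["proc"] == "sshd" and "Failed password" in event["msg"]:
            # A single failure is weak evidence; correlation across many events
            # would be needed to raise this to an incident.
            event["verdict"] = "suspicious"
        event["context"] = {"telemetry": "host", "rule": "auth-failure-v1"}  # assumed labels
        return event

The point of a step like this is that downstream tools receive events that already carry an opinion about maliciousness, rather than raw break/fix text.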

When it comes to assessing vendors for your logging, it's important to ask some key questions about their solutions and how they work before committing to one. They should be able to answer the following questions for you:

  • What are the most common attack vectors against your product?
  • How many malicious scenarios will your logs identify?
  • How hard is it to parse them so they're normalized with other similar technologies?
  • What's the most effective way to monitor your logging for malicious activity?
  • Do you have a center of excellence for security monitoring of your technology?
  • Which standard log formats are used for output?
  • How hard is it to introduce new logs based on new attack techniques? What's your update frequency?
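On the parsing and normalization questions in particular, normalization is what makes logs from different vendors comparable. Here is a minimal sketch that maps two invented vendor formats (one JSON, one key=value) onto a single common schema; the formats and field names are assumptions for illustration:

    import json

    def normalize_json(raw):
        """Map an assumed JSON-format vendor event onto a common schema."""
        src = json.loads(raw)
        return {"ts": src["timestamp"], "host": src["hostname"], "action": src["event"]}

    def normalize_kv(raw):
        """Map an assumed key=value vendor event onto the same schema."""
        kv = dict(pair.split("=", 1) for pair in raw.split())
        return {"ts": kv["time"], "host": kv["dev"], "action": kv["act"]}

    print(normalize_json('{"timestamp": "2020-04-03T10:00:00Z", "hostname": "fw01", "event": "deny"}'))
    print(normalize_kv("time=2020-04-03T10:00:01Z dev=fw02 act=allow"))

The harder a vendor's output is to map onto a schema like this, the more expensive its logs are to use, which is exactly why the question belongs in procurement.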

If a potential vendor can't or won't explain these things to you fully, it's time to keep searching for one that can and will.

Making Change Worthwhile
"Garbage in, garbage out" is clearly not a good way to approach a security monitoring strategy. It puts organizations at risk and sucks up too much budget and staff time to be sustainable or tenable. Rather, organizations must incentivize vendors to develop better logging capabilities by making it a purchasing sticking point. Use the best practices and questions above to get the logging capabilities your organization needs to keep your network safe.


Chris Calvert has over 30 years of experience in defensive information security: 14 years in the defense and intelligence community and 17 years in the commercial industry. He has worked on the Defense Department Joint Staff and held leadership positions in both large and ...
 

Comments
tdsan, User Rank: Ninja
5/11/2020 | 3:12:42 PM
Re: You hit it right on, good points
I do agree with this. We have SIEM devices to filter through mounds of this information, but I don't think it is enough, especially when those attacks are coming from around the world.

1) Too many logs are collected and there isn't AI correlation to make sense of it efficiently so it becomes noise.
—RyanSepe, Dark Reading User

2) Logging isn't set up in the right places so when an event does happen the security unit is blind to it.
—RyanSepe, Dark Reading User

I am in agreement on the second comment as well. There may not be a checklist in place (Puppet, Chef, or Ansible) to validate that a system is sending its logs to a datastore for some sort of prescriptive analytics (taking the data and doing something about it). As I mentioned earlier, it seems that processes are being skipped and not followed to the letter. That is why we need an automated process that checks network devices and verifies specific requirements before they are put on the network; a NAC device would help.

Todd
RyanSepe, User Rank: Ninja
4/30/2020 | 10:25:05 PM
Re: You hit it right on
Unfortunately, I think a lot of the time logging is used more as a compliance checkbox or to troubleshoot an issue, and less to support the security objective.

I've noticed two commonalities:

1) Too many logs are collected and there isn't AI correlation to make sense of it efficiently so it becomes noise.

or 

2) Logging isn't set up in the right places so when an event does happen the security unit is blind to it.
tdsan, User Rank: Ninja
4/30/2020 | 1:30:54 PM
You hit it right on
Much of the logging being used is subpar, and there has been little industry incentive to fix it.
  • I wouldn't say that. The industry has put together numerous tools like SIEM, NAC, and SELinux, and AWS CloudWatch works pretty nicely. It is not the tool but the amount of information being processed that is the problem.

And if the logging you're using is poor, it's going to have a negative effect on all of your other security measures.
  • Most applications produce log data, so not all of the data is going to be bad. I look at it from a functional standpoint: the device that ingests the information is going to be pulling it from various sources. There is a process called event correlation, where the system looks at multiple ingestion sources to make a determination (prescriptive analytics; I like prescriptive analytics because we are doing something with the data, but that is another conversation).
  • What are the most common attack vectors against your product?
    • Port 22/ssh
  • How many malicious scenarios will your logs identify?
    • Thousands
  • How hard is it to parse them so they're normalized with other similar technologies?
    • Not hard at all. We use Logwatch to view individual machines; SIEM at the edge, core, and access layers; and SELinux (which uses the /var/log/audit/audit.log file).
  • What's the most effective way to monitor your logging for malicious activity?
    • Use an event correlation (SIEM) device that is powerful enough to make decisions (prescriptive) while also being able to learn (it needs to be taught), but the device also needs to pull information from other network devices.
  • Do you have a center of excellence for security monitoring of your technology?
    • No, we use an assortment of third-party companies to do that and let them focus on it (e.g., Palo Alto, Akamai).
  • Which standard log formats are used for output?
    • Syslog
  • How hard is it to introduce new logs based on new attack techniques? What's your update frequency?
    • We use the information from various sources to make a determination, but the system is intelligent enough to do it for us (of course, human intervention is involved). Try UFW and SELinux for Linux, excellent tools at the server level, and Comodo for the server and client.
    • Update frequency: every time we add a virtual device, it is updated, daily.

Todd