Dark Reading is part of the Informa Tech Division of Informa PLC



4/2/2020 04:00 PM

Prioritizing High-Risk Assets: A 4-Step Approach to Mitigating Insider Threats

Sound insider threat detection programs combine contextual data and a thorough knowledge of employee roles and behaviors to pinpoint the biggest risks.

How does a company with 25,000 employees and a four-person insider threat team detect and mitigate its insider threats? Or, to put the math more simply, how does one analyst continuously monitor and run cases on 6,250 employees? The short answer is: They can't.

Chief security officers and chief information security officers are challenged by tightening budgets, staffing shortages, and increasingly stringent insider threat program requirements from government customers, even as they face board and/or shareholder pressure to prevent threats to the organization's personnel, finances, systems, data, and reputation.

What's the solution? The only logical answer: laser-focus on the highest-risk employees — that is, those in positions of trust who are most likely to perpetrate fraud, information disclosure, workplace violence, espionage, sabotage, or other adverse events.

I recommend the following four-step approach to identifying and deterring high-risk insiders.

Step 1. Use all available data to establish context — early.
Context is critical to the analysis process. When analysts see an alert, or group of alerts, they ask five questions:

  1. Who? What is this person's role, and what are they working on? Are they a user with privileged access? Have there been past security incidents?
  2. What? What device is the person using? Was company IP or customer data involved?
  3. Where? What is their physical location (office, VPN, traveling, coffee shop)?
  4. When? Sunday afternoon, or after hours during the workweek?
  5. Why? Is the activity work-related and within the scope of their role and project? Has the person done it before? Do others with similar roles do this?

User and entity behavior analytics (UEBA) tools can provide some context such as name, title, start date, status, department, location, manager, and watchlists, which may indicate access levels or high-risk activity. However, these attributes typically are used to trigger elevated risk scores only when specific technical activity occurs.

Other contextual data that companies should consider obtaining are onboarding records, work history, work patterns, travel and expense records, badge and printer logs, performance ratings, and training records.

Most importantly, contextual data should be available at the beginning of the analytical process so high-risk users can be identified straight away. Then all subsequent analytical activity can be focused on them, rather than on the never-ending stream of alerts concerning low-risk insiders.
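As a rough sketch of this enrichment step, the following shows context being attached to an alert before triage. The user records, field names, and priority rule are all hypothetical; a real program would pull this data from HRIS, badge, UEBA, and ticketing systems.

```python
# Sketch: enrich a raw alert with user context before analysis (Step 1).
# All users, fields, and the priority rule are illustrative assumptions.

USER_CONTEXT = {
    "jdoe": {
        "title": "Database Administrator",
        "department": "IT",
        "privileged_access": True,
        "prior_incidents": 2,
        "watchlists": ["departing_employees"],
    },
    "asmith": {
        "title": "Marketing Specialist",
        "department": "Marketing",
        "privileged_access": False,
        "prior_incidents": 0,
        "watchlists": [],
    },
}

def enrich_alert(alert: dict) -> dict:
    """Attach who/what/where/when context so high-risk users surface first."""
    context = USER_CONTEXT.get(alert["user"], {})
    enriched = {**alert, "context": context}
    # Flag the alerts worth an analyst's time up front, rather than
    # triaging the full alert stream in arrival order.
    enriched["priority"] = (
        "high"
        if context.get("privileged_access") or context.get("watchlists")
        else "low"
    )
    return enriched

alert = {"user": "jdoe", "event": "bulk_file_download", "time": "Sun 14:05"}
print(enrich_alert(alert)["priority"])  # prints "high"
```

The point is ordering: context is applied before any analysis happens, so the handful of alerts tied to high-risk users rise to the top immediately.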

Step 2. Identify high-risk insiders based on access and roles.
Based on their levels of access and their roles, broad groups of insiders can be classed as potentially high-risk (executives, enterprise administrators, and database administrators) while others remain low-risk (recruiters, marketing employees, and communications staff).

Risk levels also can vary among employees with similar access and roles. Consider that within the finance department there is a small group of employees (Group A) that is directly engaged in compiling consolidated financial reports. Meanwhile, Group B has limited access as it prepares isolated subsets of information for the reports. Group A clearly poses greater risk of illegally disclosing information than Group B. There may even be an administrative assistant outside of either group who has access to the reports before publication in order to print them, elevating their risk level above others in the same role.
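A minimal sketch of this tiering logic follows, with the role names, the access label for pre-publication financial reports, and the tier names all invented for illustration:

```python
# Sketch: baseline risk tiers from role and access level (Step 2).
# Role names, access labels, and tiers are illustrative assumptions.

HIGH_RISK_ROLES = {"executive", "enterprise_admin", "database_admin"}

def baseline_tier(role: str, access: set) -> str:
    """Assign a starting tier; individual access can override the role."""
    if role in HIGH_RISK_ROLES:
        return "high"
    # An otherwise low-risk role (e.g., an administrative assistant) is
    # elevated if it holds access to sensitive material such as
    # pre-publication consolidated financial reports.
    if "consolidated_financials" in access:
        return "elevated"
    return "low"

print(baseline_tier("recruiter", set()))                              # prints "low"
print(baseline_tier("admin_assistant", {"consolidated_financials"}))  # prints "elevated"
```

The override branch captures the article's point: risk is not purely a function of job title, because access can vary within a role.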

Step 3. Gather and evaluate behavioral indicators.
Malicious insiders often develop tactics and techniques to overcome the limitations of their position in an organization and their level of access. Edward Snowden is an example. But even Snowden exhibited observable indicators that, in hindsight and taken together, could (and should) have raised alarms.

The following behaviors may indicate increased potential for insider risk:

  • Security incident history
  • Behavioral problems
  • Substantiated HR/ethics cases
  • Attendance issues
  • Unusual leave patterns
  • Foreign travel/contacts
  • Unusual hours
  • Reports of violence outside work
  • Alcohol/illegal drug abuse
  • Threats against the company, a manager, a co-worker, or a customer
  • Feelings of being under-appreciated or underpaid
  • Changes in demeanor
  • Refusal of work assignments
  • Disengagement from the team
  • Confrontations with co-workers or managers
  • Negative social media posts
  • Financial issues
  • Arrests
  • Policy violations
  • Data exfiltration
  • Access to high-risk websites
  • Sudden deletion of files
  • Unauthorized access attempts

The gathering and use of evidence for these behaviors can be a very delicate matter for some companies, if not completely off-limits. That said, the goal is to provide critical behavioral indicators that inform the risk model described in Step 4.
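One simple way to turn such indicators into model inputs is a weighted tally, where each observed behavior contributes evidence toward the risk model in Step 4. The indicator names and weights below are illustrative assumptions, not vetted values; real weights would come from the subject-matter experts described later.

```python
# Sketch: behavioral indicators as weighted evidence (Step 3).
# Indicator labels and weights are illustrative assumptions only.

INDICATOR_WEIGHTS = {
    "security_incident_history": 3,
    "substantiated_hr_case": 4,
    "unusual_hours": 1,
    "data_exfiltration": 5,
    "unauthorized_access_attempt": 4,
}

def evidence_score(observed: list) -> int:
    """Sum the weights of observed indicators; unknown labels are ignored."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)

print(evidence_score(["unusual_hours", "data_exfiltration"]))  # prints 6
```

A flat sum like this is only a starting point; the probabilistic model in Step 4 combines evidence in a more principled way.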

Step 4. Develop a model for risk scoring based on context and behaviors.
User contextual data from Step 1, insider roles and access levels identified in Step 2, and the behavioral indicators gathered in Step 3 all need to be evaluated in a model that is purpose-built to assess and prioritize insider risk.

I have found that the most effective analytic approach is to employ a probabilistic model developed in collaboration with diverse subject-matter experts to identify high-risk individuals.

The model is essentially a risk baseline that represents the combined knowledge of subject matter experts in security, psychology, fraud, counterintelligence, IT network activity, etc. Each model node represents behaviors and stressors that, when broken down into their most basic elements, are measurable in data that can be applied as evidence to the model.

The model's outputs are risk scores for each individual, continuously updated as new data becomes available. It is vital that the model also provide transparency through its entire chain of reasoning, and that personally identifiable information be masked so that individual privacy is protected.

With the right types of data — not just from network monitoring systems but also including the behavioral indicators and open source data sources listed above — the highest-risk insiders will become quickly apparent.
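The approach described in this step can be sketched as a naive-Bayes-style combination of likelihood ratios, with a per-user reasoning chain for transparency and a hashed identifier to mask PII. The base rate and likelihood ratios below are invented for illustration; in practice they would be elicited from the diverse subject-matter experts the model depends on.

```python
import hashlib
import math

# Sketch: a simple probabilistic insider-risk model (Step 4).
# The prior and likelihood ratios are illustrative assumptions, not
# empirically derived values.

PRIOR = 0.01  # assumed base rate of high-risk insiders
# How much more likely each indicator is among high-risk insiders.
LIKELIHOOD_RATIOS = {
    "privileged_access": 2.0,
    "substantiated_hr_case": 4.0,
    "data_exfiltration": 8.0,
    "unusual_hours": 1.5,
}

def risk_score(user_id: str, indicators: list) -> dict:
    """Return a 0-1 risk score plus its reasoning chain, with PII masked."""
    log_odds = math.log(PRIOR / (1 - PRIOR))
    chain = []  # transparency: record every factor that moved the score
    for ind in indicators:
        lr = LIKELIHOOD_RATIOS.get(ind)
        if lr:
            log_odds += math.log(lr)
            chain.append((ind, lr))
    score = 1 / (1 + math.exp(-log_odds))
    # Mask the identifier so reviewers see a score, not a person.
    masked = "user-" + hashlib.sha256(user_id.encode()).hexdigest()[:8]
    return {"user": masked, "score": round(score, 3), "evidence": chain}

result = risk_score("jdoe", ["privileged_access", "data_exfiltration"])
print(result["score"])  # prints 0.139
```

Scores update continuously as new indicators arrive, and the `evidence` list preserves the chain of reasoning so an analyst can see exactly why a user's score rose.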

Conclusion
Any sound insider threat mitigation program requires a combination of policies, processes, and technologies — and the right leadership to communicate and drive program implementation across the enterprise.

Even with all the right pieces in place, however, the program should not be only about hunting down bad actors. On the contrary, once high-risk users are identified — and assuming they haven't done anything illegal — companies should proactively engage with them, working collaboratively to reduce their risk and get them back to using their full talents and energies.

After all, there's a reason they were entrusted with insider access in the first place.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's featured story: "How to Evict Attackers Living Off Your Land."

David A. Sanders is Director of Insider Threat Operations at Haystax, a business unit of Fishtech Group, where he is responsible for deploying the Haystax Insider Threat Mitigation Suite to the company's enterprise and public-sector clients and supporting the optimization of ... View Full Bio
 
