
Threat Intelligence

6/10/2019 06:20 PM

Cognitive Bias Can Hamper Security Decisions

A new report sheds light on how human cognitive biases affect cybersecurity decisions and business outcomes.

It's a scenario commonly seen in today's businesses: executives read headlines about a major breach by foreign adversaries out to pilfer customers' Social Security numbers and passwords. They worry about the same happening to them and strategize accordingly, but reading further, they learn the breach hit a different industry, at a company of a different size, after different data.

The incident, irrelevant to their business, has distracted leaders from the threats that actually matter to them.

It's an example of availability bias, one of many cognitive biases influencing how security and business teams make choices that keep an organization running. Availability bias describes how the frequency with which people receive information affects decisions. As nation-state attacks make more headlines, they become a greater priority among people who read about them.

"At the organizational level, we have major decision-makers deciding how much to spend on cybersecurity solutions," explains Dr. Margaret Cunningham, principal research scientist at Forcepoint and author of "Thinking about Thinking: Exploring Bias in Cybersecurity with Insights from Cognitive Science." These execs may be aware of more frequent threats like phishing, but the real problem is they're interpreting risk based on what's available in today's news cycle.

Understanding these biases can shed light on how employees make decisions, Cunningham continues. An academic who describes herself as "obsessed with the type of mistakes people make," she noticed human error was a common topic in cybersecurity. What human error actually is, however, and the many types of mistakes people make, were not well defined.

"There's no way we can be robotic about human perception," she says. "As security specialists, we need to mitigate these risks by understanding them better." Bias, or the tendency for people to favor a person, group, or decision over another, is swayed by past and present experiences. It's common in situations where the correct choice isn't always clear, she explains.

Cybersecurity: An Emotional Roller Coaster

Cognitive bias isn't specific to cybersecurity; it's universal. Many factors that influence human behavior fly under our radar, especially in stressful situations that security pros often face.

"Bias is fluid," says Cunningham. "It's shaped by past experiences, how tired we are, how emotional we are, and honestly, being in tech is highly emotional." She points to the "warlike" language often used in cybersecurity: attack, breach, firewall. Security practitioners are especially prone to bias because they work in a field that's highly emotional and often abstract.

Six types of biases can skew security strategies: availability bias is one, but teams should also be aware of aggregate bias, which is when people infer something about an individual based on data that describes broader populations. For example, Cunningham reports, older people are often considered riskier users because of a perceived lack of familiarity with new technologies.

Confirmation bias is when someone seeks to confirm their beliefs by exclusively searching for information that supports their hunch while excluding opposing data. This is especially common in security, she says. It's typical among analysts who enter an investigation digging for an answer they really want. "[This] creates avenues for ignoring the whole picture because we're looking for what we know," says Cunningham, instead of considering other possibilities.

Another is anchoring bias, which occurs when a person locks onto a specific feature, or set of features, of data early in the decision-making process. If a value is introduced early in a sales pitch, for example, all the numbers that follow will be in the same ballpark. This plays into cybersecurity when, for example, an analyst clings to a certain value early in an investigation and fails to move away from it – even when the solution to the problem requires them to do so.

The framing effect, in which the way a choice is worded shapes the decision, often manipulates those who buy security tools. For example, a vendor may say "one in five companies never got their data back after a ransomware attack," placing the focus on the one organization that lost data instead of the four that didn't. This strategy can lead security admins to buy pricey tools for low-probability risks.
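The two framings above describe the same underlying statistic, and a rational cost estimate comes out identical under either wording. A minimal sketch (the one-in-five figure is the article's illustrative number; the dollar cost is a hypothetical placeholder) makes the equivalence explicit:

```python
from fractions import Fraction

def expected_loss(p_loss, cost_if_lost):
    """Expected cost of an incident given the probability of data loss."""
    return p_loss * cost_if_lost

# Negative framing: "one in five companies never got their data back."
p_negative = Fraction(1, 5)
# Positive framing: "four in five companies recovered their data."
p_positive = 1 - Fraction(4, 5)

COST = 100_000  # hypothetical cost of permanent data loss, in dollars

assert p_negative == p_positive  # same statistic, different wording
print(expected_loss(p_negative, COST))  # 20000 either way
```

Using exact fractions avoids a floating-point wrinkle (`1 - 0.8 != 0.2` in binary floats) that would obscure the point: the numbers are identical, and only the emphasis changes.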

Finally, there's the fundamental attribution error, which is the tendency to view people's mistakes as part of their identities rather than contextual or environmental factors. It's seen throughout security, where analysts or developers blame users for creating risks because they're less capable. There's also a self-serving bias here, as the users often blame IT.

Breaking Biases

Security pros cannot "cure" biases, says Cunningham, just as they can't cure people making mistakes.

"What we can do is become better acquainted with the types of decisions, or decision points, that are frequently and predictably impacted by bias," she explains. If people are aware of the potential for bias in a situation, they can use this awareness to avoid it and potentially make different, more informed choices as a result. After all, she points out, attackers understand how to manipulate human emotion, and they're using this knowledge to their advantage.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...
 

Comments
MelBrandle, User Rank: Moderator
6/13/2019 | 2:01:55 AM
Need more education
It seems that it is not just bias that can hamper security decisions but also misconceptions held by the executives themselves. They need to be made aware of what security breaches truly are and how they can affect a company regardless of its line of business. Once the executives have gained more knowledge, the bias would eventually fade and would not cloud their judgment.
maxim_dsouza, User Rank: Apprentice
5/3/2020 | 10:26:42 AM
Neatly summarized
As you rightly said, the knowledge of our mental flaws can help us make better security decisions. In fact, awareness of cognitive biases helps in all aspects of life. 