Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

11/18/2019
02:00 PM
John McClurg
Commentary

Human Nature vs. AI: A False Dichotomy?

How the helping hand of artificial intelligence allows security teams to remain human while protecting themselves from their own humanity being used against them.

Nobel Prize-winning novelist Anatole France famously opined: "It is human nature to think wisely and act foolishly." As a species, we're innately designed with — as far as our awareness extends — the highest, most profound levels of intellect, knowledge, and insight in our vast, infinite universe. But this does not equate to omniscience or absolute precision.

Humans are by no stretch of the imagination perfect. We feel pressured, we get stressed, life happens, and we end up making mistakes. Error is so inevitable, in fact, that it's essentially hardwired into our DNA, and for better or worse, this aspect of human nature is both perfectly natural and resolutely expected. In most cases, the human predilection to screw up is evened out by a dogged pursuit of rectification. But in cybersecurity, that correction happens all too slowly; this is a realm where simple mistakes can result in dire consequences in the blink of an eye.

To place this into context, a simple hack or breach can result in the loss of billions of dollars; the complete shutdown of critical infrastructure such as electric grids and nuclear power plants; the leak of classified government information; or the public release of unquantifiable amounts of personal data. In many instances, these all too real "hypotheticals" (the collapse of economies, the descent of cities into chaos, the compromise of national security, or the theft of countless identities) can be traced back to human error around cybersecurity.

With so much at stake, it's not surprising that many CISOs lack confidence in their employees' ability to safeguard data. Much of the blame lies with the tools themselves: cybersecurity solutions that are difficult to use push well-intentioned employees toward workarounds that open up new vulnerabilities, simply so they can stay productive at their jobs.

Malicious actors don't just realize this; they use it to their ultimate advantage. Employees are only human, and social engineers excel when it comes to exploiting our human nature. But we don't want employees jettisoning their all-too-precious humanity in order to protect themselves against the ill-intentioned wiles of social engineers. Enter the helping hand of artificial intelligence (AI), which allows employees to remain human while protecting them from their own humanity being used against them. Adaptive security that's powered by AI can make up for the human error that we know can and will happen.

Employees, myself included, need help staying secure in the workplace because we're easily distracted and easily tricked. The goal is never to make mistakes or open companies up to vulnerability, but as France put it, our wisdom is sometimes superseded by our brash, spontaneous, emotionally driven actions.

But can artificial intelligence really be trusted to make up for human error? It all depends on who's answering the question and their perception of AI. Those with a level-headed view of AI, one not rooted in science fiction or Hollywood tropes, mostly agree that AI is an effective tool for catching and circumventing careless human error because it's unburdened by the foibles of human nature and the cognitive limits of rationality that come with them. As IBM's Ginni Rometty puts it: "Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

AI bridges the gap between work productivity and security, bringing to fruition the concept of "invisible security": a line of defense that can essentially be categorized as human-nature-proof. The fact of the matter is that today's threat vectors morph at machine speed, the speed of human-enabled artificial intelligence. With the help of AI and machine learning, employees possess a strong fighting chance against malicious actors who wield those same high-speed models and algorithms to achieve their nefarious goals.
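To make the idea concrete, here is a minimal, hypothetical sketch of the kind of behavioral baselining such adaptive-security tools perform: learn a user's normal activity, then flag sharp deviations that may indicate a compromised or tricked account. The function name, thresholds, and download counts below are illustrative assumptions, not any vendor's actual implementation.

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Return a z-score for each new observation against a user's
    historical baseline; large magnitudes flag unusual behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observed]

# Hypothetical daily file-download counts for one employee.
baseline = [12, 9, 14, 11, 10, 13, 12, 11]
observed = [11, 250]  # day two could be credential abuse after a phish

scores = anomaly_scores(baseline, observed)
flagged = [x for x, z in zip(observed, scores) if abs(z) > 3]
print(flagged)  # only the 250-download day is flagged
```

Real products model many signals at once (logins, locations, devices) with far richer statistics, but the principle is the same: the system, not the employee, carries the burden of noticing when something is off.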

That being said, any debate positioning the trustworthiness of humans against that of AI rests on a false dichotomy: AI has yet to advance to the level of sentience at which it can truly act or function without human intervention.

Humans and AI actually make up for each other's weaknesses — AI compensating for human nature's cognitive limits of rationality and error, and humans serving as the wizard behind AI's Oz, imbuing the technology with as much or as little power as we deem appropriate. When paired correctly and responsibly, human nature and AI can combine in a copacetic manner to foster the strongest levels of enterprise cybersecurity.

Related Content:

Check out The Edge, Dark Reading's new section for features, threat data, and in-depth perspectives. Today's top story: "Soft Skills: 6 Nontechnical Traits CISOs Need to Succeed."

John McClurg is BlackBerry's chief information security officer. In this role, he leads all aspects of BlackBerry's information security program globally, ensuring the development and implementation of cybersecurity policies and procedures. John comes to BlackBerry from ...