Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

11/18/2019
02:00 PM
John McClurg
Commentary

Human Nature vs. AI: A False Dichotomy?

How the helping hand of artificial intelligence allows security teams to remain human while protecting themselves from their own humanity being used against them.

Nobel Prize-winning novelist Anatole France famously opined: "It is human nature to think wisely and act foolishly." As far as our awareness extends, we are the species with the most profound intellect, knowledge, and insight in the universe. But that does not equate to omniscience or absolute precision.

Humans are by no stretch of the imagination perfect. We feel pressured, we get stressed, life happens, and we make mistakes. Error is so inevitable, in fact, that it is essentially hardwired into our DNA, and for better or worse, this aspect of human nature is both natural and expected. In most cases, the human predilection to screw up is evened out by a dogged pursuit of rectification. But that rectification comes far too slowly for cybersecurity, a realm where a simple mistake can have dire consequences in the blink of an eye.

To put this in context, a single hack or breach can result in the loss of billions of dollars; the complete shutdown of critical infrastructure such as electric grids and nuclear power plants; the leak of classified government information; or the public release of vast amounts of personal data. In many instances, these all too real "hypotheticals" (the collapse of economies, the descent of cities into chaos, the compromise of national security, the theft of countless identities) can be traced back to human error around cybersecurity.

With so much at stake, it's not surprising that many CISOs lack confidence in their employees' ability to safeguard data. Most of the cybersecurity tools deployed across the workforce are difficult to use, so well-intentioned employees resort to workarounds simply to stay productive, and those workarounds open up new vulnerabilities.

Malicious actors don't just realize this; they use it to their ultimate advantage. Employees are only human, and social engineers excel when it comes to exploiting our human nature. But we don't want employees jettisoning their all-too-precious humanity in order to protect themselves against the ill-intentioned wiles of social engineers. Enter the helping hand of artificial intelligence (AI), which allows employees to remain human while protecting them from their own humanity being used against them. Adaptive security that's powered by AI can make up for the human error that we know can and will happen.

Employees, myself included, need help staying secure in the workplace because we are easily distracted and easily tricked. The goal is never to make mistakes or expose companies to vulnerability. But as France put it, our wisdom is sometimes superseded by our brash, spontaneous, emotionally driven actions.

But can artificial intelligence really be trusted to make up for human error? That depends on who is answering the question and how they perceive AI. Those with a level-headed view of AI, one not rooted in science fiction or Hollywood tropes, mostly agree that AI is an effective tool for catching and circumventing careless human error because it is unburdened by the foibles of human nature and the cognitive limits of rationality inherent in it. As IBM's Ginni Rometty puts it: "Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

AI bridges the gap between work productivity and security, bringing to fruition the concept of "invisible security": a line of defense that is essentially human-nature-proof. The fact of the matter is that today's threat vectors morph at machine speed. With the help of AI and machine learning, employees stand a fighting chance against malicious actors who use these same high-speed models and algorithms to achieve their nefarious goals.
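At its simplest, this kind of adaptive security is statistical baselining: learn what "normal" looks like for a user, then flag behavior that strays too far from it. The sketch below is purely illustrative (not any vendor's implementation); the login-hour data and the `is_anomalous` helper are hypothetical, and real products use far richer models.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations
    from a user's historical baseline (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in the baseline: anything different is anomalous.
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical baseline: hours (0-23) at which one user typically logs in.
typical_login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]

print(is_anomalous(typical_login_hours, 9))   # False: fits the baseline
print(is_anomalous(typical_login_hours, 3))   # True: a 3 a.m. login stands out
```

The point of the example is the shape of the decision, not the math: the system quietly absorbs routine behavior and only intervenes on outliers, which is what makes the security "invisible" to a productive employee.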

That said, any debate positioning the trustworthiness of humans against that of AI rests on a false dichotomy: AI has yet to advance to the level of sentience at which it can truly act or function without human intervention.

Humans and AI actually make up for each other's weaknesses: AI compensates for human nature's cognitive limits and propensity for error, while humans serve as the wizard behind AI's Oz, imbuing the technology with as much or as little power as we deem appropriate. When paired correctly and responsibly, human nature and AI can combine in a copacetic manner to foster the strongest levels of enterprise cybersecurity.


John McClurg is BlackBerry's chief information security officer. In this role, he leads all aspects of BlackBerry's information security program globally, ensuring the development and implementation of cybersecurity policies and procedures. John comes to BlackBerry from ...
 
