
5/5/2020 02:00 PM
Erik Zouave
Commentary
Malicious Use of AI Poses a Real Cybersecurity Threat

We should prepare for a future in which artificially intelligent cyberattacks become more common.

Could the same automated technologies cybersecurity professionals are increasingly using to protect their enterprises also fuel attacks against them? The research suggests they could, according to a report my colleague Marc Bruce and I recently completed for the Swedish Defence Research Agency.

The use of artificial intelligence (AI) tools to analyze data and predict outcomes has been a boon for many industries, including cybersecurity and defense. Increasingly, antivirus and cyberthreat intelligence systems use machine learning to work more efficiently. For example, both the US Defense Advanced Research Projects Agency (DARPA) and the European Defence Agency (EDA) are seeking to integrate AI technologies into their cyberdefense response capabilities.

However, AI could be a double-edged sword, with some of the industry's most prominent thinkers warning of AI-supported cyberattacks. In this game of cat and mouse, foreseeing how AI might be used in malicious cyberattacks — and understanding its future potential — will better prepare and equip responders. 

This article summarizes some of the important takeaways from our report, "Artificially Intelligent Cyberattacks."

How Did We Get Here?
The vast and varied potential for AI misuse came into focus in a landmark 2018 report by several research institutions, which mapped the general potential for malicious uses of AI in the digital, physical, and social domains.

However, it had already been established that AI could play a prominent role in cyberattacks. The first recorded AI-supported cyberattack came in 2007, courtesy of a dating chatbot aptly dubbed CyberLover, described as displaying an "unprecedented level of social engineering." The bot relied on natural language processing (NLP) to profile targets and generate customized chat responses containing fraudulent hyperlinks, and it became notorious for stealing personal data. By one estimate, CyberLover could strike up a new relationship every three minutes.

Fast forward to 2016: DARPA organized the Cyber Grand Challenge, in which machines, not humans, were the main contestants. During the contest, AI-supported solutions were used to detect, exploit, and patch vulnerabilities. Notably, the challenge attracted contestants not only from research institutions but also from the defense industry.

More recently, amid increasing concerns about future AI misuse, the United Nations Institute for Disarmament Research reported on the normative and legal aspects of AI in cyber operations, reaffirming governments' responsibility to enact policies on the use and misuse of new technologies. At the same time, cybersecurity firm Darktrace and IBM began looking into specific technical use cases for AI in cyberattacks.

Malicious Uses of AI in Cyberattacks
With that as our backdrop, it is vital to ground our response to AI misuse in what the evidence shows. Based on our extensive, peer-reviewed research of mainly experimental AI prototypes, AI's data aggregation capabilities appear to be top of mind for cyberattackers who want to leverage the technology to inform their attack plans. In the short term, the strongest case for this is in the initial reconnaissance stage of cyberattacks. Across a multitude of applications, AI technologies have proved highly effective at data analysis. AI cyberthreat intelligence solutions are already available, including IBM's Watson for cybersecurity and offerings from Cylance and CrowdStrike. Hence, we can expect AI-supported antagonists to efficiently generate intelligence on threat mitigation trends, profile targets, and compile libraries of (known) vulnerabilities at scale.
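To make the data-aggregation point concrete, below is a minimal, hypothetical sketch of the kind of text clustering that underpins such tooling. It uses scikit-learn, and the advisory snippets are invented for illustration; it is not drawn from any product named above.

```python
# Illustrative only: grouping security-advisory text into themes with
# scikit-learn. The advisory snippets below are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

advisories = [
    "Remote code execution via crafted HTTP request in web server",
    "Buffer overflow in web server allows remote code execution",
    "Cleartext credentials exposed over unauthenticated TELNET service",
    "Default TELNET login grants administrative access without a password",
]

# Turn free-text advisories into TF-IDF feature vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(advisories)

# Group the advisories into two rough themes
# (code execution vs. weak remote access).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(advisories, labels):
    print(label, text)
```

Run over tens of thousands of real advisories instead of four invented ones, the same few lines illustrate how either side of the contest can distill trends from raw vulnerability data.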

Another malicious capability to watch is AI's efficiency at repetitive tasks. As the Ticketmaster incident dating back to 2010 showed, AI tools to defeat CAPTCHAs are readily available, and the experimental research on defeating CAPTCHAs is likewise well established. Other repetitive tasks, such as password-guessing, brute-forcing, and credential theft, as well as automated exploit generation, should also be considered promising ground where prototypical testing might mature into more advanced solutions. For example, experiments in password brute-forcing and password-stealing have displayed success rates of over 50% and 90%, respectively.
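The flip side of automated, repetitive attacks is that their tempo is itself a signal. As a minimal defensive sketch (standard library only; the window and threshold below are arbitrary assumptions, not tuned values), a responder can flag sources whose attempt rate exceeds anything a human would plausibly produce:

```python
# Illustrative defensive sketch: flag sources whose login tempo looks
# automated. The window and threshold are arbitrary assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # sliding window length (assumed)
MAX_ATTEMPTS = 20     # more than this per window suggests automation

attempts = defaultdict(deque)  # source IP -> timestamps of recent attempts

def record_attempt(source_ip: str, timestamp: float) -> bool:
    """Record a login attempt; return True if the source looks automated."""
    window = attempts[source_ip]
    window.append(timestamp)
    # Drop attempts that have aged out of the sliding window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_ATTEMPTS

# Example: 30 attempts in about two seconds from one address trips the check.
print(any(record_attempt("203.0.113.7", t * 0.066) for t in range(30)))
```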

Finally, deception and manipulation seem likely capability developments stemming from AI. The case for increasingly sophisticated AI-supported phishing may seem obvious in light of a case as old as CyberLover. In reality, research on AI-supported phishing, along with AI tools to bypass phishing detection, has produced mixed findings. However, this does not negate AI's potential to statistically surpass the efficiency of human social-engineering attempts.

Already, AI-supported attacks have allegedly begun to mimic patterns of normal behavior on target networks, making them harder to detect. While network behavior analysis technology is already in use for security, research indicates it could also be twisted to malicious ends. Furthermore, an emergent research domain concerns attacks on classifiers that identify patterns in data, such as email spam filters. With new uses for NLP and other AI classifiers on the horizon, the security concerns become more diverse.
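The network behavior analysis mentioned above can be sketched in a few lines. The example below uses scikit-learn's IsolationForest on invented flow features (bytes sent and session duration): it learns a baseline of normal traffic and scores new flows against it, and a flow crafted to mimic that baseline is precisely what erodes this kind of defense.

```python
# Illustrative sketch of network-behavior anomaly detection. The "flow"
# features (bytes sent, session seconds) and all values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline of normal flows: modest byte counts, short sessions.
normal_flows = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # bytes sent
    rng.normal(30, 10, 500),         # session duration in seconds
])

detector = IsolationForest(random_state=0).fit(normal_flows)

# A bulk-exfiltration flow vs. a flow crafted to mimic the baseline.
suspect = [[900_000, 600]]
mimic = [[5_200, 28]]
print(detector.predict(suspect))  # [-1] -> flagged as anomalous
print(detector.predict(mimic))    # [ 1] -> blends in with normal traffic
```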

Looking Ahead
We should prepare for a future in which artificially intelligent cyberattacks become more common. As mentioned, AI's advanced data aggregation capabilities could help malicious actors make more informed choices. While these capabilities may not necessarily push a shift toward more sophisticated attacks, the potential to increase the scale of malicious activities should be of concern. Even the automation of simple attacks could aggravate trends such as data theft and fraud.

Long term, we should not discount the possibility that developments in deception and manipulation capabilities might increase and diversify the sophistication of attacks. Even experts developing AI are beginning to worry about its potential for deception, and those concerns span AI text, image, and audio generation. In a future where physical systems such as smart cars implement AI, an AI arms race between defenders and attackers may ultimately affect the risk of physical harm.

While researchers are putting their minds to the task of securing the cyber domain with AI, one of the worrying conclusions from our research is that these efforts might not be enough to protect against the malicious use of AI. However, by anticipating what to expect from the misuse of AI, we can now begin to prepare and tailor diverse and more comprehensive countermeasures.

Marc Bruce, a research analyst intern at the Swedish Defence Research Agency, co-authored this article.

Erik Zouave is an analyst with the Swedish Defence Research Agency FOI, where he researches legal aspects of technology and security. He has been a Research Fellow with the Citizen Lab, Munk School of Global Affairs, at the University of Toronto, a Google Policy Fellow, and ...