Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

7/12/2021
10:00 AM
Oleg Brodt
Commentary

AI and Cybersecurity: Making Sense of the Confusion

Artificial intelligence is a maturing area in cybersecurity, but there are different concerns depending on whether you're a defender or an attacker.

The purpose of artificial intelligence (AI) is to create intelligent machines. It is used in multiple domains, including finance, manufacturing, logistics, retail, social media, healthcare, and increasingly, cybersecurity.


The current discourse about AI and cybersecurity often confuses the different perspectives, as if the intersection of disciplines is monolithic and one-dimensional. Therefore, we need a common language for discussing the various and disparate intersections of AI and cybersecurity that clarifies the differences. I see three parts to the discussion: AI in the hands of defenders, AI in the hands of attackers, and adversarial AI.

AI in the Hands of Defenders
Machine learning (ML) is a subfield of AI that teaches computers to perform tasks by learning from examples rather than being explicitly programmed. Unsurprisingly, ML and its popular subbranch of deep learning (aka neural networks) are emerging as the main methods of developing cyber-defense solutions: Instead of providing a detection mechanism with predefined malware signatures, we can provide a data set of malicious and benign files and let the computer learn from them.

In simpler terms, machine learning algorithms analyze the differences and similarities between samples based on features such as their content and how they interact with the operating system, and they build a model of what malware files typically look like. Every new file is then compared against the model and classified as malicious or benign, typically based on a probability score. Naturally, these probabilistic solutions are far from perfect: they both fail to identify some malicious behavior and flag benign behavior as malicious, leading to alert fatigue.
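As a rough illustration of the idea, here is a toy naive Bayes classifier in Python. The API-call tokens and sample counts are invented for the example (a real detector would extract thousands of features from static and dynamic analysis), but it shows how a probability score falls out of labeled training data:

```python
from collections import Counter
import math

# Hypothetical token features extracted from labeled training samples.
malicious = [["createremotethread", "virtualallocex", "xor_loop"],
             ["virtualallocex", "regsetvalue", "xor_loop"]]
benign    = [["createfile", "readfile", "closehandle"],
             ["createfile", "regqueryvalue", "closehandle"]]

def train(samples):
    counts = Counter(t for s in samples for t in s)
    return counts, sum(counts.values())

mal_counts, mal_total = train(malicious)
ben_counts, ben_total = train(benign)
vocab = set(mal_counts) | set(ben_counts)

def log_likelihood(tokens, counts, total):
    # Laplace smoothing so unseen tokens don't zero out the probability.
    return sum(math.log((counts[t] + 1) / (total + len(vocab))) for t in tokens)

def p_malicious(tokens):
    lm = log_likelihood(tokens, mal_counts, mal_total)
    lb = log_likelihood(tokens, ben_counts, ben_total)
    return 1 / (1 + math.exp(lb - lm))  # equal class priors assumed

score = p_malicious(["virtualallocex", "xor_loop", "readfile"])
verdict = "malicious" if score > 0.5 else "benign"
```

The threshold (here 0.5) is exactly where the alert-fatigue trade-off lives: lowering it catches more malware but flags more benign files.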

According to the latest M-Trends report, it takes 24 days, on average, to discover that a network has been compromised. This is a significant improvement over the 416 days it once took blue teams, on average, to realize that an attacker was present in their network. Although we have made progress in defense, I suspect most of it can be attributed to the proliferation of ransomware attacks, in which attackers promptly expose themselves, thereby driving detection time down.

Credit: freshidea via Adobe Stock

Since attackers remain fast and defenders remain slow, we have no choice but to delegate as many detection tasks as possible to AI-based solutions. Consequently, AI-based models are being integrated into a variety of security solutions, such as intrusion detection systems (IDS), endpoint detection and response (EDR), security information and event management (SIEM) alert prioritization, big data security analytics, and more. The main goal is to improve the performance of existing solutions, automate detection and investigation processes, and, most importantly, increase detection speed by offloading tasks previously handled by human analysts.

AI in the Hands of Attackers
While AI-based technologies can improve cyber defense by creating a new generation of intelligent detection systems, they can also be misused in the hands of cyberattackers. A recent paper by Bruce Schneier emphasizes this point.

AI is, and will increasingly be, employed by cyberattackers to lower their costs and improve the effectiveness and stealth of their attacks. In fact, it is easier to justify attackers' use of AI from an economic point of view. While it is quite difficult to measure the ROI of an AI-based cyber-defense system, it is quite straightforward to measure the financial benefits for the attacker.

According to Verizon's "2021 Data Breach Investigations Report," financially motivated hacks continue to be the most common — a whopping 90% of all incidents. These attacks have become commoditized, and attackers run their operations just like any other business, where the goal is to increase revenues and reduce costs. Since AI-based technologies can help with the latter, they will increasingly take hold within cybercrime groups.

We have already witnessed how AI can help with reconnaissance, including automated high-value target discovery and phishing; it can also help with intelligent software fuzzing, yielding faster discovery of vulnerable targets. We can also expect a steep rise in deepfake social engineering attacks powered by AI-based technologies, once the technology is mature enough.

In fact, attackers can analyze every stage of the cyber kill chain and explore integrating dedicated AI-based tools into each one. While defenders' ultimate goal would be complete automation of cyber defense, for attackers, it would be complete automation of attacks.

Adversarial AI
Just like any other technology, AI can itself be vulnerable, leading to additional avenues of exploitation and a new class of cyberattacks.

We have already seen how AI-based anti-spam solutions can be fooled by a single misspelling in an email. We have also witnessed that AI-based image-recognition systems can be fooled by a single pixel change. In fact, research suggests that AI-based systems can be fooled across the board, and the more sophisticated the solution, the easier it is to successfully attack it.
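The misspelling trick can be sketched against the same kind of toy bag-of-words model; the token counts below are invented, but they show the mechanism: replacing tokens the model learned as "spammy" with unseen misspellings pushes the classifier's score back toward uncertainty without changing what a human reads:

```python
from collections import Counter
import math

# Hypothetical training counts for a toy spam filter.
spam_counts = Counter({"free": 4, "winner": 3, "prize": 3})
ham_counts  = Counter({"meeting": 4, "report": 3, "lunch": 3})
vocab = set(spam_counts) | set(ham_counts)

def spam_score(tokens):
    s_total, h_total = sum(spam_counts.values()), sum(ham_counts.values())
    # Laplace-smoothed log-likelihoods under each class.
    ls = sum(math.log((spam_counts[t] + 1) / (s_total + len(vocab))) for t in tokens)
    lh = sum(math.log((ham_counts[t] + 1) / (h_total + len(vocab))) for t in tokens)
    return 1 / (1 + math.exp(lh - ls))

original = spam_score(["free", "prize", "winner"])    # known spam tokens
evasion  = spam_score(["fr3e", "pr1ze", "w1nner"])    # same message, misspelled
```

Here the original message scores near certainty as spam, while the misspelled variant, containing only tokens the model has never seen, drops to a coin-flip score and slips under any confident threshold. Real adversarial attacks on modern models are more subtle, but they exploit the same gap between what the model learned and what the input actually means.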

Most alarming, however, is that AI-based cyber defenses are similarly vulnerable. The same attack techniques that work against other AI-based systems can be applied against AI-based malware detectors, intrusion-detection systems, and other security tools. Academic research has already demonstrated how easily most such systems can be bypassed.

In the coming years, I expect we will witness a wave of attacks against AI-based systems. Currently, however, most chief information security officers (CISOs) are not paying enough attention to the security of AI-based systems. This must change before we realize — yet again — that we have delegated our most sensitive tasks to the most vulnerable systems.

Oleg serves as the R&D Director of Deutsche Telekom Innovation Labs, Israel. He also serves as the Chief Innovation Officer for [email protected] University, an umbrella organization responsible for cybersecurity-related research at Ben Gurion University, Israel.
 
