Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security

7/19/2021 10:00 AM

7 Ways AI and ML Are Helping and Hurting Cybersecurity

In the right hands, artificial intelligence and machine learning can enrich our cyber defenses. In the wrong hands, they can create significant harm.

Artificial intelligence (AI) and machine learning (ML) are now part of our everyday lives, and this includes cybersecurity. In the right hands, AI/ML can identify vulnerabilities and reduce incident response time. But in cybercriminals' hands, they can create significant harm.


Here are seven positive and seven negative ways AI/ML is impacting cybersecurity. 

7 Positive Impacts of AI/ML in Cybersecurity

  • Fraud and Anomaly Detection: This is the most common way AI tools come to the rescue in cybersecurity. Composite AI fraud-detection engines show outstanding results in recognizing complicated scam patterns, and their advanced analytics dashboards provide comprehensive detail about incidents. Fraud detection is an especially important area within the broader field of anomaly detection.
  • Email Spam Filters: Defensive rules filter out messages with suspect words to identify dangerous email. Additionally, spam filters protect email users and reduce the time it takes to go through unwanted correspondence.
  • Botnet Detection: Supervised and unsupervised ML algorithms not only facilitate detection but also prevent sophisticated bot attacks. They also help identify user behavior patterns to discern undetected attacks with an extremely low false-positive rate.
  • Vulnerability Management: It can be difficult to manage vulnerabilities (manually or with technology tools), but AI systems make it easier. AI tools look for potential vulnerabilities by analyzing baseline user behavior, endpoints, servers, and even discussions on the Dark Web to identify code vulnerabilities and predict attacks.
  • Anti-malware: AI helps antivirus software distinguish good files from bad ones, making it possible to identify new forms of malware that have never been seen before. Completely replacing traditional techniques with AI-based ones can speed detection, but it also increases false positives. Combining traditional methods with AI yields the best detection rates.
  • Data-Leak Prevention: AI helps identify specific data types in text and non-text documents. Trainable classifiers can be taught to detect different sensitive information types. These AI approaches can search data in images, voice records, or video using appropriate recognition algorithms.
  • SIEM and SOAR: ML can enhance security information and event management (SIEM) and security orchestration, automation, and response (SOAR) tools by improving data automation and intelligence gathering, detecting suspicious behavior patterns, and automating the response depending on the input.
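The anomaly-detection idea underlying several of the tools above can be sketched in a few lines. This is a toy illustration rather than a production detector: the login counts and the z-score threshold are hypothetical, and real systems apply learned models to far richer features.

```python
# Toy statistical anomaly detector: flags daily login counts that deviate
# sharply from a user's historical baseline. The data below is hypothetical;
# production systems learn multivariate baselines, not a single z-score.
from statistics import mean, stdev

def find_anomalies(counts, threshold=3.0):
    """Return the indices of counts whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

# 30 normal days of roughly 20 logins each, then one day with 400.
history = [19, 21, 20, 22, 18, 20, 21, 19, 20, 22] * 3 + [400]
print(find_anomalies(history))  # only the spike at index 30 is flagged
```

The same thresholding idea generalizes: replace the single count with a feature vector and the z-score with a model's anomaly score, and you have the skeleton of the fraud- and botnet-detection engines described above.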

AI/ML is used in network traffic analysis, intrusion detection systems, intrusion prevention systems, secure access service edge, user and entity behavior analytics, and most technology domains described in Gartner's Impact Radar for Security. In fact, it's hard to imagine a modern security tool without some kind of AI/ML magic in it.

7 Negative Impacts of AI/ML in Cybersecurity

  • Data Gathering: Through social engineering and other techniques, ML is used for better victim profiling, and cybercriminals leverage this information to accelerate attacks. For example, in 2018, WordPress websites experienced massive ML-based botnet infections that granted hackers access to users' personal information.
  • Ransomware: Ransomware is experiencing an unfortunate renaissance. Examples of criminal success stories are numerous; one of the nastiest incidents led to Colonial Pipeline's six-day shutdown and $4.4 million ransom payment.
  • Spam, Phishing, and Spear-Phishing: ML algorithms can create fake messages that look like real ones and aim to steal user credentials. In a Black Hat presentation, John Seymour and Philip Tully detailed how an ML algorithm produced viral tweets with fake phishing links that were four times more effective than a human-created phishing message.
  • Deepfakes: In voice phishing, scammers use ML-generated deepfake audio technology to create more successful attacks. Modern algorithms such as Baidu's "Deep Voice" require only a few seconds of someone's voice to reproduce their speech, accents, and tones.
  • Malware: ML can hide malware that keeps track of node and endpoint behavior and builds patterns mimicking legitimate network traffic on a victim's network. It can also incorporate a self-destructive mechanism in malware that amplifies the speed of an attack. Algorithms are trained to extract data faster than a human could, making it much harder to prevent.
  • Passwords and CAPTCHAs: Neural network-powered software has been shown to break CAPTCHA human-verification tests with ease. ML also enables cybercriminals to analyze vast leaked-password data sets and generate better-targeted password guesses. For example, PassGAN uses an ML algorithm to guess passwords more accurately than popular password-cracking tools that rely on traditional techniques.
  • Attacking AI/ML Itself: Abusing algorithms that work at the core of healthcare, military, and other high-value sectors could lead to disaster. Berryville Institute of Machine Learning's Architectural Risk Analysis of Machine Learning Systems helps analyze taxonomies of known attacks on ML and performs an architectural risk analysis of ML algorithms. Security engineers must learn how to secure ML algorithms at every stage of their life cycle.
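On the defensive side of the password-guessing problem, even a simple check against known-common passwords blunts the easiest ML-generated guesses. The sketch below is minimal: the word list and length threshold are hypothetical stand-ins for a real breached-password corpus and an organization's actual policy.

```python
# Minimal defensive counterpart to ML-assisted password guessing: reject
# candidates that appear in a common/breached-password list or are too short.
# This tiny set is a hypothetical sample; real deployments check against
# corpora of hundreds of millions of leaked passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "dragon"}

def is_weak(password: str, min_length: int = 12) -> bool:
    """Return True if the password is too short or known to be common."""
    return len(password) < min_length or password.lower() in COMMON_PASSWORDS

print(is_weak("letmein"))                # True: short and on the common list
print(is_weak("correct horse battery"))  # False: long and uncommon
```

Length plus a breached-list lookup is a deliberately simple policy; the point is that tools like PassGAN exploit predictable human choices, so screening out the predictable candidates removes exactly the passwords such models guess first.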

It is easy to understand why AI/ML is gaining so much attention. The only way to battle devious cyberattacks is to use AI's potential for defense. The corporate world must notice how powerful ML can be when it comes to detecting anomalies (for example, in traffic patterns or human errors). With proper countermeasures, possible damage can be prevented or drastically reduced.

Overall, AI/ML has huge value for protecting against cyber threats. Some governments and companies are using or discussing using AI/ML to fight cybercriminals. While the privacy and ethical concerns around AI/ML are legitimate, governments must ensure that AI/ML regulations won't prevent businesses from using AI/ML for protection. Because, as we all know, cybercriminals do not follow regulations.


DataArt's Vadim Chakryan, Information Security Officer, and Eugene Kolker, Executive Vice President, Global Enterprise Services & Co-Director, AI/ML Center of Excellence, also contributed to this article.

Andrey joined DataArt in 2016 as Chief Compliance Officer. He has more than 25 years of experience in the IT industry. He began his career as a software developer and has played many roles. He has experience in managing projects, managing programs in the medical device ...
 
