
10/26/2018 12:40 PM

DeepPhish: Simulating Malicious AI to Act Like an Adversary

How researchers developed an algorithm to simulate cybercriminals' use of artificial intelligence and explore the future of phishing.

One idea applies equally in physical space and in cyberspace: when you're plotting against an adversary, you want all the intel you can get on which weapons they're using and how they're using them.

The same idea drove researchers at Cyxtera Technologies to explore the weaponization of artificial intelligence (AI) in phishing attacks, which continue to evolve as cybercriminals employ more sophisticated techniques. Encryption and Web certificates, for example, have become go-to phishing tactics as attackers alter their threats to evade security defenses.

Web certificates provide a low-cost means for attackers to convince victims their malicious sites are legitimate, explains Alejandro Correa, vice president of research at Cyxtera. It doesn't take much to get a browser to display a "secure" icon – and that little green lock can make a big difference in whether a phishing scam is successful, he says. People trust it.

By the end of 2016, less than 1% of phishing attacks leveraged Web certificates, he continues. By the end of 2017, that number had spiked to 30%. It's a telling sign for the future: If attackers can find a means to easily increase their success, they're going to take it.

"We expect by the end of this year more than half of attacks are [going to be] done using Web certificates," Correa says. "There is no challenge at all for the attacker to just include a Web certificate in their websites … but it does carry a lot of effectiveness improvements."

So far, there is no standard approach for detecting malicious TLS certificates in the wild. As attackers become more advanced, defenders must learn how they operate. Correa points to the emergence of AI and machine learning in security tools, and explains how this inspired researchers at Cyxtera to learn more about how attackers might use this tech in cybercrime.
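Though there is no standard approach, any classifier for malicious certificates has to start with features drawn from the certificate itself. The sketch below shows one plausible feature-extraction step, assuming Python's cryptography library and a local PEM file named cert.pem; the field choices are illustrative and are not a method described by the researchers.

```python
# Illustrative sketch: pull basic metadata from a TLS certificate
# that could serve as features for a malicious-vs-benign classifier.
# Assumes the 'cryptography' package and a PEM file named cert.pem.
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

issuer = cert.issuer.rfc4514_string()  # free or automated CAs are common in phishing
validity_days = (cert.not_valid_after - cert.not_valid_before).days
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    domains = san.value.get_values_for_type(x509.DNSName)
except x509.ExtensionNotFound:
    domains = []

features = {
    "issuer": issuer,
    "validity_days": validity_days,   # very short-lived certs are a weak signal
    "num_domains": len(domains),
}
print(features)
```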

"Nowadays, in order for us to analyze the hundreds of thousands of alerts we receive every day, we have to rely on machine-learning models in order to be more productive," he says. "There is simply not enough manpower to monitor all the possible threats."

At this year's Black Hat Europe event, taking place in London in December, Correa will present the team's findings in a session entitled "DeepPhish: Simulating Malicious AI."

As part of his presentation, Correa will demo DeepPhish, an algorithm the team developed to simulate how cybercriminals could weaponize AI.

The goal was to figure out how attackers could improve their effectiveness using open source AI and machine-learning tools available to them online. "We wanted to figure out what is the best way, from an attacker's perspective, to bypass these detection algorithms," Correa says.
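The "detection algorithms" to bypass are, in essence, URL classifiers. As a stand-in for such a defender, here is a minimal sketch of one common design: character n-grams fed to logistic regression in scikit-learn. The tiny training set and the model choice are assumptions for illustration, not Cyxtera's production system.

```python
# Illustrative stand-in for a defensive URL classifier:
# character n-grams fed to logistic regression (scikit-learn).
# The tiny training set below is made up for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "http://paypal.com.secure-login.example.ru/update",   # phishing-style
    "http://appleid-verify.example.tk/account",           # phishing-style
    "https://www.wikipedia.org/wiki/Phishing",            # benign
    "https://github.com/user/repo",                       # benign
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

detector = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(),
)
detector.fit(urls, labels)

# Score a new URL: estimated probability that it is phishing.
print(detector.predict_proba(["http://secure-login.example.ru/paypal"])[0][1])
```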

Researchers collected sets of URLs manually created by attackers and built algorithms to learn which patterns make them effective, meaning the URL wasn't blocked by a blacklist or defensive machine-learning algorithm. Using these URLs as a foundation, the team created a neural network designed to learn these patterns and use them to generate new URLs, which would then have a higher chance of being effective.
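The description amounts to a character-level text generator trained on an attacker's own URLs. The sketch below shows the general shape of such a model, assuming Keras, a toy corpus of seed URLs, and arbitrary hyperparameters; it is not the researchers' exact architecture.

```python
# Sketch of a character-level URL generator in the spirit of DeepPhish:
# learn character patterns from seed URLs, then sample new strings.
# Architecture, corpus, and hyperparameters are illustrative assumptions.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seed_urls = [
    "http://paypal.com.secure-login.example.ru/update",
    "http://appleid-verify.example.tk/account",
]  # stand-ins for URLs crafted by a single threat actor

text = "\n".join(seed_urls)
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}
ix_to_char = {i: c for c, i in char_to_ix.items()}

window = 10  # predict the next character from the previous 10
X, y = [], []
for i in range(len(text) - window):
    X.append([char_to_ix[c] for c in text[i:i + window]])
    y.append(char_to_ix[text[i + window]])

# One-hot encode inputs and targets.
X = np.eye(len(chars))[np.array(X)]   # shape: (samples, window, vocab)
y = np.eye(len(chars))[np.array(y)]   # shape: (samples, vocab)

model = Sequential([
    LSTM(128, input_shape=(window, len(chars))),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=20, verbose=0)

# Sample a new URL-like string, one character at a time.
generated = text[:window]
for _ in range(40):
    x = np.eye(len(chars))[[char_to_ix[c] for c in generated[-window:]]]
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0].astype("float64")
    probs /= probs.sum()
    generated += ix_to_char[np.random.choice(len(chars), p=probs)]
print(generated)
```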

To test their work, they modeled the behavior of specific threat actors. In one scenario, an actor with a 0.7% effectiveness rate jumped to 20.9% effectiveness with DeepPhish applied.
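Effectiveness here is simply the fraction of an actor's URLs that evade detection. A quick back-of-the-envelope check of the reported jump, with URL counts invented purely for illustration:

```python
# Effectiveness = URLs that evade detection / URLs generated.
# Counts below are invented only to illustrate the reported rates.
baseline = 7 / 1000          # ~0.7% of hand-crafted URLs got through
with_deepphish = 209 / 1000  # ~20.9% of AI-generated URLs got through
print(f"improvement: {with_deepphish / baseline:.1f}x")  # ~29.9x
```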

"If we're going to effectively differentiate ourselves, we need to understand how that is going to be done," Correa says. He calls the results a motivation: "[It will] enhance how we may start combatting and figuring out how to defend ourselves against attackers using AI."

Black Hat Europe returns to London Dec. 3-6, 2018, with hands-on technical Trainings, cutting-edge Briefings, Arsenal open-source tool demonstrations, and top-tier security solutions and service providers in the Business Hall. Click for information on the conference and to register.

Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ... View Full Bio

Comments
Jonathan TIP/WhoisXML API, 11/2/2018 10:29 AM
Malicious AI and threats
The use of AI for malicious purposes has come a long way. ML/NLP technologies were once available only to a few, but the prevalence of the Internet means they are spreading quickly.

Virtually anyone -- and cybercriminals are first in line -- can get their hands on pre-built AI models and processes that work reasonably well, then modify them to commit fraud and carry out hacks. That means more sophisticated attacks are to be expected, with a surge in performance of the kind shown by DeepPhish and similar initiatives.