
Operational Security // AI
2/2/2018 08:05 AM
Simon Marshall

Allure Seed Round Funds AI Security

Flush with $5.3 million in funding, Allure Security is taking a novel approach to data security by using AI and machine learning. CTO Dr. Salvatore J. Stolfo sat down with Security Now to explain how it works.

Allure Security, a Boston-based "data loss detection" startup, recently closed a $5.3 million seed funding round led by GlassWing Ventures.

Allure deploys artificial intelligence in a new approach to securing data itself, rather than the network or endpoints. The firm's product was initially developed at Columbia University in New York with $10 million in funding from the Defense Advanced Research Projects Agency (DARPA).

Security Now contributing editor Simon Marshall spoke with Allure's CTO Dr. Salvatore J. Stolfo, who holds 47 patents, about the development and application of AI in cybersecurity.

Security Now: You've been professor of AI at Columbia University since 1979. What was the "light bulb moment" that drove you to develop this technology? What could you foresee that others could not?

Salvatore Stolfo: I had the idea for "behavior-based security" after my years of consulting for Citicorp in the 1980s on its credit card fraud system, and I proposed a project to DARPA in 1996 to study machine learning applied to security. The idea was to use machine learning as a security technique to detect malicious and abnormal behavior. I thought of the concept as quite general, applicable to any computational object or entity, and ultimately to data and data flows. I focused my attention specifically on data behavior later, around 2006. It occurred to me that this was the key to stopping data loss: essentially tracking data, particularly planted bogus decoy data.

SN: What did the commercialization of that look like?

SS: The key insight was to place beacons inside documents so they could be tracked anywhere on the Internet, an idea I had well over a decade ago. I observed commercial "solutions" were aimed at problems that were somewhat pedantic and at least five years behind the sophistication understood in government-oriented environments. I recognized it would be a slow process for the commercial space to catch up with more advanced thinking about their security problems. But I'm a patient person.

SN: Allure claims data loss detection and response (DDR) is a new category in cybersecurity. How so?

SS: DDR is an approach to data security that focuses on the data itself, rather than the surrounding users and IT infrastructure. With DDR, you enhance the data with self-activating beacons, allowing you to track and protect documents wherever they go, inside or outside the organization’s network. It’s a new approach to the age-old problems of data loss and data control.
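
To make the beacon concept concrete, here is a minimal sketch of a web-bug-style beacon: each tracked document carries a unique callback URL that is fetched when the document is opened, so a server learns where and when it was viewed. The endpoint, token format and handler below are illustrative assumptions; Allure's actual beacon mechanism is not described in the interview.

```python
import uuid
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

BEACON_HOST = "http://beacons.example.com:8080"  # hypothetical collection endpoint


def make_beacon_url(document_id: str) -> str:
    """Mint a unique beacon URL to embed in a tracked or decoy document."""
    token = uuid.uuid4().hex
    # In practice the token would be stored alongside the document ID so a
    # callback can be mapped back to the specific file that was opened.
    return f"{BEACON_HOST}/b/{document_id}/{token}.png"


class BeaconHandler(BaseHTTPRequestHandler):
    """Logs every beacon fetch: which document was opened, from where, and when."""

    def do_GET(self):
        print(f"[ALERT] document opened: path={self.path} "
              f"from={self.client_address[0]} "
              f"at={datetime.now(timezone.utc).isoformat()}")
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()


if __name__ == "__main__":
    print(make_beacon_url("q3-financials"))  # URL to embed in a document
    HTTPServer(("0.0.0.0", 8080), BeaconHandler).serve_forever()
```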

SN: How does this measure up versus securing the network or endpoints?

SS: The advantage of securing the data itself is that you're not relying on network and endpoint measures that inevitably fail. The challenge is that you really need to understand the constructs of the data you’re protecting in order to effectively secure it.

SN: How do you pitch securing the data itself to a market ingrained in securing the infrastructure around the data? Does anyone "get it?"

SS: It is a new approach, but we've been pleasantly surprised by how quickly companies and government agencies really do "get it." We quickly see heads nodding, and often people we talk to start coming up with use cases we hadn’t originally thought of.

SN: Please describe the role machine learning plays here.

SS: One of the key capabilities is the automatic generation of highly believable decoy documents and deceptive data, which are then strategically placed in an enterprise's environment. For the very sophisticated threats we detect, the "believability" of the content hackers are targeting for exfiltration is very important.

AI and machine learning are crucial to generating unbounded amounts of deceptive material in real time. We devised a novel way to do this by placing decoys in strategic locations that are enticing to attackers. With the addition of beacons in these materials, we detect data exfiltration and where documents were remotely opened. This is a very strong signal of a real attack, the kind of alert security personnel want to see.
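
Building on the earlier beacon sketch, the following illustrates how beaconed decoys might be planted in enticing locations. The directory names, file names and content templates are invented for illustration and are not Allure's.

```python
from pathlib import Path

# Invented examples of "enticing" locations and decoy content.
DECOY_LOCATIONS = [
    Path("/shares/finance/2018"),
    Path("/shares/hr/offboarding"),
]
DECOY_TEMPLATES = {
    "salary_bands_confidential.txt": "Draft salary bands, do not distribute.",
    "legacy_vpn_credentials.txt": "Archived VPN credentials, rotate before use.",
}


def plant_decoys(make_beacon_url) -> None:
    """Write decoy files, each carrying its own beacon URL. Here the URL is
    appended as plain text; a real decoy would embed it invisibly, e.g. as a
    remote-resource reference inside a richer document format."""
    for directory in DECOY_LOCATIONS:
        directory.mkdir(parents=True, exist_ok=True)
        for name, body in DECOY_TEMPLATES.items():
            beacon = make_beacon_url(f"{directory.name}/{name}")
            (directory / name).write_text(f"{body}\n[beacon: {beacon}]\n")
```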

SN: Where does the data to train the machine learning come from?

SS: We use AI and machine learning techniques in many ways. Perhaps the most interesting is the unique way we generate deceptive materials. The algorithms we use are applied to archives of old data from which we learn how to modify that content to make it appear recent. There are untold stores of old data available for us to use.
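
As a toy stand-in for that idea (the actual learned transformations are not published), here is a rule-based sketch that "freshens" archived text by shifting its years forward so the content reads as current:

```python
import re
from datetime import date


def freshen(text: str, target_year: int = date.today().year) -> str:
    """Shift every four-digit year in old content so the newest one lands on
    target_year, making archived material read as recent. A simple stand-in
    for the learned content transformations described in the interview."""
    years = [int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", text)]
    if not years:
        return text
    shift = target_year - max(years)
    return re.sub(r"\b(?:19|20)\d{2}\b",
                  lambda m: str(int(m.group()) + shift),
                  text)


print(freshen("Q2 2009 revenue projections, board review June 2009."))
```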

SN: How did the application of AI get to this point?

SS: It was my idea to apply AI and machine learning to security problems that were not obvious twenty years ago. Conventional wisdom was focused on prevention, with the mindset that one could build a secure system from hardware architectures up the stack to user interfaces. I thought that was a fool's errand and would fail to solve the problem; no system would be absolutely secure.

Thus, detection technology was a required element of any security architecture, and that's where I focused. The complexity of communication behaviors and the amount and variety of data flowing through today's systems make detecting attacks very hard; we cannot rely entirely on human experts to devise the technologies. Only machine learning can do an effective job of learning what is malicious behavior and what is expected, normal behavior.
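
As a generic illustration of that "learn normal, flag abnormal" idea, here is a small sketch using scikit-learn's IsolationForest on made-up per-session features (documents opened, bytes sent). The features and model choice are assumptions for illustration, not Allure's actual approach.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "normal" sessions: a handful of documents opened, modest traffic.
rng = np.random.default_rng(0)
normal_sessions = np.column_stack([
    rng.poisson(5, 500),           # documents opened per session
    rng.normal(2e5, 5e4, 500),     # bytes sent per session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A session that opens 120 documents and ships ~50 MB far exceeds the learned
# baseline and should be flagged as anomalous (-1).
suspect_session = np.array([[120, 5e7]])
print(model.predict(suspect_session))  # expected: [-1]
```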

SN: What potential do you believe AI has to be a force for good in the cybersecurity world?

SS: I cannot imagine a better good for the world than to create security solutions that make it more difficult for attackers to win. My goal is to make the Internet safe. There just isn't enough human expertise and work effort to devise a very good detection system by hand. Thank goodness, the community now knows that machine learning is the only hope for securing our systems.

Editor's note: This article was condensed and edited for clarity.


— Simon Marshall, Technology Journalist, special to Security Now
