
Operational Security

10/5/2017
02:56 PM
Simon Marshall

Finding the AI ROI

Is AI a good security investment? Many say yes, but it depends on how you deploy your artificial intelligence.

The application of AI to security has an exciting future, but is it really paying its way? Can anyone in security, hand on heart, say definitively that AI is keeping them ahead of bad actors and automated infections?

Despite AI-based security being relatively young, the answer is "yes," according to a new US and European study that compared detection and prevention rates for AI-powered tools with those of security teams using software without AI.

About three-quarters of those surveyed said they had been able to prevent more breaches thanks to AI-powered tools. In about 80% of incidents, AI was quicker than conventional security teams at spotting threats.

Beyond AI raising the detection and prevention bar significantly, the study suggests that teams are using AI as an adjunct to their own wisdom and skill, leaning on it to give them a head start.

The conventional wisdom that very early adopters struggle to profit from new technologies doesn't seem to apply here, even though AI is not just a new technology but a new paradigm. So do security professionals feel they have crossed the chasm into that new paradigm?

Daniel Doimo, president and COO at Cylance, which published the survey results, said, "Executives that were first to make the leap of faith in AI have been the first to begin experiencing the rewards, particularly in the prevention of cyberattacks. Over the next year, I only expect to see this trend accelerate."

About 65% of security heads said they expected to reach ROI within two years. Among those who have not yet adopted AI, eight in ten are confident their boards and the C-suite are prioritizing adoption. The ROI comes primarily from AI being an automation technology, freeing teams to work on other projects. Enterprise machines also go down less frequently, saving troubleshooting resources and minimizing the cost of stray data.

Cylance's AI model works by interrogating files before they can be accessed and run, then either blocking or allowing them. Theoretically, this means a determination of whether a file is malicious arrives within milliseconds.
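That pre-execution flow can be pictured as a simple gate: extract static features from a file, score it, and block or allow it before it runs. The Python below is a minimal, hypothetical sketch of that pattern; the class names, features and threshold are illustrative assumptions, not Cylance's actual engine.

```python
import hashlib

# A toy, hypothetical pre-execution gate. Names and scoring are illustrative
# only; a real engine would use a trained model over far richer features.
class FileClassifier:
    def __init__(self, threshold: float = 0.7):
        self.threshold = threshold  # scores at or above this are treated as malicious

    def extract_features(self, data: bytes) -> dict:
        # Toy static features derived from the raw bytes.
        return {
            "size": len(data),
            "byte_diversity": len(set(data)) / 256.0,
        }

    def score(self, features: dict) -> float:
        # Placeholder standing in for a trained machine-learning model.
        return features["byte_diversity"]

    def verdict(self, data: bytes) -> str:
        score = self.score(self.extract_features(data))
        return "block" if score >= self.threshold else "allow"


def gate_file(path: str, clf: FileClassifier) -> str:
    """Decide whether a file may run, before it is executed."""
    with open(path, "rb") as fh:
        data = fh.read()
    decision = clf.verdict(data)
    print(f"{path} sha256={hashlib.sha256(data).hexdigest()[:12]} -> {decision}")
    return decision
```

The key property the article points to is that the whole decision happens on static file contents, before execution, which is what makes a millisecond-scale verdict plausible.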

I had a one-on-one with Cylance's chief data scientist to find out more about how new AI technology is being advanced in 'generations' of capability. Today's highest level -- generation three -- is not simply flying from the nest into the wild; it continues to be beak-fed until it matures further.

Nurturing AI from the first to the second generation involves adding more features on top of the base engine, while being careful not to assume that more features automatically mean greater detection accuracy. Getting from the second to the third generation involves greater operational sophistication in addition to further machine learning improvements.
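That warning about feature count is worth making concrete: the only way to know whether new features help is to measure detection accuracy with and without them. The sketch below, assuming scikit-learn and synthetic data, compares cross-validated AUC for a baseline feature set against an expanded one; the data and model choices are illustrative assumptions.

```python
# A minimal sketch, assuming scikit-learn and NumPy, of checking whether added
# features actually improve detection accuracy rather than assuming they do.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X_base = rng.normal(size=(n, 10))   # existing "first generation" features
X_new = rng.normal(size=(n, 5))     # candidate additional features
# Toy labels that depend only on the original features.
y = (X_base[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

baseline = cross_val_score(clf, X_base, y, cv=5, scoring="roc_auc").mean()
expanded = cross_val_score(clf, np.hstack([X_base, X_new]), y, cv=5,
                           scoring="roc_auc").mean()

print(f"baseline AUC: {baseline:.3f}  with extra features: {expanded:.3f}")
# Adopt the new features only if the expanded model measurably outperforms.
```

In this toy setup the labels depend only on the original features, so the extra columns should not move the score, which is exactly the trap the article warns against.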

"Entry into third generation cannot be achieved only by increasing the number of AI features and [machine learning] samples," Homer Strong, chief data scientist at Cylance, told SecurityNow.

"[However,] a major practical obstacle to entering the third generation is to find efficient ways for malware analysts to provide feedback and oversight over the model." So, in a testing environment, AI is heavily reliant on development teams to tell it what to do and how to perform.



The objective of third-generation AI is to provide a prevention-first mode of operation at the perimeter. Yet more and more businesses are resigning themselves to the notion that an attack is inevitable, and are therefore focusing on protecting data already inside the perimeter.

"Unfortunately, I would agree that we do see this sort of stance in the market frequently. And frankly, we, the security vendors, are partly to blame," said Steve Salinas, Cylance's senior product marketing manager.

AI is starting to reverse this resignation and push security back toward the perimeter, because it is faster than standard technology. "Security vendors have done a really good job of convincing the market that compromise is inevitable. Cylance disagrees," he said.


— Simon Marshall, Technology Journalist, special to Security Now
