
Security Management

9/16/2019
07:00 AM
Larry Loeb

NIST Tackles AI

To prepare for something usually means you have an idea about what you are preparing for, no?

On February 11, 2019, President Donald J. Trump issued the Executive Order on Maintaining American Leadership in Artificial Intelligence. It was crafted to get NIST out of its "safe space" and into the AI standards arena.

The EO specifically directs NIST to create "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies."

At that time, NIST said, "NIST will work with other federal agencies to support the EO's principles of increasing federal investment in AI research and development, expanding access to data and computing resources, preparing the American workforce for the changes AI will bring, and protecting the United States' advantage in AI technologies."

Wait, what?

NIST will prepare the American workforce for the "changes AI will bring"? To prepare for something usually means you have an idea about what you are preparing for, no?

A threat model, so to speak. How -- exactly -- does NIST know what those AI-caused changes will be? You know, so that they can prepare for them.

Well, they just issued a plan they call "U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools." They say that it was prepared with broad public and private sector input.

The plan brings up the concept of trustworthiness as a part of the standards process. NIST sees the need for trustworthiness standards which include "guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security."
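
What might such guidance mean in practice? Purely as an illustration (nothing below comes from NIST or the plan), those seven attributes could be tracked as a machine-readable checklist that a review process fills in for each model. Every name in this sketch is hypothetical.

```python
# Hypothetical sketch only: recording the plan's trustworthiness attributes as a
# per-model checklist. The structure and field names are invented for illustration,
# not drawn from any NIST standard.
from dataclasses import dataclass, field

TRUST_ATTRIBUTES = [
    "accuracy", "explainability", "resiliency", "safety",
    "reliability", "objectivity", "security",
]

@dataclass
class ModelTrustProfile:
    model_name: str
    # Map each attribute to a short statement of evidence, or None if unassessed.
    evidence: dict = field(
        default_factory=lambda: {a: None for a in TRUST_ATTRIBUTES}
    )

    def unassessed(self):
        """Return the attributes that still lack documented evidence."""
        return [attr for attr, note in self.evidence.items() if note is None]

profile = ModelTrustProfile("loan-approval-v2")
profile.evidence["accuracy"] = "holdout accuracy reported against a frozen test set"
profile.evidence["explainability"] = "per-decision feature attributions archived"
print(profile.unassessed())  # the gaps a standard would force you to close
```

The point is not the data structure; it is that a standard gives you a fixed list of boxes that either can or cannot be shown to be checked.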

The plan realizes that fiat standards won't actually do much. As the plan puts it, "Standards should be complemented by related tools to advance the development and adoption of effective, reliable, robust, and trustworthy AI technologies."

A real tool that people can use might well help adoption of a standard, quite right. But describing one potential area of interest as "Tools for capturing and representing knowledge, and reasoning in AI systems" is so broad in scope that it gives little sense of what would actually get built (an illustrative sketch follows below).

Federal agency adoption of standards is planned for as well. "The plan," it says, "provides a series of practical steps for agencies to take as they decide about engaging in AI standards. It groups potential agency involvement into four categories ranked from least- to most-engaged: monitoring, participation, influencing, and leading."
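
To make that knowledge-capture phrase a bit less abstract: even a toy knowledge store with a single inference rule is, strictly speaking, a "tool for capturing and representing knowledge, and reasoning." The sketch below is purely illustrative and is not drawn from the plan; all of the names are made up.

```python
# Illustrative toy only: capture knowledge as "is_a" facts and reason over them
# with one inference rule (transitivity). Nothing here reflects an actual NIST tool.
from collections import defaultdict

facts = defaultdict(set)  # subject -> set of direct "is_a" parents

def assert_is_a(subject, parent):
    """Capture a single piece of knowledge: subject is_a parent."""
    facts[subject].add(parent)

def all_ancestors(subject, seen=None):
    """Reason over the captured knowledge by following is_a links transitively."""
    seen = set() if seen is None else seen
    for parent in list(facts[subject]):
        if parent not in seen:
            seen.add(parent)
            all_ancestors(parent, seen)
    return seen

assert_is_a("convolutional_net", "neural_net")
assert_is_a("neural_net", "machine_learning_model")
print(all_ancestors("convolutional_net"))  # {'neural_net', 'machine_learning_model'}
```

The distance between a toy like this and something agencies could actually adopt is exactly the detail the plan leaves open.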

Things get more specific in some of the recommendations the plan makes. One stated goal is to "Promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools."

There's that trustworthy word again. If you need the research to get it, it doesn't seem that you have it right now. Maybe NIST is promoting the idea that demonstration of trust (or a token of trust) should be mandatory, rather than optional.

How individual agencies actually implement the plan's broad brush strokes remains to be seen. The topic is important as the field moves forward, and the need for common efforts is great.

— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.
