Dark Reading is part of the Informa Tech Division of Informa PLC


Endpoint Security

11/15/2017 04:38 PM
Curtis Franklin

iPhone's Facial Recognition Shows Cracks

A research firm says that it has successfully spoofed the facial recognition technology used in Apple's flagship iPhone X.

Multi-factor authentication is becoming a "must" for many applications but questions remain about which factors are secure. A recent report from researchers in Vietnam has cast doubts on one promising new factor now available to millions.

In September, Apple announced the iPhone X with much fanfare and a flurry of new technology components. One of the most discussed is its facial recognition technology, which Apple has touted as being convenient, low-friction and very, very secure. Bkav, a security firm based in Vietnam, doesn't dispute the first two qualities but says that the security aspect may be somewhat overstated.

In a test, researchers at Bkav said that they were able to defeat the iPhone X's facial recognition technology -- technology that Apple claims is not vulnerable to spoofing or mistaken identity -- using a mask made with approximately $150 in materials. While the spoof has yet to be confirmed by other researchers, the claim raises some discomfiting possibilities.

The most troubling aspect of the demonstration is that the spoof was pulled off using a mask, after Apple went to great pains to show that its technology would only work with the living face of the device owner. In a blog post, Bkav said that its researchers listened carefully to Apple's statements, worked to understand the AI used in the facial-recognition software, and found a vulnerability.

In a statement announcing the vulnerability, Ngo Tuan Anh, Bkav's Vice President of Cyber Security, said: "Achilles' heel here is Apple let AI at the same time learn a lot of real faces and masks made by Hollywood's and artists. In that way, Apple's AI can only distinguish either a 100% real face or a 100% fake one. So if you create a 'half-real half-fake' face, it can fool Apple's AI".
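Bkav's claim, in effect, is that a classifier trained only on unambiguous examples has never seen the middle of its input space. The toy sketch below (a hypothetical nearest-centroid "liveness" check over a single made-up realness feature, not Apple's actual model) illustrates how such a gap in the training data can let a mixed real/fake input land on the "real" side of the boundary:

```python
# Toy illustration (hypothetical; not Apple's model): a liveness classifier
# trained only on clearly real (~1.0) and clearly fake (~0.0) examples
# learns a decision boundary with no training data near the middle.

def train_centroids(real_scores, fake_scores):
    """Compute class centroids from one-dimensional 'realness' features."""
    real_c = sum(real_scores) / len(real_scores)
    fake_c = sum(fake_scores) / len(fake_scores)
    return real_c, fake_c

def classify(score, real_c, fake_c):
    """Label an input 'real' if its feature is closer to the real centroid."""
    return "real" if abs(score - real_c) < abs(score - fake_c) else "fake"

# Training data: only unambiguous faces, as Bkav alleges Apple used.
real_c, fake_c = train_centroids([0.95, 0.90, 0.98], [0.05, 0.10, 0.02])

# A 'half-real half-fake' mask sits in the untrained middle region; any
# nudge past the midpoint (e.g. real facial features attached to a mask)
# flips the label to 'real'.
print(classify(0.55, real_c, fake_c))  # prints "real"
```

The point of the sketch is only that a binary model trained exclusively on the extremes has no principled behavior for inputs in between, which is the gap Bkav says its composite mask exploits.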

Observers have pointed out that building the mask was not easy, requiring 3D scans of the owner's face, high-resolution 3D printing and multiple attempts to get the spoof right. That means this is not a vulnerability likely to be exploited in any common scenario.

In the world of serious cybersecurity, though, unlikely is still possible, and that's enough to take a technology out of the candidate pool for security covering high-value individuals and data. For most consumers (and for many users in business scenarios) the facial recognition technology in the iPhone X could be good enough. Before it can be considered a real replacement for more proven multi-factor authentication, though, the facial recognition technology may need more time to mature and improve.

— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.
