
Cloud

10/5/2018
10:30 AM
Richard Ford
Commentary

Who Do You Trust? Parsing the Issues of Privacy, Transparency & Control

Technology such as Apple's device trust score, which decides when "you" are "not you," is a good thing. But only if it works well.

Trust. It's a simple word, but I think it's at the heart of the recent social media brouhaha surrounding Apple's revelations regarding "iTunes device trust scores." Much of the discussion has made the situation sound rather dystopian, but in part I think the story taps into some very fundamental — and legitimate — fears the modern consumer has about how the minutiae of their lives have become a product to be bought, sold, and traded.

But when I dug deeper into the story, the thing that caught my attention was quite the opposite: At face value, at least, the technology is not meant to determine whether the owner of the device is trustworthy but to protect that person from someone who has stolen, or is otherwise abusing, the device. Put like that, it sounds significantly better.

To me, those issues around trust are why this story resonated so strongly … well, that and the fact that simply the way a feature is described can have an incredibly powerful impact, both positive and negative, on our psyche. On the one hand, we don't want our devices to decide if they trust us or not — that feels like only a few mouse clicks away from HAL calmly intoning "I'm sorry, Dave, I'm afraid I can’t do that." No pod bay doors for you!

On the other hand, I think that the concept of trust is woefully underused as a mechanism for providing protection for end users. In part, that's based on my own personal experiences working with companies that know everything there is to know about you. We have a right to be skeptical, and that skepticism comes back to the simple word we began with: trust.

Let's look at a hypothetical. Nobody wants someone to do bad things to their accounts from their phone. Thus, technology that decides "you" are "not you" is a good thing, if it works well. Even better, if you retain absolute control over the data used to make that decision, how it is used, and how it is protected, the overall privacy exposure is minimal. In this case, Apple sounds like it's doing the right thing with respect to privacy. Quoting from VentureBeat's coverage of the feature, Apple says that the "only data it receives is the numeric score, which is computed on-device using the company's standard privacy abstracting techniques, and retained only for a limited period, without any way to work backward from the score to user behavior." So far, so good.
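To make that data flow concrete, here is a minimal, hypothetical sketch of what "computed on-device, with only a numeric score transmitted" might look like in practice. The signal names, weights, and bucketing below are invented purely for illustration — Apple has not published its actual scoring method — but the shape of the design is the point: raw behavior stays local, and only a coarse number crosses the wire.

```python
# Hypothetical on-device trust scoring sketch (illustrative only).
# Raw behavioral signals never leave the device; the server sees a single
# coarse score. All signal names and weights here are invented.

from dataclasses import dataclass


@dataclass
class DeviceSignals:
    """Raw, on-device-only inputs (never transmitted)."""
    days_since_activation: int
    failed_unlock_attempts: int
    calls_placed_recently: int
    emails_sent_recently: int


def compute_trust_score(signals: DeviceSignals) -> int:
    """Reduce raw signals to a single score in [0, 100].

    Coarse 10-point bucketing is one simple way to make it hard to work
    backward from the score to the underlying behavior.
    """
    score = 50
    score += min(signals.days_since_activation, 365) // 30  # device longevity helps
    score += min(signals.calls_placed_recently, 20)         # normal usage helps
    score += min(signals.emails_sent_recently, 20)
    score -= 10 * min(signals.failed_unlock_attempts, 5)    # signs of abuse hurt
    score = max(0, min(100, score))
    return (score // 10) * 10  # quantize to coarse buckets


def payload_for_server(signals: DeviceSignals) -> dict:
    """Only the numeric score crosses the wire; raw signals stay local."""
    return {"device_trust_score": compute_trust_score(signals)}


if __name__ == "__main__":
    local = DeviceSignals(days_since_activation=400,
                          failed_unlock_attempts=0,
                          calls_placed_recently=12,
                          emails_sent_recently=5)
    print(payload_for_server(local))  # prints only the bucketed score, e.g. {'device_trust_score': 70}
```

The interesting design choice, and the one Apple's statement gestures at, is the lossy reduction: if the score is coarse enough and the raw inputs never leave the device, the receiving service learns almost nothing about the user's behavior beyond "this device looks normal" or "this device looks suspect."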

The breakdown here is the lack of trust most users have in services that offer them "better" in exchange for being able to access their data. Even if the provider of the service makes claims about protection of privacy or the single use of data collection, there's a healthy degree of suspicion among consumers. Trusting that a company is both well-intentioned in accessing one's data and is capable of actually implementing appropriate protections around it is a bit of a stretch in the current climate. It's interesting that consumers continue to use services like that — but I think it's safe to say it makes them uneasy. And it's a matter of trust.

Repairing damaged consumer trust is going to take time. We've seen some good progress on the legal front with the adoption of laws such as the EU's General Data Protection Regulation, but worldwide, the legislative framework is a patchwork at best. Furthermore, laws always lag far behind technology and, of course, there's always someone who's willing to run the risk of coloring outside these legal lines in order to make a quick buck (or ruble). In the interim, the solution is simple: Let's opt instead for control.

Control may seem oddly orthogonal to trust, but in fact it's related. As I like to think of it, trust is "a promise as yet unfulfilled." It's a bet, if you like, on the actions of another. Control, on the other hand, is a way of ensuring that outcome or action. It's a substitute (and a poor one at that) for trust, but it can bridge the gap until trust is established. With control, we can be reasonably sure of what's going to happen, in advance. By all means, build these systems with privacy baked in (privacy by design is a wonderful thing!) but then prove it. Open the system up to third-party inspection and audit. Transparency is a wonderful way of demonstrating what's really happening. It's hard, and it's imperfect — but it's a start.

If the best companies start actually doing this, everyone wins. Trust and reputation are powerful forces for good, and we need to harness them if we're to make progress. There's nothing really wrong with a device assessing a user's trustworthiness, but without the user trusting the system in turn, it's predestined to fail. Until we have bidirectional trust, transparency is the best way forward — there's no shortcut.


Dr. Richard Ford is the chief scientist for Forcepoint, overseeing technical direction and innovation throughout the business. He brings over 25 years' experience in computer security, with knowledge in both offensive and defensive technology solutions. During his career, ...