
Perimeter

10/24/2012
10:32 AM
Adrian Lane
Commentary

When Data Errors Don't Matter

Does bad data break 'big data' analysis?

I ran across this short video comparing MySQL to MongoDB, and it really made me laugh. A tormented MySQL engineer is arguing platform choices with a Web programming newbie who only understands big data at a buzzword level. Do be careful if you watch the video with the sound on because the latter portion is not child-friendly, but this comical post captures the essence of the argument relational DB architects have against NoSQL: Big data systems fail system architects' criteria for data accuracy and consistency. Their reasoning is that if the data's not accurate, who cares whether it's "Web scale"? It's garbage in, garbage out, so why bother?

But I think the question deserves more attention, so I'll ask it directly: Does some bad data in a big data cluster matter?

I think that the answer is, "No, it does not."

There are two reasons for this.

Data in the aggregate:
Most big data analytics base decisions across billions of records. Trends and decisions are not a single "X=Y" comparison, but billions of "X=Y" comparisons. Decisions are made across the aggregate to show trends and provide the likelihood of an event. Big data clusters are not used to produce an accurate ATM statement, but rather to predict a person's potential interest in a specific product based upon prior Web search history. It's less about binary outcomes and more like fuzzy logic.
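
To put a number on that intuition, here's a quick back-of-the-envelope simulation in Python. This is my sketch with made-up figures (a 2 percent click rate, 0.01 percent corruption), not measurements from any real cluster: flip a handful of records in a ten-million-row sample and watch how little the aggregate moves.

    # Back-of-the-envelope simulation -- made-up numbers, not data
    # from any production cluster.
    import numpy as np

    rng = np.random.default_rng(42)

    # Ten million "did the user click?" records, true rate about 2%.
    records = rng.random(10_000_000) < 0.02
    clean_rate = records.mean()

    # Corrupt 1,000 records (0.01% of the set) by flipping their values.
    corrupted = records.copy()
    bad_idx = rng.choice(corrupted.size, size=1_000, replace=False)
    corrupted[bad_idx] = ~corrupted[bad_idx]
    dirty_rate = corrupted.mean()

    print(f"clean estimate: {clean_rate:.5f}")
    print(f"dirty estimate: {dirty_rate:.5f}")
    print(f"shift:          {abs(clean_rate - dirty_rate):.6f}")
    # The shift is on the order of 1e-4 -- noise against a 2% signal.

The corrupted records nudge the estimated rate by roughly one part in ten thousand, which no downstream trend or prediction would ever notice.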

Data velocity:
Most of the clusters I've seen in operation pour new data in at a furious rate -- terabytes of data every day. Queries may favor more recent events, or they may balance their predictions on current and historic trend data. In either case, if some bad data gets into the cluster due to a hardware or software issue, it's likely to cause only a short-term dip in accuracy. Tomorrow a whole new batch of data will offset, overwrite, or mute the impact of yesterday's bad data. Data velocity and volume greatly reduce the impact of corruption in a handful of records.
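
A toy model makes that dilution visible. The sketch below is again my own illustration with invented numbers -- real clusters weight recency in far more sophisticated ways -- running an exponentially weighted trend over daily batches and injecting one corrupted batch:

    # Toy exponentially weighted trend over daily batches -- an
    # illustration with invented numbers, not any cluster's actual
    # weighting scheme.
    alpha = 0.3            # weight given to the newest daily batch
    trend = 0.020          # running estimate of, say, a conversion rate

    daily_batch_rates = [0.021, 0.019, 0.020,
                         0.350,                # day 4: a corrupted load
                         0.020, 0.021, 0.019, 0.020]

    for day, batch_rate in enumerate(daily_batch_rates, start=1):
        trend = alpha * batch_rate + (1 - alpha) * trend
        print(f"day {day}: batch={batch_rate:.3f}  trend={trend:.4f}")

    # Day 4 spikes the estimate, but each following day of normal data
    # decays the bad batch's influence by another 30 percent -- it is
    # offset, then muted, exactly as velocity suggests.

The corrupted day shows up as a spike, and then fresh data geometrically squeezes it out of the estimate within a few days.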

And that's the essence of big data analytics -- it's not so much about specific data points as it is about metatrends.

Keep in mind that if there is one thing that's consistent with big data systems, it's inconsistency. These systems are incredibly diverse in features and functions. It's dangerous to pigeonhole big data into a specific set of value statements because there are some 120 different NoSQL systems, each with add-on packages that provide near-limitless functional variations. While the Web programmer newbie in the video above may not have a clue, application developers who work with big data have tuned out the relational database dogma for good reason. There are, in fact, ACID-compliant databases built on a Hadoop framework. These provide transactional consistency -- granted, in different ways than many relational platforms -- but the options exist. There are cases where relational databases are a must-have, but the decision to choose one over the other is far more complex than what's commonly portrayed.

And let's not forget that most relational systems have their own issues with data accuracy. The handful of studies I've seen on data accuracy in relational platforms -- during the past 12 years or so -- find about 25 percent of the stored data to be inaccurate. Data entry errors, data "aging" issues where information becomes inaccurate over time, errors when collecting information, errors when aggregating and correlating, and errors when loading data into the relational format all exist in relational environments. This is not due to the hardware or software; it's simply due to how data is collected and processed between systems. It's a set of issues not often discussed: relational databases are excellent at transactional consistency, yet they still hold unreliable data -- and that affects analysts even more than it does with big data clusters.

Adrian Lane is an analyst/CTO with Securosis LLC, an independent security consulting practice. Special to Dark Reading. Adrian Lane is a Security Strategist and brings over 25 years of industry experience to the Securosis team, much of it at the executive level. Adrian specializes in database security, data security, and secure software development. With experience at Ingres, Oracle, and ...

 
