Commentary | Kaan Onarlioglu | 10/10/2018 10:30 AM

Security Researchers Struggle with Bot Management Programs

Bots are a known problem, but researchers will tell you that bot defenses create problems of their own when it comes to valuable data.

Bot management is all the rage in the security world. Every day, I find myself bombarded with articles proclaiming that N percent of Internet traffic is generated by bots, where N is a sufficiently alarming number to make most executives want to dash out and purchase the first bot-defense product in sight. While I can't speak for the accuracy of those reports, one thing's certain: There's a growing demand for effective bot mitigation.

I know. I work for a company that develops one such bot management solution, and I talk to customers about it daily. I do enjoy having some semblance of job security, but being the recovering academic that I am, I'm also really concerned. Conducting large-scale Internet crawls is an all too common task in many fields of security research. Does the research community fully understand the implications of bot defenses on their experiments? Do they do anything about it? I am not optimistic.

"Bot" is a notoriously overloaded term with numerous meanings. Today, it is generally understood to mean any software that performs automated tasks over the Internet. That includes malware, such as the agents that make up a botnet, but also benign software like search engine crawlers and information aggregators. Conveniently, this definition aligns with the features of popular bot management solutions; businesses certainly want malware protection, but they also have strong incentives to monitor, limit, block, or even serve false content to automated requests reaching their web properties.

This is a serious problem for security researchers.

Data collection via Internet crawls is a crucial part of security research. In my own work, I crawled millions of websites and scraped application stores, code repositories, forums, vulnerability databases, and more. Think about it. Researchers meticulously design experiments, build and analyze invaluable data sets in a scientific framework, and (sometimes literally) fight to publish and present their results at prestigious conferences, only to discover that their data set was tainted by a plethora of bot defenses scattered around the Internet.

In the best case, the collected data is merely biased: servers equipped with bot defenses block the connection or return a static page with no meaningful content. In the worst case, servers that return false information to thwart harvesters make it nigh impossible to even detect that something, somewhere, went wrong.
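
Back-of-the-envelope sanity checks can at least surface some of this damage after the fact. The Python sketch below, purely illustrative, flags crawl responses that look like bot-defense artifacts rather than real content: suspiciously short bodies, or identical bodies served by many unrelated sites (a shared block or challenge page, for instance). The function name, thresholds, and input format are my own assumptions, not part of any particular tool.

import hashlib
from collections import Counter

def flag_suspicious_responses(pages, min_length=500, duplicate_threshold=3):
    """Flag pages (a {url: html} dict) that look like bot-defense artifacts.

    Two crude signals: (1) responses that are suspiciously short, and
    (2) identical bodies served by several unrelated sites, which often
    indicates a shared block or challenge page. Thresholds are illustrative.
    """
    digests = {url: hashlib.sha256(html.encode("utf-8", "replace")).hexdigest()
               for url, html in pages.items()}
    body_counts = Counter(digests.values())

    suspicious = {}
    for url, html in pages.items():
        reasons = []
        if len(html) < min_length:
            reasons.append("unusually short body")
        if body_counts[digests[url]] >= duplicate_threshold:
            reasons.append("body identical across many unrelated sites")
        if reasons:
            suspicious[url] = reasons
    return suspicious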

I have no reason to doubt this situation significantly affects Internet crawls and measurement studies — today. In all likelihood, we regularly work with bad data, and then publish and read papers with skewed results. But we just don't yet have insights into how data collection is affected by bot defenses.

A solution is not likely to come from the business side. Widespread adoption of bot defenses won't be tapering off anytime soon. There simply isn't enough motivation for businesses to back down from their strong stance against bots; they won't forgo protection to accommodate a few innocuous crawlers among myriad malicious hits.

As far as researchers are concerned, there's always been a certain degree of awareness of anti-crawling techniques. Researchers came up with best practices such as crafting realistic request headers, limiting connection rates, and building crawlers on headless browsers. However, modern bot defenses are well-prepared to catch these tricks; they analyze browser characteristics, connection patterns, packet structure, and even hardware inputs, and combine these observations in nontrivial ways to distinguish between humans and our robot overlords.
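
For concreteness, here is a minimal Python sketch of those older best practices, assuming a crawler built on the requests library: browser-like headers and jittered delays between connections. The header values and the polite_fetch helper are illustrative, and, as noted above, modern bot defenses routinely see through exactly this kind of disguise.

import random
import time

import requests

# Browser-like headers; values are illustrative, not a guaranteed disguise.
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def polite_fetch(urls, min_delay=2.0, max_delay=5.0):
    """Fetch pages with realistic headers and a randomized delay between requests."""
    session = requests.Session()
    session.headers.update(HEADERS)
    results = {}
    for url in urls:
        try:
            resp = session.get(url, timeout=10)
            results[url] = (resp.status_code, resp.text)
        except requests.RequestException as exc:
            results[url] = (None, str(exc))
        # Rate limiting: jittered sleep so the connection pattern looks less mechanical.
        time.sleep(random.uniform(min_delay, max_delay))
    return results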

Yes, even the most intricate defense can be reverse-engineered and bypassed given enough resources and dedication. The bar, however, is high. Faced with a growing number of evolving bot management products, researchers are perpetually at a disadvantage.

The Need for Change
We need a paradigm shift. Here is an idea: The next time we run a crawl, let's acknowledge that the entire Internet is out there to corrupt our data, and duly deal with it! Data validation is key. Questionable data collection methodologies and low-quality data sets aren't exactly unknown territory for the research community, but we need even greater focus on this issue today.

I'm all too familiar with that urge to rush through data collection and get to the more interesting data analysis (and then submit a half-decent paper minutes before a deadline). This approach is missing the mark if it leads to inaccurate measurements and incorrect conclusions.

Data validation is a hard problem, but it's also a well-explored area of computer science. We have the necessary tools, like constraint validation for predictable data or clustering to spot outliers in complex data sets. When all else fails, manual analysis combined with sampling can be a surprisingly effective approach, even for extremely large data sets. It's well worth putting in the extra time and effort to systematically validate data, and to write at length about the process in publications, so that reviewers and readers know we did our part.
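
As a sketch of what such a validation pass could look like, the Python snippet below combines simple constraint checks, a basic statistical outlier flag (standing in for more sophisticated clustering), and random sampling for manual review. The record fields ("url", "page_size") and thresholds are assumptions about a hypothetical crawl data set, not a prescribed methodology.

import random
import statistics

def validate_records(records, sample_size=50, z_threshold=3.0):
    """Run a post-crawl validation pass over a list of record dicts.

    - Constraint validation: drop records that violate basic schema expectations.
    - Outlier detection: flag records whose page size is a statistical outlier,
      a simple stand-in for clustering on more complex data sets.
    - Sampling: return a random subset for manual inspection.
    """
    valid = [r for r in records
             if isinstance(r.get("url"), str) and r["url"].startswith("http")
             and isinstance(r.get("page_size"), int) and r["page_size"] > 0]

    outliers = []
    sizes = [r["page_size"] for r in valid]
    if len(sizes) > 1:
        mean, stdev = statistics.mean(sizes), statistics.stdev(sizes)
        if stdev > 0:
            outliers = [r for r in valid
                        if abs(r["page_size"] - mean) / stdev > z_threshold]

    manual_sample = random.sample(valid, min(sample_size, len(valid)))
    return valid, outliers, manual_sample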

Finally, I'll point out that this problem has an interesting beneficial side effect: the potential to open up unique research directions. Enabling functional yet ethical crawling techniques that are also aligned with businesses' needs is one obvious route this can take. However, I also anticipate novel techniques that can scientifically quantify the impact of bot defenses on measurements.

With better insights and visibility into this issue, we can better recognize our limitations, and pursue the promising paths toward a solution.


Kaan Onarlioglu is a researcher and engineer at Akamai who is interested in a wide array of systems security problems, with an emphasis on designing practical technologies with real-life impact. He works to make computers and the Internet secure — but occasionally ...