
Threat Intelligence

6/13/2019
10:30 AM

The Rise of 'Purple Teaming'

The next generation of penetration testing represents a more collaborative approach to the old-fashioned Red Team vs. Blue Team model.

In 1992, the film Sneakers introduced the term "Red Team" into popular culture as actors Robert Redford, Sidney Poitier, Dan Aykroyd, David Strathairn, and River Phoenix portrayed a team of security experts who hire themselves out to organizations to test their security systems by attempting to hack them.

This was a revolutionary concept at the time — the term "penetration test" didn't even exist yet, and the idea of a friendly security team trying to break through a company's defenses wasn't exactly commonplace. Today, penetration testing is an important part of any cybersecurity system, and both internal and external Red Teams play a critical role in that process.

But they don't do it alone. Organizations often employ "Blue Teams," referring to the internal security team tasked with defending against both real and simulated attacks. If this raises your curiosity about whether and how closely Red Teams and Blue Teams collaborate in security testing, then you've pinpointed the fast-rising cybersecurity trend of "Purple Teaming."

What Makes Purple Teaming Different?
For years, organizations have run penetration tests similarly: The Red Team launches an attack in isolation to exploit the network and provide feedback. The Blue Team typically knows only that an evaluation is in progress and is tasked to defend the network as if an actual attack were underway. 

The most important distinction between Purple Teaming and standard Red Teaming is that the methods of attack and defense are all predetermined. Instead of attacking the network and delivering a post-evaluation summary of findings, the Red Team identifies a control, tests ways to attack or bypass it, and coordinates with the Blue Team in ways that either improve the control or defeat the bypass. Often the teams will sit side by side to collaborate and truly understand outcomes.

The result is that teams are no longer limited to identifying vulnerabilities and working from their initial assumptions. Instead, they are testing controls in real time and simulating the kind of approach that intruders are likely to use in an actual attack. This shifts the testing from passive to active. Instead of working to outwit each other, the teams can apply the most aggressive attack environments and conduct more complex "what-if" scenarios through which security controls and processes can be understood more comprehensively and fixed before a compromise.
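The control-by-control workflow described above can be sketched as a simple triage loop: for each technique the Red Team attempts, the teams jointly record whether the control held and whether the Blue Team detected the attempt, then sort the results into remediation buckets. This is a minimal illustrative sketch, not a real purple-teaming toolkit; the technique IDs and control names below are hypothetical examples.

```python
from dataclasses import dataclass

@dataclass
class ControlTest:
    technique: str   # attack technique attempted (e.g. an ATT&CK-style ID; illustrative)
    control: str     # defensive control under test
    bypassed: bool   # did the Red Team get past the control?
    detected: bool   # did the Blue Team's tooling alert on the attempt?

def triage(tests):
    """Sort joint test results into the buckets the teams remediate together."""
    buckets = {"silent_bypass": [], "detected_bypass": [], "blocked": []}
    for t in tests:
        if t.bypassed and not t.detected:
            buckets["silent_bypass"].append(t.technique)    # worst case: fix control and detection
        elif t.bypassed:
            buckets["detected_bypass"].append(t.technique)  # detection worked; harden the control
        else:
            buckets["blocked"].append(t.technique)          # control held as designed
    return buckets

# Hypothetical exercise results recorded side by side:
results = triage([
    ControlTest("T1021", "EDR lateral-movement rule", bypassed=True, detected=False),
    ControlTest("T1059", "script-block logging", bypassed=True, detected=True),
    ControlTest("T1110", "account lockout policy", bypassed=False, detected=True),
])
```

The point of the structure is that both teams see the same ledger in real time, so a "silent bypass" becomes a shared work item rather than a line in a post-engagement report.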

How Deception Technology Adds Value to Penetration Testing
Part of what makes Red Teaming and Purple Teaming so valuable is that they provide insight into the specific tactics and approaches attackers might use. Organizations can enhance this visibility by incorporating deception technology into the testing program. The first benefit comes from detecting attackers early by enticing them to engage with decoys or deception lures. The second comes from gathering full indicators of compromise (IOCs) and tactics, techniques, and procedures (TTPs) on lateral movement activity. This significantly enhances visibility into how and when attackers circumvent security controls, enriching the information that typically results from these exercises.

Cyber deceptions deploy traps and lures on the network without interfering with daily operations. A basic deployment can easily be completed in under a day, providing the Blue Team an additional detection mechanism that blends in with the operational environment. This creates more opportunities to detect when the Red Team bypasses a defensive control, forcing team members to be more deliberate with their actions and making simulated attack scenarios more realistic. It also offers a truer test of the resiliency of the organization's security stack and the processes it has in place to respond to an incident.
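The core idea behind a network decoy is simple: it listens on a port that no legitimate system should ever touch, so any connection to it is, by definition, worth recording. The sketch below shows that principle with a bare TCP listener; it is a toy illustration under stated assumptions (an unused port, plain sockets), not a stand-in for a commercial deception platform, and the port number is arbitrary.

```python
import socket
from datetime import datetime, timezone

def run_decoy(host="0.0.0.0", port=2222, max_hits=1):
    """Listen on an otherwise-unused port and record every connection attempt.

    No service is offered; the decoy's only job is to log who touched it
    and when, then hand those records to the Blue Team as alerts.
    """
    hits = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(hits) < max_hits:
            conn, addr = srv.accept()
            hits.append({
                "src": addr[0],
                "src_port": addr[1],
                "time": datetime.now(timezone.utc).isoformat(),
            })
            conn.close()  # record the touch, offer nothing back
    return hits
```

Because legitimate traffic never reaches the decoy, every entry in `hits` is a high-fidelity signal, which is exactly what forces a Red Team (or a real intruder) to move more deliberately.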

The rise of Purple Teaming has changed the way many organizations conduct their penetration tests by providing a more collaborative approach to old-fashioned Red Team vs. Blue Team methodology. The increased deployment of deception technology in cybersecurity stacks has further augmented the capabilities of both the Red and Blue teams by allowing them to adopt a more authentic approach to the exercises.

Joseph Salazar is a veteran information security professional, with both military and civilian experience. He began his career in information technology in 1995 and transitioned into information security in 1997. He is a retired Major from the US Army ...