Last week, Forrester released an analysis of the recent MITRE ATT&CK Evaluations, in which Principal Analyst Allie Mellen provided important, objective commentary on this round of evaluations. She discussed the value of data-driven insights into product performance against rigorous testing and applauded the addition of macOS and false positives to the evaluations. At the same time, she identified some important challenges, such as the negative impact of high alert volumes on ingestion and the lack of alert correlation. As with each ATT&CK Evaluations release, this raises the question: how should users interpret the results?
Since ATT&CK was first released publicly, I have been engaging the vendor community to embrace ATT&CK, most notably as the creator of ATT&CK Evaluations. When ATT&CK Evaluations was created, the goal was to level up the security industry's ability to focus on real adversary behaviors (ATT&CK) rather than what was then the industry standard: antimalware tests. In essence, it was an effort to ensure that vendors were hitting a minimum bar of ATT&CK coverage.
ATT&CK Evaluations has been very effective at up-leveling the industry in this way, as shown by how vendors continually improved their coverage across the early rounds. To that end, the early rounds served a specific purpose: ensuring a minimum standard of capability.
However, when we now look at the evaluations with a critical eye, it is becoming increasingly clear that there is a misalignment between expectations (declaring a winner) and reality (ensuring a minimum standard of capability). Using ATT&CK Evaluations to pick a winner is a much harder problem: a lot of other data needs to be considered, and frankly it is a much more personal question that requires customization for the end user. This is where expertise from industry analysts and platforms like Tidal Cyber can help.
Users need to know: Is this a threat, and are these behaviors we should care about, or should we be looking at performance against other techniques? How do this tool's strengths and weaknesses align with our other tools? Is the evaluated configuration something we would deploy in our environment? These are just some of the potential questions.
The evaluations were never intended to answer these questions because they are not contextualized to provide data relevant to specific user environments. Since their introduction in 2018, Threat-Informed Defense has evolved. The idea is that by looking at the enterprise from the adversary's perspective, users gain critical insights into how to prioritize security operations and investments. This perspective helps users see more clearly how a skilled adversary would use their enterprise's resources against them.
Threat-Informed Defense helps bridge the gap between general evaluations and customized answers, and it is what drove us to create the Tidal Cyber platform. Using the MITRE ATT&CK knowledge base as a critical element of the platform's foundation, we overlay it with additional intelligence and the enterprise's security context to build a customized picture of defensive coverage.
With this customized picture of your expected coverage in hand, you can more effectively layer in test results, whether from ATT&CK Evaluations, internal red teams, or Breach and Attack Simulation tools, to increase your confidence in that coverage.
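To make the layering idea concrete, here is a toy sketch in Python. This is not the Tidal Cyber platform's data model or API; the technique IDs, coverage levels, and confidence labels are all assumptions chosen for illustration. The point is simply that claimed coverage plus test evidence yields a more honest picture than either alone.

```python
# Toy illustration (hypothetical data, not the Tidal Cyber platform):
# combine a claimed coverage map with test outcomes per ATT&CK technique.

# Assumed coverage per technique ID, e.g. from tool docs or configuration.
expected_coverage = {
    "T1059": "detect",   # Command and Scripting Interpreter
    "T1003": "detect",   # OS Credential Dumping
    "T1566": "none",     # Phishing - no claimed coverage
}

# Hypothetical test outcomes from an evaluation, red team, or BAS run.
# A technique absent from this dict was never tested.
test_results = {
    "T1059": True,    # expected detection fired during testing
    "T1003": False,   # expected detection did NOT fire
}

def confidence(technique: str) -> str:
    """Label how confident we are in coverage of one technique."""
    expected = expected_coverage.get(technique, "none")
    tested = test_results.get(technique)  # None means untested
    if expected == "none":
        return "gap"                      # known hole regardless of tests
    if tested is None:
        return "unverified"               # claimed coverage, no evidence
    return "verified" if tested else "needs-investigation"

for technique in sorted(expected_coverage):
    print(technique, confidence(technique))
```

Even this trivial version surfaces the cases that matter operationally: techniques where coverage is claimed but disproven by testing ("needs-investigation") and techniques with no coverage at all ("gap").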
The Tidal Cyber platform operationalizes Threat-Informed Defense so that users know, given the threats relevant to them and the rest of their stack, whether a solution is doing what it should to reduce risk and, if not, what to do about it. Think of it as a data-driven way to answer the ultimate question, grounded in your organization's context: are my tools protecting me, and if not, how can I improve?