Fall was a busy conference season for Tidal Cyber. My colleagues and I participated in events including Black Hat, FutureCon, Health-ISAC, FS-ISAC, ATT&CKCon, and numerous regional Cybersecurity Summits. As we spoke with attendees, one of the big takeaways was that organizations are trying to understand the risk associated with using AI. Rick Gordon and I had the opportunity to dig into this in more detail at the ACSC 2024 Annual Member Conference. The theme was “Developing AI Governance and Security Practice,” and several discussions focused on how organizations can weigh the risk against the reward of bringing AI into their organization as a business tool.
The use of AI internally is increasing the attack surface. Areas of concern include:
- How threat actors can compromise AI-enabled systems to spread misinformation.
- Users unwittingly submitting personally identifiable information (PII), intellectual property (IP), or confidential company information when asking AI-enabled systems questions.
- And the flip side: AI-enabled systems answering questions in ways that disclose PII, IP, and other sensitive information.
- Model drift that yields unreliable answers.
Each of these scenarios can result in financial, reputational, and competitive risk. Organizations are looking for tools to help them understand what could go wrong, assess their risk, and minimize their exposure.
Prioritizing Policies
The primary way organizations tell us they are currently mitigating risk is with a policy approach. A recent survey of 700 IT leaders confirms this, finding that 33% are implementing AI-specific security policies – the top approach to risk mitigation. Whether organizations are using AI models from third-party sources or creating their own, business leaders need to understand how these models are developed and confirm that the output being generated is the type they expect.
From a policy and process standpoint, this requires activities including:
- Understanding the data sources and systems these models pull information from to inform their answers.
- Confirming that the right checks and balances are being put in place to manage the process.
- Revisiting the models on a regular basis to validate that outcomes match expectations.
- Making sure that the models being used are from company-approved sources.
Without proper controls in place, introducing AI-enabled systems as a business tool can have damaging outcomes. For example, answering questions about refund policies using outdated data can put an enterprise in the embarrassing and costly position of promising a refund that violates current company policy. Allowing questions that elicit PII or IP exposes an organization to regulatory, legal, and competitive risk that could jeopardize the overall health of the organization.
A policy-based approach is an important step toward accounting for and mitigating these risks, but many Tidal Cyber customers want to go even further. They want to quantify the effectiveness of what they are doing to mitigate risk, identify what more they can do, and determine whether it is worth the effort.
Enter MITRE ATLAS™ and Tidal Cyber
Complementary to MITRE ATT&CK, ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics and techniques against AI-enabled systems, along with mitigations for AI security threats. ATLAS is gaining popularity within our customer base because defenders want to track their risk, understand whether they remain protected as adversaries evolve quickly, and identify what else they can do to strengthen their security posture.
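For defenders who want to explore the knowledge base programmatically, MITRE publishes the ATLAS data in the mitre-atlas/atlas-data repository on GitHub. The sketch below is a minimal example and makes assumptions worth verifying: that the repository still serves a dist/ATLAS.yaml file at this path, and that the file keeps its layout of a top-level matrices list whose entries contain techniques. Check both against the current repo before relying on it.

```python
# Minimal sketch: pull the published ATLAS knowledge base and list its
# techniques. The URL path and the matrices/techniques layout are
# assumptions based on the atlas-data repo; verify before relying on this.
import requests
import yaml

ATLAS_YAML = (
    "https://raw.githubusercontent.com/mitre-atlas/atlas-data/main/dist/ATLAS.yaml"
)

data = yaml.safe_load(requests.get(ATLAS_YAML, timeout=30).text)

# Walk each matrix and print its technique IDs and names.
for matrix in data.get("matrices", []):
    for technique in matrix.get("techniques", []):
        print(technique["id"], "-", technique["name"])
```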
This is the ideal time for a Threat-Informed Defense approach. The Tidal Cyber platform, built on ATT&CK, is extensible to include ATLAS™ and expands on it to help work through these business decisions. Tidal Cyber helps you quantify the risk to your AI-enabled systems, capture what you are doing to defend against threats to these systems, and keep a running catalog of the technologies and processes you have in place.
More specifically:
- ATLAS provides a starting point: a knowledge base that helps defenders understand how threats like data poisoning and model hallucination work.
- Tidal Cyber Enterprise customers can evaluate their organization’s AI-enabled systems against ATLAS techniques to capture and measure their AI risk coverage, be it through policy, procedure, or technology. It is crucial to understand both what you are doing with your own models to prevent these techniques and what the vendors you buy AI-enabled systems from are doing to prevent attacks.
- With this baseline in place, you can continually reassess risk with updates to your Coverage Map and Confidence Score. As you look to reduce risk, you can quickly determine whether a new technology is necessary or whether there is more value to be gained through new or updated policies (a toy sketch of this kind of scoring follows this list).
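Coverage Maps and Confidence Scores are Tidal Cyber platform features; the snippet below is only a hypothetical illustration of the underlying idea, tallying invented coverage levels against a few ATLAS technique IDs. The technique list and scoring scale are assumptions for demonstration, not Tidal Cyber’s actual scoring model.

```python
# Hypothetical sketch of quantifying coverage against ATLAS-style
# techniques. The scoring scale and technique list are invented for
# illustration; this is not Tidal Cyber's Confidence Score algorithm.

# 0 = no coverage, 1 = policy only, 2 = policy plus technical control.
coverage = {
    "AML.T0010 ML Supply Chain Compromise": 2,
    "AML.T0020 Poison Training Data":       1,
    "AML.T0051 LLM Prompt Injection":       0,
}

max_score = 2 * len(coverage)
score = sum(coverage.values())
print(f"Coverage: {score}/{max_score} ({100 * score / max_score:.0f}%)")

# Flag anything short of full coverage for review.
for technique, level in coverage.items():
    if level < 2:
        print("Gap to review:", technique)
```

Even a toy tally like this makes the conversation concrete: each gap maps to a named technique, so the choice between buying a new control and updating a policy can be argued per technique rather than in the abstract.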
Getting started early with the Tidal Cyber platform enriched with ATLAS means that as your security posture matures, you will have already assessed your policies against AI models and AI-enabled systems. As attacks targeting those systems evolve, you’ll have a framework in place to help you think critically about the questions to ask to understand how well your AI models are protected and to ensure the capability is delivered as intended. Defenders can quantify risk to the organization with the same Threat-Informed Defense approach they use for the rest of the enterprise and communicate to key stakeholders that they’re staying ahead of attacks on AI.