Fall was a busy conference season for Tidal Cyber. My colleagues and I participated in events including Black Hat, FutureCon, Health-ISAC, FS-ISAC, ATT&CKCon, and numerous regional Cybersecurity Summits. As we spoke with attendees, one of the big takeaways was that organizations are trying to understand their risk associated with using AI. Rick Gordon and I had the opportunity to dig into this in more detail at the ACSC 2024 Annual Member Conference. The theme was “Developing AI Governance and Security Practice”, and several discussions focused on how organizations can assess the risk versus the reward of bringing AI into their organization as a business tool.
The use of AI internally is increasing the attack surface. Areas of concern include:
Each of these scenarios can result in financial, reputational, and competitive risk. Organizations are looking for tools to help them understand what could go wrong, assess their risk, and minimize their exposure.
The primary way organizations tell us they are currently mitigating risk is with a policy approach. A recent survey of 700 IT leaders confirms this, finding that 33% are implementing AI-specific security policies – the top approach to risk mitigation. Whether organizations are using AI models from specific sources or creating their own models, business leaders need to understand how those models are developed and written, and confirm that the output being generated matches what is expected.
From a policy and process standpoint this requires activities including:
Without proper controls in place, introducing AI-enabled systems as a business tool can have damaging outcomes. For example, answering questions about refund policies from outdated data can put an enterprise in the embarrassing and costly position of promising a refund that violates current company policy. Allowing questions that elicit PII or IP exposes an organization to regulatory, legal, and competitive risk that could jeopardize the overall health of the organization.
A policy-based approach is an important step toward accounting for and mitigating these risks, but many Tidal Cyber customers want to go even further. They want to quantify the effectiveness of what they are doing to reduce risk, what more they could do, and whether it is worth the effort.
Enter MITRE ATLAS™ and Tidal Cyber
Complementary to MITRE ATT&CK, ATLAS is a knowledge base of adversary tactics and techniques against AI-enabled systems, along with mitigations for AI security threats. ATLAS is gaining popularity within our customer base because defenders want to track their risk and understand whether they remain protected as adversaries evolve quickly – and what else they can do to strengthen their security posture.
This is the ideal time for a Threat-Informed Defense approach. The Tidal Cyber platform, which alongside ATT&CK is extensible to include ATLAS™, expands on it to help work through these business decisions. Tidal Cyber helps you quantify the risk to your AI-enabled systems, capture what you are doing to defend against threats to those systems, and keep a running catalog of the technologies and processes you have in place.
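To make the idea of quantifying defensive coverage concrete, here is a minimal sketch of the underlying logic: given a set of adversary techniques deemed relevant to your AI-enabled systems and a mapping of defenses to the techniques they address, compute which techniques are covered and where the gaps are. The technique IDs and names below are illustrative placeholders, not an authoritative ATLAS export, and the defense names are hypothetical.

```python
# Techniques an assessment says are relevant to our AI-enabled systems.
# IDs and names are illustrative placeholders, not real ATLAS entries.
relevant_techniques = {
    "AML.T0001": "Prompt Injection (illustrative)",
    "AML.T0002": "Training Data Poisoning (illustrative)",
    "AML.T0003": "Model Inversion / PII Extraction (illustrative)",
}

# Defensive measures in place, mapped to the techniques they address.
defenses = {
    "input-filtering": ["AML.T0001"],
    "data-provenance-checks": ["AML.T0002"],
}

def coverage(relevant, defenses):
    """Return covered IDs, uncovered IDs, and a simple coverage ratio."""
    covered = {t for techs in defenses.values() for t in techs if t in relevant}
    uncovered = set(relevant) - covered
    ratio = len(covered) / len(relevant) if relevant else 1.0
    return covered, uncovered, ratio

covered, uncovered, ratio = coverage(relevant_techniques, defenses)
print(f"Coverage: {ratio:.0%}; gaps: {sorted(uncovered)}")
# → Coverage: 67%; gaps: ['AML.T0003']
```

A platform obviously layers far more on top of this (technique metadata, capability scoring, reporting), but the core question it answers for stakeholders is the same: which known adversary techniques do our current defenses address, and where are we exposed?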
More specifically:
Getting started early with the Tidal Cyber platform enriched with ATLAS means that as your security posture grows, you will have already assessed your policies against AI models and AI-enabled systems. As attacks targeting those systems evolve, you'll have a framework in place that helps you ask the right questions about how well your AI models are protected and whether the capability is being delivered as intended. Defenders can quantify risk to the organization with the same Threat-Informed Defense approach they use for the rest of the enterprise, and communicate to key stakeholders that they're staying ahead of attacks on AI.