Tidal Cyber Blog

Gen AI in Security – Improving SOC, CTI, and Red Team Tasks

Written by Harrison Van Riper | Mar 13, 2025 1:00:00 PM

A key piece of advice I found valuable when starting a company is to “solve a specific problem.” AI has unlocked opportunities for problem-solving across the technology landscape and is driving a new breed of security startups.

In my case, I started Zero-Shot Security with the intent to solve a specific problem within cyber threat intelligence (CTI) that persisted despite earlier attempts at a solution. Generative AI and recent LLM advancements were integral to my ability to create a differentiated product that ultimately led to an opportunity to join forces with Tidal Cyber early this year.

AI Advancements Unlock Opportunities

Prior to the explosion of Large Language Models (LLMs), machine learning typically involved specialized models trained for highly specific, repeatable tasks that produce predictable outputs.

Generative AI, particularly LLMs and other generative model types, represents a significant leap forward in capabilities: many of the natural language processing tasks that once required individual ML models can now largely be accomplished with a single LLM. These models can extract data and contextualize it dynamically to produce novel outputs on the fly. The level of technical reasoning possible with an LLM far exceeds the level of analysis possible with individual, task-specific ML models.

The innovation I wanted to bring to security became possible because of Gen AI and LLMs, so I dug deep into how I could use the technology to advance the field. There are three key areas where security teams face real, daily problems and where I’ve seen cybersecurity AI startups coalescing.

Before I lose you to the skepticism around new technologies that runs high among cybersecurity professionals, consider two things:

  1. The cybersecurity industry is at a disadvantage in terms of workers and time.
  2. Threat actors are already on the AI bandwagon, using it to lower barriers to entry.

The time has come to take an objective view of the technology, acknowledge where it is lacking, and admit where it is very good.

You don’t have to look far to find industry leaders spreading the same message. A LinkedIn post from Panther’s Founder & CTO hits the nail on the head: “The next evolution in security operations isn't about replacing your team – it’s about augmenting them with intelligent AI agents that can think, reason, and act within defined boundaries. Although the industry is skeptical, we shouldn't ignore the pace of innovation with LLMs.”

This isn’t about “AI for AI’s sake.” This is about using AI to rethink and improve how we address specific areas within security operations. Let’s explore what’s happening. 

3 Near-Term Opportunity Areas for AI in Security 

Currently, the market is trending towards automating tasks that fall under three main security roles: SOC analyst, CTI analyst, and Red Team analyst.

  1. AI in the SOC. A wave of companies has emerged providing solutions that use AI in SOC workflows. It’s an area getting lots of attention, with industry analysts and innovators examining what the future holds. SOCs are still ingesting data and alerts, but instead of humans analyzing every alert and doing the first level of investigation, AI agents query tools such as EDRs and SIEMs to handle the initial follow-up investigation tasks, with human reviewers in the loop, a pattern sketched in the example below.

Offloading the bulk of Tier-1 work to AI saves costs, improves efficiency, and creates an opportunity to upskill analysts to Tiers 2 and 3 so they can use their expertise to focus on what matters. Whether for a SOC or an MDR service provider, automating the tedium accurately is of huge value, particularly when skills are scarce and demand is high.

SOAR solutions, playbooks, and automated investigation workflows have been around for years. But this new iteration looks critically at the limitations of static playbooks and workflows and revisits how the same work can be done better, and more dynamically, with new technology. The potential to cut investigation time from 25-40 minutes to just over 3 minutes without sacrificing reliability can be transformational for SOC teams.
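To make that pattern concrete, here is a minimal sketch of an LLM-assisted Tier-1 triage loop. It is illustrative only, not any vendor’s implementation; query_siem and call_llm are hypothetical placeholders standing in for a real SIEM/EDR API and a real LLM provider call.

```python
# Illustrative sketch of an LLM-assisted Tier-1 triage loop (not a real product).
# query_siem() and call_llm() are hypothetical placeholders.
import json

def query_siem(query: str) -> list[dict]:
    # Placeholder: in practice this would call your SIEM or EDR API.
    return [{"host": "WKSTN-042", "event": "powershell.exe -enc ...", "count": 3}]

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call your LLM provider.
    return json.dumps({
        "verdict": "suspicious",
        "reasoning": "Encoded PowerShell on a workstation with no change ticket.",
        "next_steps": ["Pull full process tree", "Check proxy logs for beaconing"],
    })

def triage_alert(alert: dict) -> dict:
    # 1. Gather context the way a Tier-1 analyst would.
    related_events = query_siem(f"host={alert['host']} last_24h")
    # 2. Ask the model to reason over the alert plus the gathered context.
    prompt = (
        "You are a SOC Tier-1 analyst. Given the alert and related events, "
        "return JSON with 'verdict', 'reasoning', and 'next_steps'.\n"
        f"Alert: {json.dumps(alert)}\nRelated events: {json.dumps(related_events)}"
    )
    result = json.loads(call_llm(prompt))
    # 3. Keep a human in the loop: the agent recommends, an analyst approves.
    result["requires_human_review"] = result["verdict"] != "benign"
    return result

if __name__ == "__main__":
    print(triage_alert({"host": "WKSTN-042", "rule": "Suspicious PowerShell"}))
```

The important design choice is the last step: the agent gathers evidence and recommends a verdict, but anything it does not confidently close as benign is routed to a human reviewer.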

  2. Improving CTI work with AI. A handful of solutions that let you build automated workflows incorporating CTI capabilities have existed for some time. The emergence of AI startups dedicated to CTI is a natural step given CTI’s role within SecOps as an intermediary, aggregating data, making sense of it, and sharing it with different teams. As the level of “reasoning” within LLMs increases, CTI is a sweet spot for the technology because, on the data side, LLMs can automate data extraction with greater detail, accuracy, and relevance. On the communication side, LLMs excel at reporting across a range of technical depth – from executive leadership to detection engineers to more tactical operators.

The solution I developed, NARC, is proof of this approach. NARC solves a specific problem in CTI: mapping textual descriptions of threat behaviors to MITRE ATT&CK techniques. Thanks to what’s possible with LLMs, a task that typically takes CTI analysts multiple hours of manual work can be handled by NARC, allowing analysts to focus on giving those mappings context for their stakeholders.

At Tidal, we’re pushing these capabilities forward all the time. Recently, we extended NARC’s data extraction to areas that require the kind of contextual analysis only LLMs can provide. Targeted-country extraction is an example. If a blog mentions that APT28 targeted the United States and the United Kingdom, but also notes that APT28 is a Russian threat actor, traditional ML models can’t make sense of the context around each country mentioned, so every country in the blog ends up being extracted. LLMs, on the other hand, can pluck out the United States and the United Kingdom, understanding that we want only the countries that were targeted and nothing else, as the sketch below illustrates.

Traditional ML models have limited use in areas where context evaluation and critical thinking come into play. That level of granular extraction simply hasn’t been possible before without human intervention.
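As a rough illustration of this kind of context-aware extraction, here is a minimal sketch; it is not NARC’s actual implementation, and call_llm is a hypothetical placeholder for any LLM provider call. The prompt asks for ATT&CK techniques and for targeted countries only, so a country mentioned purely as the actor’s origin (Russia, in the APT28 example) is excluded.

```python
# Illustrative sketch of context-aware CTI extraction with an LLM (not NARC itself).
# call_llm() is a hypothetical placeholder.
import json

REPORT = (
    "APT28, a Russian threat actor, targeted organizations in the United States "
    "and the United Kingdom using spearphishing attachments."
)

def call_llm(prompt: str) -> str:
    # Placeholder response shaped like the JSON the prompt asks for.
    return json.dumps({
        "techniques": [{"id": "T1566.001", "name": "Spearphishing Attachment"}],
        "targeted_countries": ["United States", "United Kingdom"],
    })

def extract_cti(report_text: str) -> dict:
    prompt = (
        "From the report below, return JSON with two keys:\n"
        "  techniques: MITRE ATT&CK techniques describing attacker behavior\n"
        "  targeted_countries: countries that were targeted; exclude countries "
        "mentioned only as the actor's origin\n"
        f"Report:\n{report_text}"
    )
    return json.loads(call_llm(prompt))

if __name__ == "__main__":
    # Russia appears in the report, but only as attribution, so it is excluded.
    print(extract_cti(REPORT))
```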

  3. Autonomous Red Teaming. As LLMs develop a better understanding of computer use and can automate more unstructured tasks, it becomes possible to apply those capabilities to pen testing and red teaming. The idea is to task an AI system with a request like “target this server on this network and use whatever methods you need.” This might include pulling tools from the internet, creating new code, and launching attacks against specific systems.

This is entirely feasible today and can dramatically change how red teaming is currently done, with static playbooks that operate on a defined set of criteria. Creating playbooks dynamically and adjusting to different threats on the fly puts defenses through the wringer far more than traditional red team exercises or pen tests can. AI doesn’t rest; it can keep attacking and changing behaviors. Although it’s early days, dynamic agents that can run red team exercises at an expert level are on the horizon.
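As a heavily simplified sketch of what such an agent could look like, the snippet below shows a plan/act/observe loop constrained to an authorized scope and a hard step limit. Nothing here performs a real attack; call_llm and run_action are hypothetical placeholders for a planner and an execution layer.

```python
# Illustrative sketch of a scoped red-team agent loop (nothing here attacks anything).
# call_llm() and run_action() are hypothetical placeholders.
import ipaddress
import json

AUTHORIZED_SCOPE = ipaddress.ip_network("10.0.5.0/24")  # engagement rules
MAX_STEPS = 5                                            # hard stop for the loop

def call_llm(prompt: str) -> str:
    # Placeholder planner: a real agent would ask an LLM for the next action.
    return json.dumps({"action": "port_scan", "target": "10.0.5.17", "done": True})

def run_action(action: dict) -> str:
    # Guardrail: refuse anything outside the authorized scope.
    if ipaddress.ip_address(action["target"]) not in AUTHORIZED_SCOPE:
        return f"blocked: {action['target']} is out of scope"
    # Placeholder executor: a real agent would invoke tooling here.
    return f"simulated result of {action['action']} against {action['target']}"

def red_team_loop(objective: str) -> list[str]:
    history: list[str] = []
    for _ in range(MAX_STEPS):
        plan = json.loads(call_llm(
            f"Objective: {objective}\nScope: {AUTHORIZED_SCOPE}\n"
            f"History: {history}\nReturn JSON with keys: action, target, done."
        ))
        history.append(run_action(plan))  # act and observe
        if plan.get("done"):              # planner decides when the objective is met
            break
    return history

if __name__ == "__main__":
    print(red_team_loop("Assess exposure of the staging subnet"))
```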

Opportunities Await

Businesses care about the speed and efficiency AI can bring to SecOps. But more than that, advancements in AI open up a wealth of options for what we can do with different kinds of data, including data that has traditionally gone unused, letting us do things we couldn’t before. Cybersecurity and AI experts are joining forces to creatively address the industry’s supply and demand problem. When looking for ways to improve security, solutions that use Gen AI and LLMs present a realm of real possibilities that even staunch skeptics should consider.