For every enterprise Threat Intelligence Program, the line between success, neglect, and failure can be surprisingly thin. But what defines the success of a Threat Intelligence Program? The definition of success can vary greatly depending on the nature of the organization. Given the varying sizes, technologies, and skill levels of teams, there may not be a black-and-white guidebook for mapping out what success looks like, but there are at least some general characteristics most of us can agree on. This blog discusses some of the characteristics typically inherent to a successful Threat Intelligence Program.
1) Understanding What Constitutes a Threat Intelligence Program
One of the key factors to success in establishing any program is understanding its constituent elements. In the case of Threat Intelligence Programs, there are a few terms that are used interchangeably but actually indicate important differences.
• Cyber Security Program - The overall collaborative efforts of the members of the SOC/NOC/IR/Threat Intel teams managing their day-to-day operations. This may or may not include managing threat intelligence directly.
• Threat Data and Threat Intelligence - To summarize a Gartner definition, threat intelligence can typically be defined as a collection of evidence-based knowledge about an existing or emerging threat. This usually boils down to malicious IP addresses, domain names, URLs, email addresses, file hashes, and file names. I bring up threat data because most of it is just data until you have a means to understand what’s affecting your environment specifically (or from a strategic sense what is potentially targeting your industry). Threat intelligence will include additional context and help to determine attribution.
• Threat Intelligence Program - Threat Intelligence Programs are valuable because they are one of the most effective ways to address business risk. A Threat Intelligence Program ideally has a defined, scheduled, and systematic approach to ingest various sources/formats of threat intelligence, along with context and evidence for attribution. It also has the ability to curate information, parse, and apply that information to live traffic to understand where hits are coming from. A program like this also helps to prioritize your responses to alerts appropriately on a consistent basis.
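To make "ingest various sources/formats" concrete, here is a minimal sketch of normalizing one feed into structured records ready for curation and matching. The CSV layout, field names, and values are hypothetical; real feeds arrive in many formats (CSV, STIX, JSON) and each provider's schema differs.

```python
import csv
import io

# Hypothetical CSV feed: indicator,type,confidence,source. Real feed
# schemas vary widely by provider; this layout is illustrative only.
RAW_FEED = """indicator,type,confidence,source
198.51.100.23,ipv4,85,abuse-feed
evil-login.example,domain,70,phish-feed
d41d8cd98f00b204e9800998ecf8427e,md5,60,malware-feed
"""

def ingest_feed(raw_text):
    """Normalize one feed into a list of IOC dicts with consistent fields."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_text)):
        records.append({
            "indicator": row["indicator"].strip().lower(),
            "type": row["type"],
            "confidence": int(row["confidence"]),
            "source": row["source"],
        })
    return records

iocs = ingest_feed(RAW_FEED)
print(len(iocs))             # 3
print(iocs[0]["indicator"])  # 198.51.100.23
```

Normalizing every source into one internal shape like this is what makes the later steps (curation, matching against live traffic, prioritization) possible on a consistent basis.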
2) Curated Threat Intelligence Sources
Uncurated intelligence can light up your various detection systems with alerts that end up overwhelming the folks responsible for investigating them. Every minute they spend investigating something that isn’t a threat is a minute longer another threat continues to exist in the environment. Curation, the process of ensuring a particular source of threat intel is not riddled with false positives, can help to alleviate this challenge. There are a number of free and paid sources that assist in the curation of threat intelligence. Technically, if there is a defined process for users to manually research/enrich IOCs using free sources before applying them to the environment, this can be a useful form of curation. However, the reality of that process will depend on the size of the team, the level of expertise of the team, and the amount of time an individual’s role allows them to commit to the process. Having the team and the skills, or purchasing third-party solutions, certainly aids in alleviating the challenges of doing this internally.
There are a lot of OSINT sources that provide valuable intelligence, but an inherent problem with even the best OSINT feeds is that they aren’t vetted/validated. We are relying on the accuracy of the greater community. Actor groups themselves have been known to ingest open source feeds to check whether their calling cards are already known, and to flood these sources with false data as part of denial and deception operations.
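A basic automated curation pass can be sketched as below: drop allowlisted (known-good) indicators, drop entries below a confidence floor, and de-duplicate across feeds. The allowlist entries, threshold, and record fields here are illustrative assumptions, not a complete curation pipeline.

```python
# Minimal curation pass over ingested IOC records. The allowlist and
# confidence threshold are illustrative; tune both to your environment.
ALLOWLIST = {"8.8.8.8", "microsoft.com"}  # known-good infrastructure
MIN_CONFIDENCE = 50

def curate(iocs):
    """Filter likely false positives and duplicates from a list of IOC dicts."""
    seen, kept = set(), []
    for ioc in iocs:
        value = ioc["indicator"]
        if value in ALLOWLIST:
            continue  # legitimate infrastructure that slipped into a feed
        if ioc["confidence"] < MIN_CONFIDENCE:
            continue  # too weak to action without manual enrichment
        if value in seen:
            continue  # duplicate across feeds
        seen.add(value)
        kept.append(ioc)
    return kept

raw = [
    {"indicator": "8.8.8.8",       "confidence": 90},  # allowlisted resolver
    {"indicator": "198.51.100.23", "confidence": 85},
    {"indicator": "198.51.100.23", "confidence": 70},  # duplicate entry
    {"indicator": "evil.example",  "confidence": 20},  # below threshold
]
print([i["indicator"] for i in curate(raw)])  # ['198.51.100.23']
```

Even a simple filter like this cuts down the alert volume that reaches analysts; manual research/enrichment can then focus on the indicators that survive the pass.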
3) Relevant Threat Intelligence Sources
“Relevance” is important to keep in mind for Threat Intelligence Programs because not every organization is at risk from the same kinds of threats. I like to divide these into two major categories - Opportunistic vs Targeted. (Clearly the landscape is more complex than this, but a blog can only be so long).
Opportunistic attacks are like fishing with a wide net, catching whatever crosses its path. Opportunistic campaigns aim to infect the general population and are a threat to all sizes of enterprises, government organizations, and even our local communities and laptops. A successful Threat Intelligence Program should always cover basic opportunistic threats like ransomware locking files, distributed botnets, information stealing malware, etc. Both open source and premium feeds are relevant sources that provide excellent coverage of these threats.
For a Threat Intelligence Program to be successful in a larger enterprise, it must go beyond the basics. These organizations are likely to be hit with targeted attacks and are concerned with two major vectors outside the realm of opportunistic attacks - their brand and their vertical. Relevant actions and sources of information for these organizations include monitoring for domain typosquatting, spoofed websites, compromised user accounts, and specific keyword mentions on dark web forums and marketplaces about their brand and/or their industry. It’s important to remember that sometimes valuable threat intelligence won’t be in the form of an IOC. Reports of dark web activity describing intended plans, or potentially what parts of the organization are being targeted, are just as valuable. Another useful source exists in the form of a relevant ISAC (Information Sharing and Analysis Center). These organizations provide valuable intelligence submitted directly from practitioners in the same industry, providing organizations awareness about developing threats targeting their sector. This information also needs a bit of curation, but without this visibility organizations miss a large part of the attack surface.
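The typosquatting monitoring mentioned above can be sketched with simple string similarity: compare newly observed domains against the brand domain and flag close matches. The brand domain, candidate list, and threshold are hypothetical, and a production system would also cover homoglyphs, added/dropped TLDs, and keyword permutations.

```python
from difflib import SequenceMatcher

BRAND = "examplebank.com"  # hypothetical brand domain to protect

def similarity(a, b):
    """Ratio in [0, 1] of how closely two domain strings match."""
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(candidates, threshold=0.85):
    """Flag observed domains that closely resemble the brand domain."""
    return [d for d in candidates
            if d != BRAND and similarity(d, BRAND) >= threshold]

new_domains = ["examp1ebank.com", "exampelbank.com", "totally-unrelated.org"]
print(flag_typosquats(new_domains))  # ['examp1ebank.com', 'exampelbank.com']
```

Flagged domains still warrant human review (some will be legitimate), which is another reason the non-IOC context described above matters.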
4) Threat Intelligence Context to Improve Response
If a team is used to seeing 1,000 alerts per day, chances are they can’t review all of them. Of the alerts they do review, they need some confidence that they’re addressing the highest priorities first and not letting critical ones slip through the cracks. For most organizations, there simply aren’t enough hours in the day nor enough payroll to maintain a team large enough to track down every alert every day. Ensuring your sources have context will provide a means of prioritization once the alerts start flowing in from the technologies where the intel is applied.
5) Strategies for High Priority Alerting
Empowered with context surrounding alerts, organizations can develop a process to respond appropriately. Some useful characteristics to map various tiers of severity are:
Directionality - Did a malicious connection originate from an infected machine inside the network, or something external trying to get in?
Success/Failure - Did any perimeter or endpoint devices shut the connection down, or was the connection successful? Do I have the proper logging capabilities to understand the difference?
Indicator Type - Is the alert connecting to something related to C2 or malware infrastructure? Does the alert contain an IOC with known associations with an APT group, or is the alert tagged with information calling out a specific campaign or malware family? Is this context available to me to prioritize?
These seemingly small but powerful pieces of information are critical in helping your team address the threats related to the highest severity level. Every day is a game of balancing time and risk, and without context this becomes an even harder game to win.
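The three characteristics above can be combined into a simple severity score for ranking a triage queue. The weights, field names, and thresholds below are illustrative assumptions; a real scoring scheme would be tuned to the organization and its tooling.

```python
def score_alert(alert):
    """Toy severity score built from directionality, success, and indicator
    context. The weights are illustrative, not a recommended scheme."""
    score = 0
    if alert["direction"] == "outbound":
        score += 40  # an infected host calling out is worse than a blocked probe
    if alert["connection_succeeded"]:
        score += 30  # perimeter/endpoint controls did not stop it
    if alert.get("apt_association"):
        score += 30  # IOC tied to a known actor group or campaign
    return score

alerts = [
    {"id": 1, "direction": "inbound",  "connection_succeeded": False,
     "apt_association": False},
    {"id": 2, "direction": "outbound", "connection_succeeded": True,
     "apt_association": True},
]
ranked = sorted(alerts, key=score_alert, reverse=True)
print([a["id"] for a in ranked])  # [2, 1] — successful outbound APT hit first
```

The exact numbers matter far less than having an agreed-upon, consistent ordering the whole team triages against.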
6) Actionable Alerts
While certainly not a crystal ball, prioritized alerts can provide some incredibly useful context to identify certain patterns of behavior. As we identify patterns we can be a bit more strategic and proactive with our research efforts. Are there attacks from certain geographies targeting you more than others? Are segments of your workstations consistently exploited by a specific type of threat? After exploring these patterns and relationships, an organization can author new content, import relevant IOCs, and develop new alerting/blocking strategies based on the vulnerabilities/tactics that consistently affect each part of the network.
Ensuring the team has appropriate context to design alerts, and an agreed-upon strategy to prioritize the alerts that need to be triaged every day, is paramount to the consistent and successful mitigation of threats.
7) Efficient Use of Resources
Organizations often have very difficult decisions to make every year as it relates to their internal budget constraints. In a perfect world, every organization would have a Threat Analyst, SOC, CSIRT, and NOC team. In this perfect world, there would be enough resources to go around and react to every alert. But of course, this isn’t how things usually map out. Sometimes an organization needs to allocate resources for a new security technology instead of a new employee, or vice versa. There is an unspoken art to this balancing act, for which security leaders don’t get enough credit. Sometimes they aren’t in control of how much budget is allocated to achieve their annual goals and they have to do the best they can with what they have available. This usually results in making moderate concessions to get as close to achieving those goals as possible. Members of the security team may have to work a bit harder and learn a little more to accommodate a “wearing multiple hats” situation. Some security technologies may be taxed more than the administrator would like, or perhaps certain log sources won’t get ingested to save resources. Having regular meetings that clearly outline and give visibility to the skills best suited to handle tasks under these constraints is an underrated and underutilized process.
There are a few other categories that could easily be thrown into this conversation, but again, a blog can only be so long. A successful Threat Intelligence Program will not culminate in just a specific feed or feeds, or a one-size-fits-all solution. A successful Threat Intelligence Program is a calculated, scheduled, systematic approach to ingest high-fidelity data, apply that data, understand context, develop proactive strategy, and respond to alerts in tune with their priority. This will involve the feeds, the technologies, and most importantly the people who understand the process and have agreed upon the strategy.