Artificial intelligence security strategy

The promises of artificial intelligence (AI) are, at once, exciting, mind-boggling, and confusing. InfoSec Insider spoke with artificial intelligence expert and cybersecurity thought leader Vijay Dheap, who helps separate fact from fiction and provides guidance for companies looking to use AI as part of their defensive security strategy.

InfoSec Insider: Artificial intelligence has made huge progress in the last few years. Where are the opportunities for AI to enhance security?

Vijay Dheap: 

Looking at artificial intelligence from a defensive security perspective, there are two main areas for application:

  1. As a force multiplier: Everywhere across the spectrum in security we are short staffed. Whether it’s testing our applications, scanning for vulnerabilities, or monitoring systems, security teams across every industry, at companies of all sizes, need more resources to manage workloads. Artificial intelligence can help reduce reliance on manual approaches and allow security practitioners to focus on more strategic and risk-based initiatives.
    1. Achieving greater levels of automation: Automating low-level, repetitive tasks is nothing new to business advancement. With artificial intelligence where it is today, we can now use automation to infer and maintain optimal system hygiene, or to derive security testing requirements that are then executed to evaluate applications and data movement.
    2. Supporting decision making:
      1. Providing immediate access to information in context;
      2. Aggregating, synthesizing, and summarizing content;
      3. Performing pre-processing analysis to reveal patterns of behavior;
      4. Helping security practitioners prioritize their efforts by answering questions like: Which threats are most relevant to the systems I am running? Which risks should I focus on today? How can I stay aware of new information as it becomes available?
  2. As active defense against adversaries: Attackers are using AI, so defensive security teams had better incorporate it too. To illustrate why this is necessary: researchers studying adversarial machine learning have found that AI systems can learn how security controls like antivirus and IDS/IPS behave, then use that knowledge to overwhelm them or find gaps in their logic. Because most commercial security technologies today run on a standard set of rules, AI can readily identify and adapt to those patterns, then show attackers our greatest areas of vulnerability. If we want to level the playing field, we have no choice but to use AI as a form of active defense. Enterprises can inject AI into user behavior analytics, security monitoring, IDS/IPS (basically any technology built on rules) and discover their own gaps before attackers do.
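To make the rule-probing threat concrete, here is a minimal, hypothetical sketch (the rate threshold and detector are invented for illustration) showing how an attacker who can only observe whether traffic gets flagged can binary-search a static rule's exact decision boundary:

```python
# Hypothetical sketch: why fixed detection rules leak their own boundaries.
# An attacker who can observe "flagged or not" can recover the threshold.

RATE_LIMIT = 500  # invented IDS rule: flag anything over 500 requests/minute


def rule_based_ids(requests_per_minute: int) -> bool:
    """Return True if traffic is flagged by the static rule."""
    return requests_per_minute > RATE_LIMIT


def probe_threshold(low: int = 0, high: int = 10_000) -> int:
    """Binary-search the detector's decision boundary using only its
    flagged/not-flagged responses -- no insider knowledge required."""
    while low < high:
        mid = (low + high + 1) // 2
        if rule_based_ids(mid):
            high = mid - 1  # flagged: threshold must be below mid
        else:
            low = mid       # not flagged: safe to go at least this fast
    return low  # highest rate that evades the rule


evasion_rate = probe_threshold()
print(evasion_rate)  # prints 500: the attacker now knows the exact safe rate
```

An adaptive, baseline-driven model is harder to probe this way because its decision boundary shifts with observed behavior rather than sitting at a fixed, discoverable constant.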

II: Although it has improved by leaps and bounds, there are still some limiting factors that prevent relying on AI entirely. Can you explain what those are?

VD: The promise of AI in cybersecurity is greater than its current potential. This is mainly due to the lack of availability of good, reliable data. Though most organizations feel they are swimming in data, for AI to work properly organizations need a higher quality of data at greater scale. (Remember: the more good data AI has to process, the more it can “learn.”) Most organizations have not instrumented their processes or IT systems to effectively assist in the data science process.
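As an illustration of what that instrumentation gap can mean in practice, here is a minimal, hypothetical sketch (the log format and field names are invented) of cleaning raw event logs into the consistent, deduplicated records an AI pipeline needs, which is exactly the kind of work most organizations have not yet built into their systems:

```python
# Hypothetical sketch: turning raw, inconsistent logs into ML-ready records.
# The log format and field names are invented for illustration.
import csv
import io
from datetime import datetime

RAW_LOGS = """\
2018-03-01T10:00:00,alice,login,ok
2018-03-01T10:00:00,alice,login,ok
garbage line without enough fields
2018-03-01T10:05:12,bob,login,failed
not-a-date,carol,upload,ok
"""


def clean_events(raw: str) -> list[dict]:
    """Drop malformed rows and exact duplicates; keep parseable, typed records."""
    seen, records = set(), []
    for row in csv.reader(io.StringIO(raw)):
        if len(row) != 4:
            continue                     # malformed: wrong field count
        ts, user, action, outcome = row
        try:
            when = datetime.fromisoformat(ts)
        except ValueError:
            continue                     # malformed: unparseable timestamp
        key = (ts, user, action, outcome)
        if key in seen:
            continue                     # exact duplicate event
        seen.add(key)
        records.append({"when": when, "user": user,
                        "action": action, "outcome": outcome})
    return records


events = clean_events(RAW_LOGS)
print(len(events))  # prints 2: only 2 usable records out of 5 raw lines
```

Note the yield: three of the five raw lines are unusable. A model trained on the raw stream would be learning from noise, which is why data hygiene has to precede any AI investment.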

Another significant challenge in cybersecurity is that we face active adversaries—people who are evolving all the time and using the best tools and techniques available to them. Therefore, we must assume that successful attackers have access to similar or even more refined data to power their toolsets and strategies. 

As a result, security teams will need to acquire specialized AI talent to build the necessary competencies. But with such talent scarce across the industry, most security teams will find it difficult to secure those resources. At companies where that talent does exist, it is likely to be allocated to revenue-generating areas of the business rather than to a cost center like security. It’s a hard budgetary argument to make: the CISO would have to show how adding those resources paves the way for smoother business operations (e.g., fewer disruptions, less potential for a data breach) and revenue growth. Not many organizations have the metrics to back that up.

Given these constraints, organizations will end up relying on third parties whose core competency is AI.


II: Organizations often rely on third parties (e.g., MSSPs) to handle entire areas of operation. In plenty of cases, the third party provides more competency than the organization could reasonably expect to build or manage on its own. Are there problems with relying on an AI vendor?

VD: It’s not so much that there are problems (if the vendor truly is an expert), but organizations need to critically evaluate a provider before deciding to rely on it as the basis for security defense strategies. As with any provider evaluation, it’s important to see a proof of concept before pulling the trigger on a purchasing decision. If you’re at this stage, ask to see what analytics the provider collects and make sure that what they have matches your needs. A vendor can supply tons of information, but if it’s not the right data, their capabilities may not match the complexities of your organization’s needs. Every organization is different. Every company has a distinct threat landscape. Vendor marketing materials may do a great job explaining how the provider is the best at what they do, but if the data are too broad or inapplicable, your organization won’t receive optimal benefits.

At least for now, the most effective AI will have to be tailored at best and completely customized at worst. AI relies on data to build and adapt its models, and there is significant variability across deployments, so one size does not fit all. You have to look at AI through specific use-case scenarios: some will do really well, but others aren’t there yet.

In evaluating providers, it’s back to basics. You have to ask yourself: What are the risks that the security team is mandated to mitigate? What investment is available to mitigate that risk? What are the use cases that represent that risk? Relying on the use cases, how do you translate those to cost-effective solutions across your data, applications, infrastructure, users, and network? Answers to those fundamental questions will help you identify and qualify potential vendors.

Then you have to see the product in action. Make sure you’re seeing a tailored demo, and ensure the provider can address difficult questions. If the salesperson claims, for instance, “We use a very complicated neural network,” ask why they are using that model and which problems it solves.

In addition, probe into how the provider handles:

  • Active adversaries
  • Lack of data
  • Mixed quality of data
  • Uniqueness of your organization’s environment

Answers to these questions will prove enlightening. If the salesperson can’t answer them, talk to an engineer or product manager. If that person also can’t answer clearly, or worse, complicates the situation, there are other AI providers out there doing great things.

To learn about the latest tools and techniques that are moving infosec forward, attend InfoSec World 2018 in Orlando, Florida, March 19th-21st.