Betterment, an online investment robo-advisor, is the first of its kind to surpass $5 billion in assets under management. Robo-advisors, for those unfamiliar, are automated, algorithm-based finance portfolio management services. 

Time warp

Betterment, an online investment robo-advisor, is the first of its kind to surpass $5 billion in assets under management. Robo-advisors, for those unfamiliar, are automated, algorithm-based finance portfolio management services. 

Conventional investment firms figured out that by tapping their big data stores, they could commoditize all of that processed data, remove human bias, and use it to penetrate new markets: investors who don’t want to trust a human (even one with 20 or 30 years of experience in data analysis); investors who feel automation and machine processing are more accurate and reliable; and investors who simply don’t want to deal with a person, whether because of some combination of the above or because so much of our lives now happens online, where trust in and usage of online services is at an all-time high (and shows no signs of abating).

Artificial intelligence (AI) is certainly getting its day in the sun, from driverless cars to virtual assistants to refrigerators that scan their own contents and suggest a shopping list or a feasible recipe. Powered by massive amounts of available, automatically aggregated data, services built on machine learning (and edging into AI) are no longer a Jetsons-like fantasy; more of our daily lives are moving in this direction, and consumers are embracing the new functionality and convenience. Even infosec practitioners. Yet in the security space, automation is viewed as something that needs to be tempered, a tool to be used sparingly, and only when a human can’t keep pace.

Time is fleeting, madness takes its toll

At Cloud Security World 2016, Ben Tomhave, Security Architect at New Context, presented a talk on how automation, AI, and machine learning (ML) are becoming routine, but warned that some security functions aren’t yet ready for automation primetime. Forensics, decisions about patch management (what to patch, when to patch), and approvals and authorizations, he said, are among the daily responsibilities that still require manual intervention. Tomhave offered practical advice, and listeners walked away with ideas on how to stay involved in the “digital industrial revolution.”

Today, the question remains: How far should we go with automation? Returning to robo-advisors for a moment: historical data about stocks and mutual funds exists, and machines are capable of predictive analysis. Machine data processing is certainly faster and more accurate than human processing, and the technology is always “learning” to adapt to environmental factors. What machines can’t account for, however, are some of the human factors that affect the stock market. Looking back at the 2008 crash, a machine could not have predicted that an overwhelming number of bankers would so crookedly manipulate lending. Yes, the data showed an inordinate number of subprime loans and borrowers with low credit scores and incomes. That may have been a warning sign, but not necessarily a determining factor. Only humans could have identified the opportunity to short the market and acted to bet against housing. Human intervention is what caused the market to implode, people to lose their jobs, and the world to spiral into a financial frenzy. Automation and ML could only have helped to a degree.

Going back to security and the parallels that can be drawn, consider automated phishing attacks that target manipulation of access controls. Humans are notoriously terrible at identifying well-crafted fake emails. Automated authorization can validate that the entity making a request is a real person with the correct job title, that the information in the email checks out against an approved database (which must be configured and updated manually), and that the request is reasonable given all of the above. With automation and machine learning, a system could learn to look for anomalies or factor in the device from which the request originates (e.g., a request sent from a mobile device is likely to be terser than one made from a laptop), but context-based authentication still needs improvement. Small nuances in text are becoming automatically detectable, but we’re not quite there yet. Then again, we’re not quite “there” when it comes to humans catching phishing either, so it’s a tricky balance.
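The layered checks above can be sketched as a small rule-based validator. This is a minimal illustration only: the `APPROVED_REQUESTERS` directory and the message-length device heuristic are invented for the example, and a real system would query an HR or IAM backend rather than a hard-coded dictionary.

```python
from dataclasses import dataclass

# Hypothetical "approved database" of known requesters and their titles;
# in practice this would be an HR directory or IAM system, and (as noted
# above) it must be configured and updated manually.
APPROVED_REQUESTERS = {
    "jdoe@example.com": "Accounts Payable Manager",
}

@dataclass
class AccessRequest:
    sender: str
    claimed_title: str
    body: str
    device: str  # e.g. "mobile" or "laptop"

def validate_request(req: AccessRequest) -> bool:
    """Apply the automated checks described above: is the sender a real,
    approved person with the correct job title, and is the request
    plausible given its device context?"""
    expected_title = APPROVED_REQUESTERS.get(req.sender)
    if expected_title is None or expected_title != req.claimed_title:
        return False
    # Crude device-context heuristic: requests from mobile devices tend
    # to be terse, so an unusually long "mobile" message is anomalous.
    if req.device == "mobile" and len(req.body) > 500:
        return False
    return True
```

Even this toy version shows the limits the text describes: the checks catch an unknown sender or a mismatched title, but a well-crafted email from a compromised, approved account would sail through.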

Despite this “not quite there” state, every time one looks up from the keyboard, it seems a vendor is introducing a new, automated, better service (robo-services, if you will) that will reduce the time and effort of identifying and blocking threats. The large amounts of data these services can process in nanoseconds are a business benefit security teams can’t afford to ignore!

And the void would be calling

Security practitioners are understandably skeptical of these vendor claims and hesitant to adopt services until they’re proven. It may have been slow going at first, but access control, intrusion prevention and detection, threat data aggregation (“threat intelligence”), alert monitoring, and more are now regularly automated and accepted without hesitation, and most practitioners would absolutely agree that automation in these areas increases operational efficiency and accuracy. Automation and machine learning are helping security teams focus their efforts rather than take a scattershot approach, which is all too easy given the number of systems that need monitoring and the plethora of issues that require remediation. In theory, automated services won’t miss what a human is apt to, especially when manually processing data.

As mentioned above, one thing robo-services can’t do (today), however, is make a judgment call. Is that day coming? Perhaps. When it comes to threat intelligence, the “intelligence” isn’t necessarily in the data that’s been gathered from far and wide, aggregated, sorted, cross-referenced, and turned into pretty visualizations. That’s just data. A human is needed to interpret the data and determine what qualifies as a risk or threat.
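To make the data-versus-intelligence distinction concrete, here is a toy aggregation step; the feed entries and the scoring heuristic are invented for illustration. The machine can score and rank indicators, but it deliberately stops short of declaring anything a threat; that determination stays with the analyst.

```python
# Hypothetical indicator feed entries, each a tuple of
# (indicator, number_of_sources_reporting_it, days_since_last_seen).
FEED = [
    ("203.0.113.7", 5, 1),
    ("198.51.100.9", 1, 90),
    ("192.0.2.44", 3, 10),
]

def aggregate_indicators(feed):
    """Sort and score indicators -- the 'that's just data' step.
    The score is a simple made-up heuristic (more sources and more
    recent sightings rank higher). Whether any entry is an actual
    threat to *this* organization is left to a human analyst."""
    scored = [(indicator, sources / (1 + days_ago))
              for indicator, sources, days_ago in feed]
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

Running `aggregate_indicators(FEED)` ranks the widely reported, recently seen address first, but a high rank is still only a prompt for human review, not a verdict.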

It’s just a jump to the left, and a step to the right

The fact remains, though, that automation and algorithm-based services and products are on the rise. We’ve seen before how consumerization drives enterprise demand, so security practitioners need to start thinking about how automation, machine learning, and AI will impact their organizations positively, not just negatively. Some of it will be a boon to information security. Some of it will introduce new and different risks and change the way the industry manages day-to-day processes. Says Tomhave, separately from his talk, “To a degree, there’s trepidation that automated enforcement will cost business, and companies would rather eat some losses than potentially lose one customer.”

Smart security and operations teams will learn to integrate automation methodically and accept it as a business enabler. With so much talk about how the industry doesn’t have enough bodies to perform all the jobs, perhaps a better approach than dwelling on the deficit and the challenge of filling roles with skilled, talented people is to look for ways to enhance security with new capabilities, ones that improve both security operations and the security practitioner’s job satisfaction.