Zero trust networking was introduced by Forrester Research back in 2009. The premise behind zero trust was (and continues to be) fairly self-descriptive: Computer networks, in and of themselves, are inherently untrustworthy. It seems like an obvious concept in today’s networking environment, but at the time, the tools and techniques used to secure networks were built and implemented based on a “castle and moat” analogy.

In other words, organizations should build strong fortifications around their internally controlled networks to keep the bad guys out. The walls were high and the moats wide. Anything permitted into the network was therefore considered trustworthy because it had passed some sort of validation or authorization test, typically some combination of firewalls, network access controls, intrusion prevention, and antivirus. Because traffic had passed this security "checkpoint," internal communication was allowed to travel rather freely, uninspected.

Over time, as networks evolved, organizations started seeing the failures in this method of network security. An IP address might be trusted and allowed onto the network, but there was no reliable way to prevent malicious software from piggybacking on that communication. And what about software or applications built and deployed inside the network, or worse, in a container or cloud hosted off premises but accessible by networked employees and customers? Security practitioners needed something better, but available tools weren't doing the trick. New techniques like microsegmentation emerged, but building out these projects is time-consuming and expensive, not to mention difficult to manage without a robust operations team at the ready.

For a few years, companies have been wrestling with this conundrum and testing out new methods for protecting software and applications. While zero trust networking isn't a new concept, it seems to have caught on in a big way only in the last year or two. As cloud migration and adoption have become ubiquitous and fewer companies manage the lion's share of their own infrastructure, organizations have started seeing zero trust as a better path forward. Security vendors have burst onto the scene with technologies built on zero trust, and though it's a little buzzworthy at the moment (or part of the "hype cycle," as Gartner would say), zero trust has legs.


It's a mindset change

Jason Schmitt, CEO of Aporeto, says the newfound attention to zero trust stems from the fact that a large share of companies' workloads are now "running in someone else's environment that [they] can't control." While many cloud providers have put significant work and dollars into securing the spaces that their customers buy or rent, Schmitt says that companies "can trust the cloud somewhat, but what happens when an organization moves data or applications into it? There are fewer and fewer people maintaining that environment, and so it's inherently untrusted."

However, regardless of where the data and applications reside, they need to be secured and the data owner needs assurance that the confidentiality, integrity, and availability of those data and apps isn’t compromised when running off premises. With the loss of control that accompanies cloud computing, security organizations need to find ways to manage network security outside of their own networks. They need the ability to scale security policies, says Schmitt, and manage permutations of that policy for it to remain effective. All of this, he adds, must be independent of the infrastructure in which the data and apps reside.

"It's a mindset change," Schmitt says, "where whatever code I am deploying, I need to think about the entire process—how my data and software are secure, whether they're on-premises, in the cloud, or traveling between the two. Policy control, strong authentication and authorization—they all factor into my decisions as a security professional now."

Zero trust networking is less dependent on the specific network and infrastructure in use. It has to be. Operationalizing zero trust networking means moving away from the idea of more network configurations and instead focusing on wrapping protections around the communicating software. It means concentrating on the cryptographic identity of what’s communicating, how it’s communicating, and the behavior of its communications.
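To make the idea concrete, here is a minimal sketch of identity-based authorization. The SPIFFE-style identity strings and the allowlist are illustrative assumptions, not Aporeto's actual product; in a real deployment the caller's identity would be derived from a verified mTLS certificate rather than supplied as a plain string. The point is that the decision keys on cryptographic identity, not on IP addresses or network location, and anything not explicitly allowed is denied.

```python
# Minimal sketch: authorize communication by workload identity, not by
# network location. Identities and rules below are hypothetical examples.
from typing import NamedTuple

class Rule(NamedTuple):
    source: str       # identity of the calling workload
    destination: str  # identity of the workload being called
    operation: str    # what the caller is permitted to do

# Default deny: any (source, destination, operation) not listed is rejected.
POLICY = {
    Rule("spiffe://prod/frontend", "spiffe://prod/orders", "read"),
    Rule("spiffe://prod/orders", "spiffe://prod/payments", "charge"),
}

def authorize(source: str, destination: str, operation: str) -> bool:
    """Allow only explicitly permitted identity pairs; the caller's IP
    address and network segment play no part in the decision."""
    return Rule(source, destination, operation) in POLICY
```

Because the policy is pure data keyed on identity, the same rules can be evaluated anywhere the workloads run, on premises or in any cloud, which is what makes the approach independent of the underlying infrastructure.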

Breaking barriers

Not surprisingly, Aporeto, along with a few other emerging players, is focusing its attention on developing products with zero trust as the backbone. These companies are offering distributed policy systems that keep communicating software more secure. It is a new twist on the old advice to "place your protections around your data, not your network," and it's extremely applicable in this day and age of cloud, containerization, and app-everything. Schmitt says developers are top-of-mind when it comes to usability. "We don't want to require developers to change how they're working. We don't want them to change their apps to implement security, but we also know that security needs to be integrated early in the process."

In effect, Aporeto and others like them are taking the onus off the development process and placing security responsibilities where they belong: with the security team. That said, applications are tricky and development and deployment cycles are often quicker than the person-to-person communication between developers/product managers and security/IT teams. This is why it’s important to have a zero trust mindset—because organizations can’t be certain that every app deployed or every piece of data moved into a cloud is perfectly secure just because it passed the barrier of the firewall, IDS, or endpoint detection.
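That mindset can be sketched as a per-request check that sits in front of application code. Everything here is a hypothetical illustration, assuming a stand-in `verify_identity` helper; a real implementation would cryptographically validate an mTLS certificate or signed token. The key behavior is that a request is rejected unless it carries a verified identity, no matter where on the network it came from: passing the firewall grants nothing.

```python
# Hedged sketch of zero-trust enforcement: every request must present a
# verified identity, even from "inside" the network.
import functools
from typing import Optional

TRUSTED_IDENTITIES = {"spiffe://prod/frontend"}  # illustrative allowlist

def verify_identity(credential: Optional[str]) -> Optional[str]:
    """Stand-in for cryptographic verification of the caller's identity."""
    return credential if credential in TRUSTED_IDENTITIES else None

def zero_trust(handler):
    """Wrap a handler so it rejects any request lacking a verified
    identity; network origin is never consulted."""
    @functools.wraps(handler)
    def wrapper(request: dict) -> dict:
        identity = verify_identity(request.get("credential"))
        if identity is None:
            return {"status": 403, "body": "verified identity required"}
        return handler(request)
    return wrapper

@zero_trust
def get_orders(request: dict) -> dict:
    return {"status": 200, "body": "orders for " + request["credential"]}
```

Wrapping enforcement around the handler, rather than inside it, mirrors the vendor pitch above: developers don't change their application code, and the security team owns the policy.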
