The surge of ransomware, malware, and other cybercrime has reached crisis proportions. This summer alone has seen:
- Ransomware attacks on Colonial Pipeline and on JBS SA, the world’s largest meat producer.
- A joint warning by U.S. and U.K. intelligence agencies of a global campaign of brute-force attacks by the Fancy Bear unit of the Russian military.
- The adoption by the REvil gang of sophisticated zero-day exploits once reserved for nation-states.
To CIOs, it might seem like cybercriminals can strike at will—and succeed all too easily.
In response to the rising threat, cybersecurity professionals have embraced a new consensus around Zero Trust—an approach to defense predicated on the understanding that a cyberthreat can originate anywhere outside or inside the traditional network perimeter, from malicious insiders to criminal gangs and nation-states.
Therefore, no user, device, or traffic flow should be trusted implicitly; all should be subject to regular security checks and scrutiny. Indeed, the executive order on improving the nation’s cybersecurity issued by the White House in May gives prominent mention to this strategy.
By redesigning cyberdefense along the principles of least-privilege access, network micro-segmentation, rapid incident detection and response, and comprehensive security integration, organizations can prevent most attacks and minimize the impact of those that do slip through.
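The least-privilege principle above boils down to a default-deny decision at every request. As a minimal sketch (the `Request` fields, policy tuples, and names here are hypothetical illustrations, not any vendor's API), a Zero Trust access check might look like:

```python
# Toy default-deny access check in the spirit of Zero Trust least privilege.
# All names and policies here are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    user: str
    device_trusted: bool   # e.g. an endpoint posture check passed
    resource: str
    segment: str           # network micro-segment the request originates from

# Each policy grants one user access to one resource from one segment.
POLICIES = {
    ("alice", "payroll-db", "finance"),
    ("bob", "build-server", "engineering"),
}

def is_allowed(req: Request) -> bool:
    """Default deny: every request is re-checked; nothing is trusted implicitly."""
    if not req.device_trusted:   # device posture is verified on every request
        return False
    return (req.user, req.resource, req.segment) in POLICIES
```

Note that the same user is denied when arriving from the wrong segment or from an unverified device: trust is never carried over from a previous check.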
So: problem solved? Not exactly. While Zero Trust undoubtedly represents an important advance and merits broad adoption, it isn’t magic—and it’s not foolproof. In fact, in most definitions of the model, there’s an inherent blind spot: the assumption of complete visibility into network traffic to ensure that it doesn’t pose a risk.
As it happens, the vast majority of traffic across the Internet is encrypted with SSL or TLS—rendering it invisible to legacy security devices and impervious to a Zero Trust strategy.
What Zero Trust Might Miss
As a foundation of online communication, encryption has been a boon for data protection and privacy, but its implications for security have been more problematic.
On one hand, encryption can be highly effective for preventing spoofing, man-in-the-middle attacks, and other common exploits. On the other hand, you can’t monitor, filter, or analyze what you can’t see—so any ransomware or malware hiding within encrypted Internet traffic will go undetected by your security stack.
Nor does the problem end at the perimeter: nearly half of malware now uses TLS to establish connections and communicate with command-and-control servers once it has entered the environment, making it all but impossible for the victim to track or stop an attack in progress.
Of course, security vendors and many CIOs are well aware of the challenges posed by SSL and TLS encryption for cybersecurity. In response, SSL and TLS decryption has become a common feature of many security devices. Like a TSA agent rooting through a carry-on bag, these security devices intercept and decrypt incoming or outgoing traffic, inspect it, and then re-encrypt it before sending it on its way.
Unfortunately, the process tends to move about as quickly as that security line at the airport, especially when the devices in question weren’t designed to handle encryption as a primary function and lack the essential hardware required. The process also needs to be repeated over and over for each element in the security stack, adding lag with every added hop.
This can significantly degrade the performance of security devices while increasing network latency, bottlenecks, cost, and complexity. And this impact is multiplied as each successive security component—DLP, antivirus, firewall, IPS, and IDS—decrypts and re-encrypts traffic in turn for its own purposes.
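A back-of-the-envelope model makes the compounding effect concrete. The per-device costs below are illustrative assumptions, not benchmarks; the point is only that the crypto overhead is paid once per device in the round-robin design, but once in total when decryption is centralized:

```python
# Rough latency model: per-hop decrypt/re-encrypt vs. decrypt-once.
# Both millisecond figures are assumed for illustration, not measured.
CRYPTO_OVERHEAD_MS = 5.0   # decrypt + re-encrypt cost per hop (assumed)
INSPECT_MS = 1.0           # inspection cost per device (assumed)
STACK = ["DLP", "antivirus", "firewall", "IPS", "IDS"]

def round_robin_latency(devices):
    # Every device pays the crypto cost plus its own inspection cost.
    return sum(CRYPTO_OVERHEAD_MS + INSPECT_MS for _ in devices)

def centralized_latency(devices):
    # Decrypt once, run every inspection, re-encrypt once.
    return CRYPTO_OVERHEAD_MS + sum(INSPECT_MS for _ in devices)

print(round_robin_latency(STACK))   # 30.0 ms across the five-device stack
print(centralized_latency(STACK))   # 10.0 ms with a single crypto step
```

Under these assumed numbers, adding a sixth device to the stack costs 6 ms in the round-robin design but only 1 ms in the centralized one.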
Beyond compromising service quality for business users and customers—a bad enough problem in its own right—distributed, round-robin decryption, inspection, and re-encryption across the security stack undermines the simplicity at the core of the Zero Trust model.
By deploying private encryption keys in multiple locations across the multi-vendor, multi-device security infrastructure, organizations inevitably expand the attack surface, increasing risk at the same time they’re trying to reduce it.
Bringing Complete Visibility to Zero Trust
The premise of Zero Trust is sound. So is the value of SSL and TLS encryption. What’s needed is a way for them to co-exist within the same architecture while meeting the performance requirements of modern business.
The solution to the Zero Trust blind spot is simple—in fact, it hinges on simplicity. Rather than performing decryption, inspection, and re-encryption on a device-by-device or per-hop basis, organizations should centralize this function and use a single, dedicated, high-performance SSL and TLS inspection element to support their entire security stack at once.
In this way, traffic can be decrypted once, inspected by any number of separate security devices in tandem, and then re-encrypted before continuing on its way into or out of the environment. While there may still be a small impact on performance, it is only a fraction of the compounding effect of the round-robin approach—and a small price to pay for the comprehensive visibility needed for effective Zero Trust.
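The decrypt-once, inspect-many flow can be sketched as a single function that fans the same decrypted buffer out to every security check before one re-encryption step. The inspector functions below (`dlp_scan`, `malware_scan`) are invented stand-ins for real security devices; a production deployment would terminate the TLS session upstream of this logic rather than operate on raw bytes:

```python
# Sketch of "decrypt once, inspect many": one plaintext buffer is shared by
# every security function. Inspector names are hypothetical examples.
from typing import Callable, Iterable

Inspector = Callable[[bytes], bool]   # returns True if the traffic is clean

def dlp_scan(payload: bytes) -> bool:
    return b"SSN:" not in payload     # toy data-loss-prevention check

def malware_scan(payload: bytes) -> bool:
    return b"EVIL" not in payload     # toy signature check

def inspect_once(plaintext: bytes, inspectors: Iterable[Inspector]) -> bool:
    """Run every inspector against the same decrypted buffer.

    In a real deployment the TLS session is terminated (decrypted) exactly
    once upstream of this call and re-encrypted exactly once downstream,
    rather than per security device.
    """
    return all(check(plaintext) for check in inspectors)

stack = [dlp_scan, malware_scan]
print(inspect_once(b"GET /index.html", stack))   # True: forward and re-encrypt
print(inspect_once(b"EVIL payload", stack))      # False: block before re-encryption
```

Because every device reads the same buffer, adding another inspector to `stack` adds only its own scan time, with no additional decrypt/re-encrypt round trip.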
The current cybercrime wave calls for an urgent response by CIOs and CISOs. Zero Trust offers a way to strengthen security across today’s highly distributed environments, porous enterprise networks, and work-from-anywhere workforce—but only if its model can be implemented fully without sacrificing service quality.
Too often, the need for comprehensive traffic inspection forces organizations into an impossible tradeoff between security and performance. But by taking a centralized approach to SSL and TLS inspection, CIOs and CISOs can achieve the protection their business demands—as well as the performance their customers expect.
About the Author:
Babur Khan, Technical Marketing Engineer, A10 Networks