Despite the rising tide of cyber-attacks and malware, organizations are not proactively testing their infrastructure to find and address vulnerabilities, according to a security expert.
National Institute of Standards and Technology (NIST) guidelines call for government agencies to adopt some form of continuous network monitoring to defend their environments against security threats. The recommended strategies include both periodic security audits to assess risk and proactive penetration testing, Mike Yaffe, government security strategist at Core Security, told eWEEK.
Despite the term “continuous,” Yaffe said, organizations do not have to go out and deploy a real-time system to track everything happening in the network. How often to test depends largely on the organization’s risk tolerance and industry, according to Yaffe. Highly regulated environments such as financial services may need real-time monitoring to detect fraudulent transactions, but other organizations, such as a school, may be able to get away with a thorough test once a quarter, Yaffe said.
“The goal is to do more, more frequently,” Yaffe said.
Customers hear “continuous monitoring” and think they have to invest in yet another security platform, but Yaffe said that just adds more data for the enterprise to analyze. New logging tools simply mean employees spend more time sifting through information, and the approach remains reactive. Rather than spending all their time watching and waiting for attacks to happen, organizations need to actively test whether there are any paths in the Website or in the network that attackers can use to get in, according to Yaffe.
“They don’t want to run aggressive tests because they don’t want to know they aren’t perfect,” Yaffe said.
Organizations should run regular penetration tests to check for flaws in the company’s Website and to gauge what would happen if an employee clicked on a malicious link. Escalation of privilege is a big problem, and organizations need to know what an attacker could do once inside, Yaffe said.
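A minimal illustration of this kind of active testing is checking which network paths actually answer, rather than waiting to see them abused. The sketch below, assuming plain TCP connect probing (it is not Core Security’s product, and the host and port list would be the tester’s own), enumerates which of a set of ports accept a connection:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    Instead of waiting for an attack, enumerate which network paths
    are actually reachable right now.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection,
            # an errno (e.g. ECONNREFUSED) otherwise
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real penetration testing goes far beyond this, but even a scan like `scan_ports("10.0.0.5", range(1, 1025))` run on a schedule tells a team which doors are open before an attacker checks.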
There is no such thing as perfect security, and there’s no technology that’s a “silver bullet,” Yaffe said. If hackers are motivated to come in, they will, but organizations should be trying to make it as difficult as possible for the attackers to succeed.
Companies shouldn’t be looking for “security nirvana,” Yaffe said. “In the analogy where you are being chased by a bear, you just want to outrun the other guy,” Yaffe said.
The unwillingness to commit to regular testing is not confined to senior executives or IT departments, but is a “pervasive attitude,” Yaffe said. There is a serious reluctance to take a server offline or to shut down business services for a maintenance testing window, Yaffe said. There’s a sense that if they haven’t been hit yet and haven’t gone offline, then it’s not a problem.
The emphasis on uptime also leads to testing reluctance. Uptime is a key focus for organizations, and availability is important for both customer-facing services and internal applications. Even so, organizations need to be “tolerant” of having some inconvenience since it will result in a “significant boost” in security, Yaffe said.
Instead of waiting around to be hacked, which is just a form of “unsafe, uncontrolled and inefficient” penetration testing, organizations need to have “the will” to find out where their weaknesses are. The best CSO (chief security officer) is the person who wants to know where the holes are, the one who tells the security team that their job is to tell him what is going on instead of saying everything is under control, according to Yaffe.
There is a lot of legislative momentum in favor of continuous monitoring, where companies are required to maintain “ongoing situational awareness” of their network, but there are no guidelines as to how organizations actually achieve that level of awareness, according to Yaffe.
For example, PCI (Payment Card Industry) security regulations require organizations to regularly test their infrastructure for security vulnerabilities. The risk framework also specifies that IT departments must find vulnerabilities, assess the impact of each threat and estimate the likelihood of an exploit occurring. “How would you know that if you don’t test regularly?” Yaffe said.
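The assess-and-estimate steps can be sketched as a simple qualitative risk score, assuming the common formula risk = likelihood × impact on 1–5 scales (the function names, scales and sample findings here are illustrative, not anything mandated by PCI):

```python
def risk_score(likelihood, impact):
    """Qualitative risk = likelihood x impact, both on a 1-5 scale."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def triage(findings):
    """Order vulnerability findings so the riskiest get fixed first."""
    return sorted(
        findings,
        key=lambda f: risk_score(f["likelihood"], f["impact"]),
        reverse=True,
    )
```

A score like this is only as good as its inputs, which is Yaffe’s point: without regular testing, the likelihood and impact numbers are guesses.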