Stupid Computer Security Myths, 'Dumb Ideas' Keep Enterprises at Risk

By Fahmida Y. Rashid  |  Posted 2011-11-13
Software developers and enterprise IT departments are making computer security blunders every day because of myths, misconceptions and just plain "dumb ideas."

Despite a growing awareness of security threats and basic security measures necessary to secure networks and data, many misconceptions and myths keep computer users and enterprises at risk, a security consultant said at a Kaspersky Lab event in New York City.

While organizations are deploying firewalls, adopting public key cryptography and complying with various security and privacy regulations, many of them still cling to certain misperceptions, "falsehoods" and approaches that don't work, Charles Pfleeger, a security consultant and principal of the Pfleeger Consulting Group, said in a keynote speech on Nov. 10 at American Cup 2012, an educational event Kaspersky Lab held jointly with NYU-Polytechnic University in New York City. Some of the "dumb ideas" are myths held by nonsecurity professionals, and others are attitudes still circulating within the security community, Pfleeger said.

In 2005, Marcus Ranum, chief of security for Tenable Security, published a list of the six "dumbest ideas in computer security," including the notions that hacking is cool and that merely patching flaws found in software products will make them more secure. While many of Ranum's points remain valid, Pfleeger developed his own list of security mistakes that are made every day in organizations around the world. These mistakes are generally the result of ignorance and of the limited time available to address issues, he said.

"There are a lot of dumb ideas," Pfleeger said.

The first was the idea that organizations can retrofit security, Pfleeger said. Often, when an organization is acquiring another company or designing and developing a product, if the security question comes up at all, the most common attitude is, "We will do security later," he said.

As systems grow increasingly complex, it becomes harder to retrofit software with effective security features, and even where it is possible, the result is never as effective as security designed into the product from the beginning, according to Pfleeger. He also said "penetrate and patch" doesn't work, echoing Ranum's earlier list, because it means organizations are merely plugging existing holes rather than addressing the underlying security issues.

He used a construction analogy: if a contractor is asked to build a house but directed to wire it for electricity only after the walls and roof are complete, the job is still possible, but it is less effective and requires finished walls and ceilings to be broken open and refinished.

Microsoft had to do just that a few years ago, when Bill Gates was able to say, "Stop everything. We are going to go back and put security in," Pfleeger said. Since then, Microsoft has implemented a methodology that integrates security into every step of development and builds security in by design. Gates was able to go back and start afresh, but not everyone can do so, Pfleeger said.
