Enterprise networks aren’t completely safe these days, but they are a lot safer than they used to be. I feel confident saying that if you’re willing to put money into the right products, and into the right policies and the people to enforce them, you can run an extremely secure network.
Enterprise networks aren’t public, though, and the users on them aren’t free. They work for the organization and have obligations to it, and the network administrators have (or at least should have) the authority to grant and deny access to various content, applications and network destinations.
The Internet and consumer networks are not like this. There’s nobody in charge, and the rules, such as they are, keep getting looser.
There aren’t exactly any laws against running malware on your computer. The malware itself may break the law if it attacks other computers, spams them, serves pornography to them or commits some similar violation. But the real enforcement, such as it is, comes from ISPs. The malicious actions taken by infected computers, or the deliberate actions of a malicious actor, are likely violations of the ISP’s terms of service. This is why McColo was taken down: its upstream provider, or ISP, was informed, in ways impossible to ignore, of its client’s actions, and so it cut off McColo.
Public discussion generally applauded the disconnection of McColo, especially since it resulted in a remarkably large drop in spam and other malicious activity. But when I hear of ISPs taking action to secure their networks against malicious applications, what I mostly hear is complaints.
One typical example is when ISPs block outbound port 25, except to their own mail servers, for which they require SMTP-AUTH, a username and password. ISPs do this to prevent spambots from sending spam, which the bots always do by connecting directly to port 25 on remote mail servers. Yes, in theory a spambot could capture your SMTP-AUTH credentials through monitoring or social engineering, but that isn’t how things typically work in the real world. In fact, bots are beginning to get around these restrictions, but by using junk webmail accounts rather than SMTP accounts.
Users who need to use an external SMTP server should use TCP port 587, the standard message submission port, for outbound mail. Message submission on 587 is designed to require authentication, so it is of little use to bots, and ISPs don’t typically block it. External mail providers don’t do a good enough job of explaining this.
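To make the distinction concrete, here is a minimal sketch of what submitting mail over port 587 looks like from the user side, written in Python with the standard smtplib module. The host name and credentials are placeholders of my own, not any particular provider’s real settings:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical account details -- substitute your provider's actual
# submission host and your own credentials.
SUBMISSION_HOST = "mail.example.com"
SUBMISSION_PORT = 587
USERNAME = "user@example.com"
PASSWORD = "app-specific-password"

msg = EmailMessage()
msg["From"] = USERNAME
msg["To"] = "recipient@example.org"
msg["Subject"] = "Test via the submission port"
msg.set_content("Sent over port 587 with SMTP-AUTH instead of port 25.")

# Connect to the submission port, upgrade to TLS, authenticate, then send.
with smtplib.SMTP(SUBMISSION_HOST, SUBMISSION_PORT) as smtp:
    smtp.starttls()                 # encrypt the session before sending credentials
    smtp.login(USERNAME, PASSWORD)  # SMTP-AUTH: the step a credential-less bot can't do
    smtp.send_message(msg)
```

The point is that the session is authenticated before any mail moves, which is precisely the property that makes the submission port uninteresting to a spambot that has no credentials.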
Nevertheless, I’ve seen more than one discussion of how this is obviously a conspiracy by [insert name of faceless national ISP] to force you to…well, something. SMTP restrictions like this are still not universal, even though they are clearly the right network management approach, because ISPs need to tread lightly to avoid angering customers.
In fact, anyone is allowed to run anything on the Internet. The McColo example is an extreme and atypical one. The Internet is full of systems running dangerous software, even dangerous legitimate software. A recent survey of random DNS servers on the Internet showed that a very high percentage of recursive servers do not protect against the “Kaminsky” cache poisoning bug revealed and patched this past summer. (The exact percentage is arguable, but depending on how it was measured and how you read the report it falls somewhere between 23.99% and 44.51%.) These servers are a real problem and they need to be addressed, but they won’t be, because nobody has the authority to do anything about them.
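If you’re curious whether a recursive resolver you rely on is among them, the usual check is to ask a cooperating test server to report back on the source ports your resolver uses when it queries out. Here is a rough sketch using the dnspython library and DNS-OARC’s porttest TXT record; the resolver address is a placeholder, and both the library version and the continued availability of the test service are assumptions on my part:

```python
# A minimal sketch, assuming dnspython (2.x) is installed and that
# DNS-OARC's porttest service is still answering queries.
import dns.resolver

RESOLVER_UNDER_TEST = "192.0.2.53"   # placeholder -- the recursive server you want to test

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [RESOLVER_UNDER_TEST]

# The TXT answer describes whether the source ports the test server saw
# from this resolver looked random or predictable.
answer = resolver.resolve("porttest.dns-oarc.net", "TXT")
for record in answer:
    print(record.to_text())
```

A resolver that always queries from the same source port is exactly the kind of server the survey is counting.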
Should anyone have that authority? Vulnerable DNS software ought to be reason for urgent action, but it’s not, per se, a sign of malicious activity in violation of anyone’s terms of service with an ISP. Yet these servers endanger the client systems they serve, probably leading to malicious activity through them. On a managed network it’s comparatively straightforward to address such problems; yes, the DNS upgrade may be non-trivial, perhaps requiring new hardware, but you know you can do it. Out on the lawless Internet there’s nothing you can do.
This is why every now and then someone proposes genuine vigilante activity, such as breaking into compromised systems in order to patch them. A few “good worms” have even been written to apply patches. They have all been disasters, turning into Frankenstein monsters that cause far more problems than they solve. Fixing the problem in such an unplanned manner doesn’t work, and it certainly tramples on the rights of the users it affects.
The freedom of the Internet is a Wild West freedom, one where law and order are weak. The strong and well-protected survive, and innocent parties are violated all the time. A network cannot be secure unless administrators have authority over both applications and content. Thank goodness this is not the case on the Internet, at least outside of China. But we do pay a price for it. Usually, societies choose more law and order as they mature. On the Internet, instead, we are all just becoming better armed.
Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.
For insights on security coverage around the Web, take a look at eWEEK.com Security Center Editor Larry Seltzer’s blog Cheap Hack.