The Massive Messaging Machine

By Larry Seltzer  |  Posted 2008-03-17

Opinion: Security burden upon burden piled onto the messaging pipeline has increased processing demands, but in a way that benefits from high-end hardware.

I used to cover the CPU business, writing analysis of industry developments. Since the early '90s, when most of the major RISC architectures began to gel, the main creative task of processor designers has been to inject more parallelization into design.

First there was superscalar execution, in which multiple execution units allow more than one instruction to execute at a time. Later, multiple processors, and now multiple cores, allowed logically separate threads of execution to run in parallel.
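The shift described here, from one instruction stream to many logically separate ones, is visible at the application level too. A minimal Python sketch (pure illustration, with a toy workload rather than any real application) of splitting one job across parallel threads of execution:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """One logical thread of execution: sum its own slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Split the data into per-worker chunks and sum them concurrently."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

A workload only benefits from this treatment when it can actually be cut into independent pieces, which is exactly the complaint about the "bloody single-threaded" big-market apps below.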

The idea that running more than one code stream at a time speeds up the overall program is a great theory, but it always seems to run into limitations in real-world applications. The designers, the people who think about time in terms of nanoseconds, always seemed to think that the software people weren't living up to their end of the bargain, since the big market apps were so bloody single-threaded.

Some think that virtualization is the answer to this problem, but there's nothing like an application that itself is massively parallel to help you get your money's worth out of expensive hardware.

Messaging might be it. These days, the number of tasks performed in the messaging pipeline is so large and the complexity of those tasks so great that a busy message stream in an enterprise can saturate the processors handling it. The old notion that the Internet pipe is going to be the gating factor anyway doesn't necessarily hold true.

I was talking to Sendmail, the people who invented the mail server, about its new Sentrion appliances. Boxes dedicated to message processing are not a new thing, but these boxes do so much it makes you think.

Think, for example, of all the things that must happen to a message, either inbound or outbound. There is the basic transport protocol, the SMTP commands. The message itself may be encrypted, so there is encryption and decryption. The message may be signed. The message needs to be scanned for malware, for phishing, for malicious HTML. The sender of the message may be authenticated through DKIM or Sender ID, and their reputation evaluated. Regulatory compliance rules must be enforced. Company policies about the message must be enforced. Think also that multiple messages might be processed at once, sort of like superscalar processing for messages.
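That stage-by-stage pipeline, fanned out over many messages at once, is exactly the kind of workload that soaks up cores. As a hedged sketch (the stage functions and message fields here are hypothetical stand-ins, not Sendmail's or anyone's actual API), each worker runs one message through every stage while the pool keeps many messages in flight:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pipeline stages; each takes a message dict and returns it
# annotated with the result of that check.
def decrypt(msg):
    msg["decrypted"] = True
    return msg

def verify_signature(msg):
    msg["signature_ok"] = True
    return msg

def scan_for_malware(msg):
    msg["clean"] = "malware" not in msg["body"]
    return msg

def check_sender(msg):
    # Toy stand-in for DKIM / Sender ID plus reputation lookup.
    msg["sender_ok"] = msg["sender"].endswith("@example.com")
    return msg

def apply_policy(msg):
    # Compliance and company policy gate on the earlier results.
    msg["policy_ok"] = msg.get("clean", False) and msg.get("sender_ok", False)
    return msg

STAGES = [decrypt, verify_signature, scan_for_malware, check_sender, apply_policy]

def process(msg):
    """Run one message through every pipeline stage in order."""
    for stage in STAGES:
        msg = stage(msg)
    return msg

def process_many(messages, workers=8):
    """Process a batch of messages in parallel: superscalar for mail."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, messages))
```

The point of the sketch is the shape, not the stages: each message is independent of the others, so the pipeline parallelizes across messages with no coordination beyond the pool itself.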

Do all these things in one process and you diminish the chance that something will slip through. Do it all in one process and you also need a parallel performance monster that can justify a large budget and a state-of-the-art architecture.

There are some other Internet security apps that can be massively parallel, but I don't think any of them compare to messaging. Messaging is so core and critical to businesses these days that it has to be done right and it has to be done with reasonable performance.

So bring on the CPU cores, throw memory and disk arrays at them and don't skimp. While you're at it, add some extra redundancy for reliability purposes. Your mail volume and the number of problems in that mail are only going to increase.

Security Center Editor Larry Seltzer has worked in and written about the computer industry since 1983.

For insights on security coverage around the Web, take a look at Security Center Editor Larry Seltzer's blog Cheap Hack.

Larry Seltzer has been writing software for, and English about, computers ever since (much to his own amazement) he graduated from the University of Pennsylvania in 1983.

He was one of the authors of NPL and NPL-R, fourth-generation languages for microcomputers by the now-defunct DeskTop Software Corporation. (Larry is sad to find absolutely no hits on any of these products on Google.) His work at DeskTop Software included programming the UCSD p-System, a virtual machine-based operating system with portable binaries that pre-dated Java by more than 10 years.

For several years, he wrote corporate software for Mathematica Policy Research (they're still in business!) and Chase Econometrics (not so lucky) before being forcibly thrown into the consulting market. He bummed around the Philadelphia consulting and contract-programming scenes for a year or two before taking a job at NSTL (National Software Testing Labs) developing product tests and managing contract testing for the computer industry, governments and publications.

In 1991 Larry moved to Massachusetts to become Technical Director of PC Week Labs (now eWeek Labs). He moved within Ziff Davis to New York in 1994 to run testing at Windows Sources. In 1995, he became Technical Director for Internet product testing at PC Magazine and stayed there till 1998.

Since then, he has been writing for numerous other publications, including Fortune Small Business, Windows 2000 Magazine (now Windows and .NET Magazine), ZDNet and Sam Whitmore's Media Survey.
