Can your organization's IT stack stand up to the burdens placed upon it? Do the components in your IT stack provide the best possible performance? Do you have a choice? It's time to find out.
There are all kinds of stacks out there, from network stacks to code stacks. But in recent years, the stacks that have been getting the most attention are those that are referred to—somewhat broadly—as IT stacks. Essentially, an IT stack consists of a server operating system, a Web server, a database, and a scripting or development language.
Of course, IT stacks deserve all the attention they get. After all, that grouping of applications is the foundation that most Web-based enterprise applications run on—from portals to enterprise content management systems to CRM (customer relationship management) and ERP (enterprise resource planning) platforms.
Further, as companies move more aggressively into SOAs (service-oriented architectures) and other service-based systems, their IT stacks will play huge roles in determining ongoing service strategies.
Probably the two best-known stacks are Microsoft's .Net and the open-source LAMP.
The .Net stack typically consists of a Windows Server operating system, the IIS (Internet Information Services) Web server, the SQL Server database and the ASP.NET (Active Server Pages .NET) scripting framework. The LAMP stack comprises the Linux server operating system, the Apache Web server, the MySQL database and one of the three “P” scripting languages (PHP, Python or Perl).
After these two stacks, the biggest—especially in enterprises—is the J2EE (Java 2 Platform, Enterprise Edition) stack. This stack is pretty flexible in terms of its components, but there is one constant: The scripting technology has to be JSP (JavaServer Pages).
Of course, these aren't the only three IT stacks out there. When you start mixing and matching applications, and introducing applications we haven't even talked about, the choices are almost endless.
However, typically an IT stack isn't chosen based on the quality of the applications therein but on issues such as history (“We've always been a Linux/Unix/Windows shop”), internal skill sets (“Our developers only know ASP/JSP/PHP”) or end products (“We really want to run Product X, which is .Net/Linux/Java”).
But what about the stacks themselves? How much does the choice of stack affect performance? Do stacks need to be pure in their configuration, or can a business get solid performance by mixing and matching among multiple stacks?
These are some of the questions eWeek Labs set out to answer a few months ago, when we began a series of tests to evaluate the makeup, performance and scalability of enterprise IT stacks.
We performed a series of load tests against eight mixes of IT stacks (admittedly, barely scratching the surface of potential stacks). These consisted of pure LAMP stacks, a pure .Net stack, J2EE on both Windows and Linux, and what we will refer to as a WAMP stack—basically, open-source components running on a Windows server.
Our tests show that all of the stacks perform well enough to handle most enterprise needs. Some did better than others, but no single stack led in every category.
But there were some results that may prove surprising. Mix-and-match stacks tended to do fairly well in our tests—especially the stacks that took a nonstandard route when it came to the database.
Probably most surprising was the solid performance that came from the stacks that contained a mix of a Windows server and open-source components. Traditionally, these kinds of WAMP setups have been considered suitable only for development and testing purposes, not for production systems. But, based on the performance we saw in our tests, businesses should seriously consider the combo for their enterprise applications.
That's not to say that pure-play Microsoft isn't a good bet: Microsoft's .Net stack performed very well in our tests, clearly showing the benefits of the tight integration among the stack's components.
We hope our tests provide some perspective, but, more than anything else, we hope they inspire IT managers to perform the same kinds of tests themselves. No tests done in a third-party lab can tell you how a specific combination of servers and applications will run under your business-specific requirements and systems.
Our tests were labor- and time-intensive, but there was nothing too unusual about the equipment involved. Probably the biggest expense would be in finding a performance testing application to use, although there are free, open-source testing tools out there that are capable of doing the job.
Testing the IT stacks
eWeek Labs had several goals in mind when we started our tests of IT stacks.
First, we weren't interested in doing an unrealistic stress test designed to see which IT stack broke first. What we wanted was to run each stack under a heavy but realistic and consistent level of traffic to get practical results that could be applied to most organizations' computing environments.
To test a Web-facing platform like an IT stack, we needed a subject application to test with. We wanted to avoid the clean-room-like environment in which these kinds of tests are often run, so instead of building a test application and then porting it to the languages used in the evaluation, we decided to use real-world applications.
Specifically, we chose to use portal applications because they exist in pretty much every scripting language and we could create almost the exact same test script in each one.
We used portals we consider popular—Microsoft SharePoint Portal Server 2003 (built on ASP.NET), XOOPS (PHP), Plone (Python), and Liferay and JBoss Portal (JSP).
On the server side, our test systems were AMD Opteron-based servers with SATA (Serial ATA) RAID drives and 2GB of RAM. A separate system was configured for each database.
Virtual load test clients were generated by an AMD Athlon 64-based workstation running Windows XP. Everything ran on a Gigabit Ethernet network in eWeek Labs. Each test was run multiple times to avoid test discrepancies and outlier results.
We considered several different tools for performing the load tests, including the open-source OpenSTA. We looked hard at OpenSTA, as it would have made it much easier to share our test scripts and methodologies. However, while OpenSTA had all the requisite capabilities, its configuration and reporting limitations would have added to our testing time.
We eventually decided to use Borland's SilkPerformer (formerly from Segue Software) to handle the actual test and reporting management. During the course of each approximately hourlong test, SilkPerformer ran 1,000 virtual clients against the stack applications.
To test the IT stacks, we recorded a script doing basic tasks that could be repeated in every one of the portals. The tasks included loading an identical page from each portal, loading a members page and general portal surfing. We opted to use open, rather than user-authenticated, pages because we did not want the process to turn into a test of authentication systems.
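For IT managers who want to script a similar test without buying a commercial tool, the sketch below shows the virtual-client idea in miniature, written in Java. To be clear, this is not our SilkPerformer script: the portal URLs, client count and run length are hypothetical placeholders, and a production-grade harness would add ramp-up periods, think times and real reporting.

// A scaled-down illustration of the virtual-client approach. The URLs,
// client count and run length here are hypothetical placeholders.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.atomic.AtomicLong;

public class PortalLoadSketch {
    // Open (non-authenticated) portal pages to cycle through, as in our script.
    static final String[] PAGES = {
        "http://portal.example.com/index.php",
        "http://portal.example.com/members.php"
    };
    static final AtomicLong hits = new AtomicLong();
    static final AtomicLong totalMillis = new AtomicLong();

    public static void main(String[] args) throws Exception {
        int clients = 50; // we ran 1,000 virtual clients; start small
        long endTime = System.currentTimeMillis() + 60_000L; // one-minute run
        Thread[] pool = new Thread[clients];
        for (int i = 0; i < clients; i++) {
            pool[i] = new Thread(() -> {
                while (System.currentTimeMillis() < endTime) {
                    for (String page : PAGES) fetch(page);
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) t.join();
        System.out.printf("hits=%d, average page time=%.2f ms%n",
                hits.get(), (double) totalMillis.get() / hits.get());
    }

    static void fetch(String page) {
        long start = System.currentTimeMillis();
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(page).openConnection();
            try (InputStream in = conn.getInputStream()) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* drain the response body */ }
            }
        } catch (Exception e) {
            // Failed or "server busy" attempts still count toward the
            // averages, just as they did in our published numbers.
        } finally {
            totalMillis.addAndGet(System.currentTimeMillis() - start);
            hits.incrementAndGet();
        }
    }
}

Scaled up and pointed at your own portal's open pages, even a simple harness like this will begin to show how a stack behaves under sustained load.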
Among the many results generated from the tests, the ones we have chosen to publish focus on performance averages. These include average transactions per second, average throughput per second, average hits per second, average page download time and average document download time.
We should point out that the last two test averages include server busy time, as well as page download and document download attempts made when the servers were more heavily loaded. The averages are a good barometer of how a stack setup will handle a long and heavy load, but they don't represent the actual time it will take to download a page or document from these servers.
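To make that distinction concrete, here is a toy Java example (not our actual reporting code, and the timings are invented) showing how a few busy-period attempts can dominate an average:

// A toy illustration (not our reporting code; the timings are invented)
// of how busy-period attempts pull up an average page download time.
import java.util.List;

public class AverageSketch {
    public static void main(String[] args) {
        // Per-attempt page times in seconds; the two long ones represent
        // attempts made while the server was saturated.
        List<Double> pageSeconds = List.of(1.2, 0.9, 1.4, 310.0, 0.8, 295.5);
        double avg = pageSeconds.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0);
        // Prints roughly 101.63 -- a "minutes-long" average even though
        // most individual pages loaded in about a second.
        System.out.printf("average page download time: %.2f seconds%n", avg);
    }
}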
Like any large-scale benchmarking test, these IT stack tests are not without their weaknesses.
One could argue that we were testing the subject apps—the portals—and not the stacks themselves. Of course, one could also argue that tests using a custom-built subject application would be less a test of the stacks than a test of the porting skills of the programmers.
In addition, many of the platforms that we tested aren't designed to serve users directly. For example, it is recommended that Plone be run behind clusters or Apache proxies in production environments.
In reality, all these products would run in heavily optimized environments in an enterprise. But the point was to test the stacks, not their ideal performance points, which is also why we didn't tune or optimize any of the systems but ran them as close to default as possible.
The criticism we expect to hear most is of the stacks we left out—including commercial J2EE platforms, such as those available from BEA Systems, IBM, Oracle and Sun Microsystems, as well as the many other database and server platform permutations.
Hopefully, over time we—and readers who perform these tests and share their results—can address this last potential criticism. We plan to update our tests at blog.eweek.com/blogs/eweek_labs, and we invite you to also share your results there.
LAMP
The test we did that was closest to a pure LAMP stack ran on SUSE Enterprise Linux, Apache, MySQL and the XOOPS portal and content management system. We chose XOOPS because of its general popularity and high ranking among PHP-based portals on sourceforge.net.
In nearly every test we ran, this PHP-based LAMP configuration was a solid, middle-of-the-road performer.
For example, we saw average throughput of 120.59KB per second and average hits per second of 24.15. Given that this was a pure-vanilla implementation with no tuning, these numbers are actually more impressive than they seem at first. Even the most ardent PHP fans will admit that PHP is not designed with performance in mind and will usually recommend clustering or performance add-ons such as those available from PHP vendor Zend Technologies.
This stack's performance numbers suggest what many who have been using PHP for some time now (including some of the busiest blogs on the Web) know to be true—that a pure LAMP-based PHP system can easily handle enterprise-class traffic and loads.
Linux J2EE
We ran the Liferay portal system on the Linux J2EE stack because of its popularity as a Java-based portal system and because of its somewhat unusual base configuration.
Liferay uses the Hypersonic SQL database engine (now known as HSQLDB), a Java-built database specifically designed to be very fast in Web environments. We ran Liferay on an Apache and Tomcat server infrastructure running on CentOS Linux.
Perhaps somewhat surprisingly, this configuration was among the best performers in our tests, with an excellent average throughput of 1.56Mbps and the best average hits per second, 234.81.
To a large degree, we credit this outstanding performance to the lightweight Hypersonic SQL database. For years, small, focused databases have done very well in standard transactional tests similar to our performance tests. An interesting follow-up to these tests would be to run more intensive database tasks against this system.
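For the curious, here is a minimal sketch of what an embedded, in-process database like Hypersonic SQL looks like from Java code. It assumes the HSQLDB driver jar is on the classpath, and the table and data are hypothetical examples rather than Liferay's actual schema; the point is simply that queries never leave the application's process.

// A minimal sketch of using an embedded, in-process database from Java.
// Assumes the HSQLDB driver jar is on the classpath; the "mem:" URL,
// table and data are hypothetical examples, not Liferay's actual schema.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HsqldbSketch {
    public static void main(String[] args) throws Exception {
        // "mem:" keeps the whole catalog in RAM -- no network hop, no disk.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hsqldb:mem:portal", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE pages (id INT PRIMARY KEY, title VARCHAR(64))");
            st.execute("INSERT INTO pages VALUES (1, 'Home')");
            try (ResultSet rs = st.executeQuery(
                    "SELECT title FROM pages WHERE id = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString("title"));
                }
            }
        }
    }
}

With the database living inside the same Java virtual machine as the portal, a simple page query avoids the network round trip to a separate database server, which is one plausible reason for the numbers we saw.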
Still, these results are very promising to those interested in mixing and matching in their stacks. Businesses evaluating IT stacks, especially for running simple transactional applications, should definitely consider a stack that uses a nonstandard but potentially high-performance database.
Linux JBoss
For our tests of this stack, we decided to include JBoss Portal. JBoss is one of the top open-source Java application server vendors, but JBoss Portal is relatively immature compared with the other portal applications in this evaluation, which have been around for years and have seen a lot of real-world testing and refinement.
That immaturity is perhaps why this stack performed relatively poorly. We ran JBoss on CentOS with MySQL, and, while the stack wasn't the worst performer in all tests, it always tended toward the bottom of the scale. (In JBoss' defense, JBoss Portal on Windows performed considerably better than JBoss on this CentOS Linux setup.)
We think JBoss Portal will work out most of its kinks across platforms in time, just as its app server sibling has.
It would be worthwhile to run JBoss on this setup with additional tuning, and also to run it with other databases, including the Hypersonic SQL database that performed so well in the Liferay-based stack test.
Linux Python
One potential iteration of the LAMP stack has Python in the “P” position.
The most popular core applications in the Python stack—namely, the Zope application server and the Plone portal and content management system—are not only agnostic when it comes to the rest of the stack but actually tend to replace stack components with their own systems. For our tests, we ran what is essentially a pure Zope/Plone implementation, with Plone running on a SUSE Enterprise Linux system.
In some benchmarks, Plone was an average performer, sticking close to the middle. This is actually better than we expected, given that the Plone documentation is very upfront about the fact that Plone shouldn't be used alone in a production environment and should be run behind other servers to improve performance.
We saw why in the average-transactions-per-second and average-download numbers, where Plone on SUSE was among the slowest of the systems we tested (with an average page download time of 156.22 seconds and an average document download time of 75.94 seconds).
Most of these numbers came from the end of the test, when Plone started showing the stress of the high loads it was enduring. Still, in an internal server setting, a naked Python-based stack using Plone would most likely run well, based on the numbers we saw.
.Net
A few years ago, Microsoft threw the .Net moniker around so aggressively in so many areas that it became difficult to figure out exactly what the term meant. But, as the irrational exuberance of that marketing blitz finally receded, .Net went back to being what it was originally intended to be: the name of Microsoft's server and service stack.
To test the .Net stack, we ran Windows Server 2003 R2, SQL Server 2005 and SharePoint Portal Server 2003. Across the board, this configuration performed very well, with the top overall average throughput (by far) at 4.59Mbps.
To a large degree, we credit this strong showing to the high level of integration that exists among the components of this stack. While most of the open-source and Java systems are developed independently of each other, each of the .Net components is designed specifically to integrate and perform well together.
Even if the .Net stack had bombed convincingly in these tests, it would probably still maintain popularity in many companies. But its strong showing should give companies confidence that the .Net stack will handle most high-level enterprise needs.
Windows and Open Source
Talk about your Romeo and Juliet: Neither the open-source community nor the Windows community seems able to accept a marriage of open-source server components and Windows operating systems.
IT managers trying to use open-source applications on Windows systems are often seen as cheapskates who should upgrade to the “real” Windows products. And many in the open-source community see WAMP setups as being fine for development or testing, but sadly lacking for real production environments and requiring a step up to “real” Linux and Unix implementations.
The tide may be turning, though.
Based on our forays into user forums for many top open-source enterprise applications, many IT managers are attempting to run open-source products on Windows servers—attracted, no doubt, by the benefits and efficiencies of using open source without having to become Linux administrators.
The results of our WAMP stack tests indicate that these folks might be on to something. Our WAMP stack setups included Windows Server 2003, Apache, MySQL and the PHP-based XOOPS; Plone running on Windows Server 2003 R2; and JBoss and MySQL on Windows Server 2003.
All three of these systems were among the leaders in average transactions per second, with JBoss on Windows far outpacing its Linux brethren, at 16.79 average transactions per second. These implementations also did very well in the download tests. Where they were mediocre was in average hits per second and average throughput.
Still, during the entire length of each test, none of these systems choked. Like the tortoise in the fable, they moved somewhat slowly, but they were steady. They never excelled, but they never broke down.
The results we saw with the WAMP stacks were probably the biggest surprise of our entire test. Enterprise IT managers shouldn't hesitate to look into the option of deploying open-source stacks on a Windows Server platform.
For some businesses, this will truly be the best of both worlds.
Contact Labs Director Jim Rapoza at jim_rapoza@ziffdavis.com.
Check out eWEEK.com for the latest news, reviews and analysis in programming environments and developer tools.