Sun and SAS use two servers to achieve sustained data throughput of 3.9 terabytes in 1 hour.
Sun Microsystems and SAS Institute jointly trumpeted on May 17 what they described as a "world record for data integration" into a business intelligence warehouse: the extraction, transformation and loading of 3.9 terabytes of data, at sustained throughput, in 1 hour using two servers.
The companies used a Sun Fire E25K server running 48 1.5GHz UltraSPARC IV+ dual-core processors on the Solaris 10 operating system, a Sun StorEdge file-sharing system and the SAS Enterprise Data Integration Server to process 3.9TB of data in 1 hour, which corresponded to a raw performance of 81.2GB of throughput per hour, per CPU.
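The per-CPU figure follows directly from the totals Sun reported. A quick back-of-the-envelope check (assuming decimal units, 1TB = 1,000GB, as vendor benchmarks typically use):

```python
# Sanity-check Sun's reported per-CPU throughput.
# Assumes decimal units (1 TB = 1,000 GB), common in vendor benchmarks.

total_tb = 3.9   # terabytes processed in the one-hour run
cpus = 48        # dual-core UltraSPARC IV+ processors in the E25K

gb_per_hour_per_cpu = total_tb * 1000 / cpus
print(f"{gb_per_hour_per_cpu:.2f} GB/hour per CPU")  # → 81.25 GB/hour per CPU
```

Rounded to one decimal place, this matches the 81.2GB-per-hour-per-CPU figure the companies cited.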
The StorageTek 3510FC arrays provided high read/write storage bandwidth and connectivity to multiple host domains, and the Solaris 10 OS delivered high performance even at 100 percent system utilization, Sun said.
The Sun Fire server can pack up to 72 dual-core UltraSPARC IV+ processors, 576GB of RAM and 72 I/O channels in a shared-memory symmetric multiprocessing (SMP) architecture. The hardware partitioning capability of the Sun Fire server, combined with Solaris Containers, created a customized compute node used to run the large-scale compute task.
In its statement after the performance benchmark, SAS, headquartered in Cary, N.C., claimed its DI server would "outperform any other vendor's tools by more than 250 percent," since the previous benchmark had been 1.62TB of data throughput in 1 hour.
The demonstration was one in a series of announcements from SAS at its annual European showcase, the SAS Forum International in Geneva, where the company presented its latest software and services and put a major emphasis on performance.