IBM and France's GENCI high performance computing agency team up to bring supercomputing closer to exascale.
There was steady growth even as the global economy, data center transitions and shrinking midrange demand pressured the space, IDC and Gartner say.
At the Hot Chips show, Oracle officials reportedly unveiled plans for an upcoming SPARC chip that will be aimed at scale-out environments.
The DSS business builds on the DCS program Dell launched eight years ago for hyperscale companies like Google, Facebook, Microsoft and Baidu.
IBM announced new Linux-only mainframe systems to further tap into the fastest-growing operating system in the industry.
The chip company may be interested in incorporating the programmable processors into its growing semi-custom chip business.
Conventional interconnected IT environments—whether virtualized, cloud-enabled or neither—leave organizations more vulnerable to data breaches than ever before. Why is this the case? With increasing numbers of mobile users and virtual workloads, more application programming interface (API) integrations, rich partner and cloud interconnections, and rapid application adoption, it is no longer possible to rely on zone-based perimeter security. Attack surfaces are increasing. The armored-car approach—bullet-proofing the central server/networking/storage complex—simply isn't doing the job. Even new techniques like micro-segmentation, which divides a network into smaller zones and provides protection by making security adaptive and multilayered, are unproven. What's a data center manager to do? Objective self-evaluation is necessary. eWEEK, using resources that include our own archives, information from Forrester Research and industry insight from Skyport Systems, discusses in this slide show the most common reasons why servers and data itself are still as vulnerable as ever.
The chip maker unveiled some details of its upcoming Exascale Heterogeneous Processor in a paper submitted to the IEEE.
The president's executive order creates a federal body to coordinate government efforts to push development of the first exascale computer.
The company is developing a technology that will measure the power used by applications at the processor core level.
NEWS ANALYSIS: 3D XPoint is an entirely different architecture that ostensibly will improve data movement by orders of magnitude.
The systems in the vendor's Cool-Central Liquid Cooling Solutions will cut energy costs in half and increase density up to five times, officials say.
The giant tech vendor's agreement to buy wind energy over 12 years will help HP beat its greenhouse gas reduction goals by five years.
Although Oracle has its own Linux operating system, it continues to push forward on its Solaris Unix OS. Oracle recently rolled out a beta preview release of the next-generation Solaris 11.3, which builds on improvements and innovations that Oracle has been developing since the Solaris 11 release in November 2011. Solaris 11.1 debuted in October 2012 and provided incremental updates to the Unix platform. Solaris 11.2, which debuted in July 2014, included an integrated OpenStack Havana cloud distribution. In Solaris 11.3, Oracle is updating the OpenStack distribution to the Juno cloud milestone. While the cloud is a key focus in all Solaris 11.x releases, so too is file system performance with Oracle's ZFS, or Zettabyte File System. In Solaris 11.3, ZFS is enhanced with LZ4 compression support to further boost storage capabilities. While Solaris can run on both x86 and on Oracle's Sparc silicon, only Sparc users will benefit from Solaris 11.3's new application data integrity (ADI) feature. ADI works with SPARC hardware and can help detect common memory errors. Take a look at key features in Oracle's Solaris 11.3.
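For readers curious how the new LZ4 option would be used in practice, ZFS compression is set per dataset with the standard `zfs` command-line tool. A minimal sketch follows; the pool and dataset names are placeholders, not taken from the article:

```shell
# Enable LZ4 compression on a hypothetical dataset
# (rpool/export/home is an example name, not from the article).
zfs set compression=lz4 rpool/export/home

# Verify the setting and check the achieved compression ratio
# for data written after the property was set.
zfs get compression rpool/export/home
zfs get compressratio rpool/export/home
```

Note that the `compression` property applies only to newly written blocks; existing data is not recompressed in place.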
In the current world of supercomputers, the more things change, the more they stay the same. For the fifth consecutive time, China's massive Tianhe-2 (also known as Milky Way-2) system topped the Top500 list of the world's fastest supercomputers, essentially doubling the performance capabilities of Titan, a U.S.-based system that came in second. In fact, the roster of the top 10 fastest systems looks familiar, with only one new system—Shaheen II, at the King Abdullah University of Science and Technology in Saudi Arabia—breaking into the ranks since the twice-yearly list was last released in November 2014. The organizers of the Top500 list—released July 13 at the International Supercomputing Conference in Germany—noted several trends within the high-performance computing space. While the total combined performance of all 500 systems on the list grew from 309 petaflops in November to 361 petaflops, that rate of growth was slower than the long-term trend. In addition, outside of Shaheen II, the other systems in the top 10 were installed in either 2011 or 2012, continuing a slowing trend in turnover among the top systems that began in 2008, organizers said.