Intel officials a year ago kicked off what they called the Intel Cloud for All strategy, an effort designed to accelerate the adoption of cloud computing by businesses. The strategy covers not only the chip maker’s silicon products for cloud environments, but also work to help create the ecosystem around cloud infrastructures and open standards.
At the time, the officials said that while businesses want to embrace the cloud, the technology is too complex to implement on a large scale, which has been a drag on cloud adoption by the enterprise.
At an event in San Francisco March 31, Diane Bryant, senior vice president and general manager of Intel’s Data Center Group, introduced the latest generation of Xeon E5 server and workstation processors, which she and other officials said have the features and enhancements that will enable businesses to more easily adopt cloud computing and manage the processing demand coming from the fast-growing mobile computing space.
At the same time, the world’s largest chip maker also introduced new solid-state drives—including the company’s first 3D NAND drives that are optimized for cloud and enterprise workloads—and announced collaborations with a broad array of cloud software and solutions providers and new industry programs aimed at making it easier for enterprises to run their workloads in the cloud. Among the partners are VMware, CoreOS, Mirantis and the Cloud Native Computing Foundation, a Linux Foundation project aimed at creating computing optimized for distributed system environments.
“For today’s enterprises, competitiveness increasingly hinges on IT agility,” Jennifer Huffstetler, director of data center product marketing, wrote in a post on the company blog. “In simple terms, a digitally driven business—which is now virtually all businesses—can move no faster than its IT infrastructure. This new reality for the enterprise is accelerating the shift to public, private, and hybrid clouds enabled by software defined infrastructure (SDI).”
According to company officials, about half of all applications now are delivered through the cloud, a figure that will increase to about 85 percent by 2020. In addition, about 66 percent of cloud demand comes from consumer services offered by the top cloud service providers. Enterprises and service providers are pushing to grow their cloud computing capabilities more quickly.
However, for many of these businesses, getting to the cloud can be difficult, hindered by complexity, fragmentation of solution stacks and a lack of key features, Huffstetler wrote. The company has been trying to address those concerns through the Cloud for All initiative and its Cloud Builders program. The new products announced in San Francisco are further steps in that direction, and at the center of them are the new Xeon E5-2600 v4 “Broadwell” processors.
The new processors, 27 in all, offer performance and efficiency gains over the current “Haswell” Xeon E5 chips. They are built on a smaller 14-nanometer process, versus 22nm for Haswell; come with up to 22 cores, compared with a maximum of 18 in Haswell; and support more threads, up to 44 compared with up to 36. On transistor count, the most powerful of the Broadwell chips packs 7.2 billion transistors, compared with 5.6 billion in its Haswell counterpart.
The new chips offer up to 44 percent better performance, 24 percent faster message rate due to the use of Intel’s Omni-Path architecture (which will come later in the year), improved server utilization and better virtualization performance, including reducing virtual machine (VM) downtime by as much as 70 percent.
The new processors, designed for scale-out environments, will help create the SDI that will offer the flexibility, agility and scalability needed to drive adoption of the cloud, Intel officials said during a workshop with journalists earlier this month. Among the new features is Intel’s Resource Director Technology for improved orchestration and utilization inside the data center. The technology enables fully automated clouds and better control over shared resources, such as processor caches and main memory, enhancing the agility and scalability of the infrastructure.
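To make the resource-control idea concrete, here is a minimal, hypothetical sketch of how an operator might carve out last-level cache for one workload through the Linux “resctrl” interface, which exposes Resource Director Technology on recent kernels. The group name, cache-way bitmask and process ID are illustrative assumptions, not details from Intel’s announcement.

```c
/* Sketch only: partition L3 cache for a workload via Linux resctrl.
 * Assumes a recent kernel with resctrl mounted (as root):
 *   mount -t resctrl resctrl /sys/fs/resctrl
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

static void write_file(const char *path, const char *text)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    fputs(text, f);
    fclose(f);
}

int main(void)
{
    /* Create an allocation group for a latency-sensitive service. */
    mkdir("/sys/fs/resctrl/web_frontend", 0755);

    /* Give the group eight L3 cache ways on socket 0; the bitmask
     * format is defined by the group's schemata file. */
    write_file("/sys/fs/resctrl/web_frontend/schemata", "L3:0=ff\n");

    /* Move a hypothetical process (PID 4242) into the group so its
     * cache occupancy is isolated from noisy neighbors. */
    write_file("/sys/fs/resctrl/web_frontend/tasks", "4242\n");

    return 0;
}
```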
The new chips have 20 percent more cache, support faster memory, and include new security features for workload isolation, faster cryptography through the ADOX and ADCX instructions, and security policy enforcement. They range from four to 22 cores, with clock speeds between 1.7GHz and 3.5GHz and power envelopes between 55 watts and 160 watts.
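As a rough illustration of how ADOX and ADCX speed up cryptography, the sketch below adds two 256-bit integers of the kind used in RSA and elliptic-curve arithmetic through the _addcarryx_u64 compiler intrinsic, which maps to the new instructions on ADX-capable parts. The limb layout and sample values are assumptions for illustration, not code from Intel.

```c
/* Sketch: multi-precision add using the ADX intrinsic.
 * Build with something like: gcc -madx adx_add.c
 */
#include <stdint.h>
#include <stdio.h>
#include <immintrin.h>

/* Add two 256-bit numbers stored as four 64-bit limbs, least
 * significant limb first; returns the final carry out. */
static unsigned char add256(const uint64_t a[4], const uint64_t b[4],
                            uint64_t out[4])
{
    unsigned char carry = 0;
    for (int i = 0; i < 4; i++) {
        unsigned long long limb;
        /* _addcarryx_u64 compiles to ADCX/ADOX, which let the CPU
         * track independent carry chains in big-number math. */
        carry = _addcarryx_u64(carry, a[i], b[i], &limb);
        out[i] = limb;
    }
    return carry;
}

int main(void)
{
    uint64_t a[4] = { ~0ULL, ~0ULL, 0, 0 };  /* 2^128 - 1 */
    uint64_t b[4] = { 1, 0, 0, 0 };          /* 1 */
    uint64_t sum[4];
    unsigned char c = add256(a, b, sum);     /* expect 2^128, carry 0 */
    printf("carry=%u limb2=%llu\n", c, (unsigned long long)sum[2]);
    return 0;
}
```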
The new SSDs, the DC P3320 and P3520 Series for scale-out environments, are optimized for the Xeon E5-2600 v4 chips and are the first SSDs from Intel to use high-density 3D NAND technology for more efficient and dense storage. The P3320 offers up to five times the performance of SATA-based SSDs. In addition, Intel’s D3700 and D3600 Series SSDs, aimed at scale-up infrastructures, are the company’s first to include dual-port PCIe using the Non-Volatile Memory Express (NVMe) protocol. The D3700 SSDs offer up to six times the performance of current dual-port SAS offerings.
“We’ve essentially stripped out as much latency for the storage stack that you possibly can,” Greg Matson, director of data center product marketing and strategic planning for Intel’s NVM Solutions Group, said during the workshop earlier this month.
Server makers including Hewlett Packard Enterprise, Dell, Lenovo and Supermicro are offering new and enhanced systems based on the Xeon E5-2600 v4 chips.
Patrick Moorhead, principal analyst with Moor Insights and Strategy, said the performance and efficiency gains in the new chips are impressive. What is even more impressive, Moorhead told eWEEK, is Intel’s work to build broad ecosystems around its architecture through programs like Cloud Builders and the new Storage Builders group. Intel dominates the server space, with more than 90 percent of the market. However, particularly in the cloud space, competitors are trying to gain more traction, including IBM’s OpenPower effort and ARM chip makers like Qualcomm, which is developing a 24-core system-on-a-chip (SoC) and appears to be getting the support of Google.
Developing ecosystems around the Intel Architecture will not only bring more support, but also give Intel another weapon as it pushes back against rivals.
“Competitively, it gives you an advantage even if someone comes out with a better widget,” Moorhead said. “They’ve built an ecosystem that’s going to be hard for someone else to keep up with.”
In addition, Intel is aggressively building out its capabilities in the data center beyond the processors themselves and into everything from memory, storage and networking to accelerators like GPUs and field-programmable gate arrays (FPGAs).
“This is a platform play that we’re seeing,” the analyst said. “It’s very smart.”
The ecosystem efforts include collaborating with CoreOS and Mirantis to use open-source technologies for container orchestration and VM-based applications, and with VMware to create a network of Centers of Excellence to accelerate cloud deployments. With the Cloud Native Computing Foundation, Intel is building what officials are calling the world’s largest testing cluster—with more than 1,000 Xeon-based server nodes—for cloud apps. In addition, the company is expanding its Cloud Builders program to include use cases around SDI.
The new Storage Builders group is designed to help develop and promote next-generation storage technologies for the cloud. Intel has more than 300 members in its cloud, storage and networking builders programs.
The Storage Builders group will create “a collaborative environment that will make it easier to innovate, build, and operate next-generation storage solutions for enterprises, cloud service providers, and communication service providers,” Andrea Nelson, director of product marketing at Intel, wrote in a post on the company blog. “The program will act as a central source of information on next-generation solutions that meet a variety of storage use cases.”