    Intel, Nvidia Trade Shots Over AI, Deep Learning

By Jeff Burt | Published August 24, 2016


      Artificial intelligence and machine learning are opening up another front in the sprawling competition between Intel and Nvidia that has stretched from the data center and high-performance computing environments to autonomous vehicles.

  Both chip makers see the nascent artificial intelligence (AI) space, and the machine learning that helps enable it, as a key growth area. Each has made significant recent pushes into the market, and each views the other as its primary competitor.

  Now the two companies are vying for AI mindshare, each touting its own technologies while casting doubt on the other’s. As Intel executives at last week’s Intel Developer Forum (IDF) unveiled a range of moves to address the needs of the emerging market, Nvidia officials published a post on the vendor’s blog questioning some of the benchmark numbers Intel used to compare its many-core Xeon Phi processors with Nvidia’s GPUs.

  Ian Buck, vice president and general manager of Nvidia’s accelerated computing unit, used the blog post to challenge the methods Intel employed to inflate Xeon Phi’s benchmark numbers.

      “While we can correct each of their wrong claims, we think deep learning testing against old Kepler GPUs and outdated software versions are mistakes that are easily fixed in order to keep the industry up to date,” Buck wrote. “It’s great that Intel is now working on deep learning. This is the most important computing revolution with the era of AI upon us and deep learning is too big to ignore. But they should get their facts straight.”

  In his own blog post this week, Jason Waxman, corporate vice president in Intel’s Data Center Group and general manager of the company’s Data Center Solutions Group, pushed back, citing what he called the company’s strong position as the AI market grows and the worry that may cause competitors.

      “However, arguing over publicly available performance benchmarks is a waste of time,” Waxman wrote. “It’s Intel’s practice to base performance claims on the latest publicly available information at the time the claim is published, and we stand by our data.”

  The argument echoes similar ones the two companies have made in the past in areas such as high-performance computing (HPC), where Intel processors are the dominant CPUs in the systems but Nvidia’s GPUs are increasingly being used as accelerators to boost the performance and power efficiency of the machines. Intel has responded with Xeon Phi, which initially could be used only as a coprocessor for accelerating performance but, since last year’s release of the 72-core Knights Landing chip, can now serve as the primary processor.

      Intel has argued that running HPC workloads on its x86-based architecture—both its Xeon and Xeon Phi chips—makes sense, while Nvidia officials have said GPUs offer greater performance in parallel processing environments.

      Some of the debate around AI is similar. Nvidia executives for the past several years have said that AI and machine learning—which aims to train neural networks to enable artificial intelligence so systems can learn from experience, much like a human brain does—are key technologies for the company’s future. In April, Nvidia unveiled the Tesla P100, a massive chip based on Nvidia’s 16-nanometer Pascal architecture that packs 150 billion transistors, as well as the DGX-1, a supercomputer for deep learning and AI that combines eight Tesla P100 GPUs with two Intel Xeon server chips to drive 170 teraflops of performance in a 3U (5.25-inch) form factor.

      “Deep learning has the potential to revolutionize computing, improve our lives, improve the efficiency and intelligence of our business systems, and deliver advancements that will help humanity in profound ways,” Nvidia’s Buck wrote. “That’s why we’ve been enhancing the design of our parallel processors and creating software and technologies to accelerate deep learning for many years. Our dedication to deep learning is deep and broad. Every framework has NVIDIA-optimized support, and every major deep learning researcher, laboratory and company is using NVIDIA GPUs.”

  In his own blog post, Intel’s Waxman said Intel is “inherently well-positioned to support the machine learning revolution.” Intel chips, he noted, power more than 97 percent of servers used for running machine learning workloads, and “while there’s been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU.”

      He also noted other efforts by Intel in the AI space, including the planned release of a Xeon Phi chip dubbed “Knights Mill” with enhanced variable precision and flexible high-capacity memory that is aimed at AI workloads, the commitment to open frameworks for machine learning—including Caffe and Theano—and the acquisition of Nervana Systems and its machine learning technologies.

      Jeff Burt
      Jeffrey Burt has been with eWEEK since 2000, covering an array of areas that includes servers, networking, PCs, processors, converged infrastructure, unified communications and the Internet of things.
