Today’s topics include the FCC investigating an alleged rules violation by a major telecom carrier, and IBM and Nvidia partnering on a converged system for AI workloads.
The Federal Communications Commission is withholding a $4.53 billion payout to wireless broadband providers while it investigates charges that a major U.S. carrier has overstated its wireless coverage. The FCC Mobility Fund grants are intended to help offset the cost of extending wireless service to thinly populated areas, and the amount a carrier gets paid depends on its coverage.
FCC Chairman Ajit Pai announced he’s launching an investigation into whether one or more wireless carriers violated the rules for receiving Mobility Fund grants.
The preliminary review of nearly 21 million speed tests appears to show that at least one wireless carrier isn’t delivering service as claimed. If the investigation confirms those findings, the carrier will receive a smaller payout and could face additional consequences.
While the FCC has not said specifically that it is investigating Verizon, the Rural Wireless Association has filed a request for an investigation into Verizon’s claims about its 4G LTE coverage.
IBM, in partnership with Nvidia, is expanding its artificial intelligence capabilities with a converged system that enables organizations to more easily access the massive amounts of data crucial to running such workloads. The system includes IBM’s Spectrum Scale scale-out file system and Nvidia’s GPU-based DGX-1 supercomputer.
According to Eric Herzog, vice president of product marketing and management for IBM storage systems, “The choice of storage is critical to [organizations’] success. Data scientists need access to large, readily accessible quantities of data supported by a wide variety of data tools. High-performance, multi-protocol shared storage for the latest AI and data tools, like TensorFlow, PyTorch and Spark, gives data teams faster access to more data with less complexity, lower costs and reliability.”
Herzog said the new converged system can “support the AI data pipeline from data preparation to training, inference and archive.”
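The four pipeline stages Herzog names can be sketched in miniature. This is a hypothetical toy illustration of the concept only, not IBM or Nvidia code: every function name, the toy "model" (a simple mean), and the storage path are invented for the example.

```python
# Toy sketch of an AI data pipeline: preparation -> training -> inference -> archive.
# All names and logic are illustrative stand-ins, not IBM/Nvidia APIs.

def prepare(raw):
    """Data preparation: normalize raw readings into the 0-1 range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train(samples):
    """Training: 'learn' a trivial model, here just the sample mean."""
    return sum(samples) / len(samples)

def infer(model, x):
    """Inference: classify a new value against the learned mean."""
    return "high" if x > model else "low"

def archive(model, path_like):
    """Archive: stand-in for persisting the model to shared storage."""
    return {"path": path_like, "model": model}

raw = [3.0, 7.0, 5.0, 9.0]
model = train(prepare(raw))          # mean of the normalized samples
label = infer(model, 0.9)
record = archive(model, "/shared/models/mean-v1")  # hypothetical path
```

In a real deployment, the point of shared multi-protocol storage like Spectrum Scale is that each stage can read and write the same data without copying it between silos.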