Enterprises using SmartNICs are reaping network performance benefits, thanks to the cards' ability to accelerate traffic and boost agility through programmability.
What exactly is a SmartNIC, and what does it do? A NIC (network interface card) plugs into a server or storage array to enable connectivity to an Ethernet network. A DPU (data processing unit)-based SmartNIC goes beyond a NIC’s simple connectivity by implementing network traffic processing on the NIC that would normally be performed by the CPU (central processing unit). Using its own on-board processor, the SmartNIC is able to perform any combination of encryption/decryption, firewall, TCP/IP and HTTP processing.
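The division of labor described above can be sketched as a toy dispatcher. This is purely illustrative: the capability set and stage names are assumptions, not a real SmartNIC driver API.

```python
# Illustrative sketch only: model how packet-processing stages might be
# split between a SmartNIC and the host CPU. The capability set below is
# a hypothetical example, not an actual card's feature list.

SMARTNIC_OFFLOADS = {"decrypt", "firewall", "tcp_reassembly"}  # assumed capabilities

def plan_pipeline(stages):
    """Assign each processing stage to the SmartNIC if it is offloadable,
    otherwise to the host CPU."""
    placement = []
    for stage in stages:
        target = "smartnic" if stage in SMARTNIC_OFFLOADS else "cpu"
        placement.append((stage, target))
    return placement

pipeline = ["decrypt", "firewall", "tcp_reassembly", "http_parse"]
for stage, target in plan_pipeline(pipeline):
    print(f"{stage:16s} -> {target}")
```

In a real deployment the split is decided by the NIC's firmware and driver, but the principle is the same: work the card advertises as offloadable never touches the host CPU.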
In this article, using industry information from reconfigurable computing provider Napatech, we offer five data points on why SmartNICs are currently seeing an upswing in use in next-gen enterprise systems.
Data Point No. 1: Networks are under unprecedented stress.
The rise of trends including AI, ML, heightened cybersecurity requirements, hyperscale architectures and cloud services has placed unprecedented demands on the network, particularly with respect to performance and uptime. These factors, along with the COVID-induced spike in network use, are driving an increase in network bandwidth, number of users and number of active network flows – all with increased compute complexity. The growth in network traffic and the increasing sophistication of attack vectors are placing enormous strain on the CPUs of compute nodes in server infrastructure.
Data Point No. 2: What’s driving the demand for network performance.
A number of factors are driving the demand for better network performance, including higher throughput and lower latency. Many real-time applications and services must be deployed close to the network edge, with far lower latency, to be effective. Examples include video sessions (Zoom, Microsoft Teams, etc.), 5G and autonomous vehicles.
Other factors include the need to support traditional network services alongside those on the cusp of incredible growth, such as 5G and IoT, which place enormous performance demands on networks. Similarly, there's a need to support a higher number of sessions and flows, processed statefully, to accommodate more users and applications.
Additional factors include:
- feature velocity to keep pace with the speed of innovation in software-defined networks;
- the need to improve the security posture in a distributed cloud and edge network design, which requires additional compute power at higher network bandwidths; and
- operation, orchestration and management for a massive number of network elements at scale.
Data Point No. 3: Offloading the packet processing workload.
One of the ways these systems are adapting is by offloading more of the packet processing workload from the CPU to an FPGA-based SmartNIC. SmartNICs boost server performance in the cloud and private data centers by offloading networking workloads and tasks from server CPUs. As data center network traffic and compute complexity expand, this architecture moves certain workloads off the general-purpose compute cores and onto the SmartNIC, making the overall solution more efficient.
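A back-of-the-envelope model makes the efficiency argument concrete. All of the figures below (clock speed, packet rate, cycles per packet) are assumed for illustration; they are not measurements from the article.

```python
# Back-of-the-envelope model with ASSUMED numbers: estimate how many CPU
# cores are freed when per-packet networking work moves to a SmartNIC.

CLOCK_HZ = 3.0e9              # assumed 3 GHz cores
PKTS_PER_SEC = 30e6           # assumed 30 Mpps of traffic per server
CYCLES_PER_PKT_SW = 500       # assumed cycles/packet for software switching
CYCLES_PER_PKT_RESIDUAL = 50  # assumed residual host work after offload

def cores_needed(cycles_per_pkt):
    """Cores fully occupied by networking at the given per-packet cost."""
    return PKTS_PER_SEC * cycles_per_pkt / CLOCK_HZ

before = cores_needed(CYCLES_PER_PKT_SW)
after = cores_needed(CYCLES_PER_PKT_RESIDUAL)
print(f"cores for networking before offload: {before:.1f}")
print(f"cores for networking after offload:  {after:.1f}")
print(f"cores returned to applications:      {before - after:.1f}")
```

Under these assumptions, offloading returns several full cores per server to the application layer; the exact figure obviously depends on traffic profile and hardware.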
Data Point No. 4: Many data plane workloads are best supported by SmartNICs.
In any virtualized network infrastructure, there are significant data plane networking requirements within the server that fall outside the scope of the application itself. These networking workloads are particularly costly in compute terms: virtual switching alone can consume more than 90% of a server's available CPU resources. Network administrators and architects want those cycles spent on the applications running in VMs, the ones that generate revenue or that monitor and protect the network. Offloading networking tasks gives those important resources back to the application layer.
Cryptography algorithms are among the most rapidly changing aspects of data plane processing, and also the most complex and compute-intensive. SmartNICs offload this costly task while remaining programmable, providing the option to deploy a new crypto algorithm in hardware simply by updating the SmartNIC software.
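The "new algorithm by software update" idea can be sketched in miniature as a pluggable registry of crypto routines, loosely analogous to reloading a SmartNIC image. The registry structure and names here are illustrative, not any vendor's API; only standard-library HMAC is used.

```python
# Sketch of algorithm agility via a pluggable registry (illustrative only,
# not a SmartNIC vendor API). "Deploying" a new algorithm is just
# registering another routine, mirroring a SmartNIC software update.

import hashlib
import hmac

CRYPTO_REGISTRY = {}

def register(name):
    """Decorator that installs a crypto routine under a given name."""
    def deco(fn):
        CRYPTO_REGISTRY[name] = fn
        return fn
    return deco

@register("hmac-sha256")
def mac_sha256(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# Later, rolling out a newer algorithm requires no hardware change:
@register("hmac-sha3-256")
def mac_sha3(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha3_256).digest()

tag = CRYPTO_REGISTRY["hmac-sha3-256"](b"key", b"payload")
print(tag.hex())
```

On a real SmartNIC the routine would run in the card's FPGA or processor rather than in host Python, but the operational point stands: the algorithm is a software artifact, swappable without replacing silicon.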
Data Point No. 5: Truths about SmartNICs.
Organizations need to understand the alternatives and criteria for SmartNIC deployment. The simple answer would be to wait for the next generation of x86 processors to add compute power, but the undeniable fact is that Moore's Law no longer holds true. Simply "throwing compute" at the problem therefore cannot work.
SmartNICs can be price- and power-competitive with standard NICs, which undercuts the argument that SmartNICs are too expensive and too power-hungry. Nor are SmartNICs too complex: deploying them is as easy as deploying standard NICs and their software.
The use of SmartNICs dramatically reduces the total cost of ownership (TCO) of deploying network services at scale. Because SmartNICs increase the computing power of each compute node, fewer servers can provide the same compute as a solution using standard NICs, reducing upfront costs, footprint, power and cooling requirements.
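A short worked example shows the arithmetic behind the TCO claim. The workload size, per-server capacity and offload gain below are assumed round numbers for illustration, not vendor figures.

```python
# Worked TCO example with ASSUMED figures: if offloading returns CPU to
# applications, each server handles more work, so fewer servers are needed
# to deliver the same total capacity.

import math

TOTAL_UNITS = 1000          # assumed total workload, arbitrary units
UNITS_PER_SERVER_STD = 10   # assumed per-server capacity with a standard NIC
OFFLOAD_GAIN = 1.4          # assumed 40% of CPU returned to applications

def servers_required(units_per_server):
    """Servers needed to cover the total workload at a given capacity."""
    return math.ceil(TOTAL_UNITS / units_per_server)

std = servers_required(UNITS_PER_SERVER_STD)
smart = servers_required(UNITS_PER_SERVER_STD * OFFLOAD_GAIN)
print(f"standard NICs: {std} servers")
print(f"SmartNICs:     {smart} servers")
```

Fewer servers then cascade into the other savings the article mentions: less rack space, less power and less cooling, partially or wholly offsetting the per-card premium.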
SmartNICs are future-proofed. Because they are completely programmable, SmartNICs can assure organizations that their investment in the network deployment will stand the test of time. Non-programmable solutions are hampered by the long design times for ASICs, which deliver good performance but at the cost of being completely static. Given how quickly networks, protocols, encapsulations and crypto algorithms change, not to mention the rapidly shifting posture security professionals need to maintain, the ability to change hardware at the speed of software is paramount to success.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.