Network General on May 7 will deliver on some of the promises it made last September with the launch of its new Network Intelligence Suite and NetworkDNA Architecture.
Both the architecture and the suite, which Network General beefed up with new Virtualization Forensics, NetFlow data and application performance management functions, are intended to broaden the company's reach beyond the network engineering trenches and into higher-value business service management.
“We can still crack packets with the best of them, but [with the Network Intelligence Suite] we stand for business service assurance as much as anything else,” said CEO Bill Gibson, in San Jose, Calif.
Network General leveraged the business container technology it acquired with NetVigil in 2006 to provide more holistic management of applications that include VMware and Microsoft virtual servers, the company said.
The containers combine relationship data on the components that make up a business service. The new Virtualization Forensics function allows NetVigil users to create a business container that combines elements outside the virtual environment that help deliver a given service with virtualization activity data from VMware ESX servers and Microsoft Virtual Servers, for a single view in the container's dashboard.
That capability is something not addressed by virtualization vendors in the tools they offer along with their virtual servers, according to James Messer, director of technical marketing at Network General.
The Virtualization Forensics container for NetVigil can automatically find servers that make up a service as well as automatically define performance thresholds and alarms for them. It also isolates faults at the physical and virtual server level and allows operators to compare availability and performance data across virtual machines and physical devices that deliver a given business service.
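The container behavior described above, grouping the physical and virtual components of a service, auto-defining thresholds and isolating the components in breach, can be illustrated with a small sketch. The class names, the "2x baseline" threshold rule and the component fields below are hypothetical illustrations of the concept, not NetVigil's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One element of a business service: a physical host, a VM, etc."""
    name: str
    kind: str            # e.g. "physical", "vm-esx", "vm-msvs" (illustrative labels)
    response_ms: float   # latest observed response time

@dataclass
class BusinessContainer:
    """Groups every component that delivers one business service."""
    service: str
    components: list = field(default_factory=list)
    thresholds: dict = field(default_factory=dict)

    def add(self, comp: Component) -> None:
        self.components.append(comp)
        # Auto-define a simple alarm threshold: 2x the first observed
        # response time (a stand-in for a real learned baseline).
        self.thresholds[comp.name] = comp.response_ms * 2

    def alarms(self) -> list:
        """Fault isolation: return components breaching their threshold."""
        return [c.name for c in self.components
                if c.response_ms > self.thresholds[c.name]]
```

With a container built this way, a dashboard could compare `response_ms` across the virtual machines and physical devices in `components` and surface only the names returned by `alarms()`.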
The integration enabled through the business containers can help to speed troubleshooting for critical applications, said existing user David Jung, technical lead in the Infrastructure Engineering Security group at Hyundai Information Services North America, in Irvine, Calif.
“If you're using different tools, it takes 10 times longer [to troubleshoot a problem]. Now you just click on some symptoms and you can easily get there for specific issues,” he said.
The container technology and better integration and analytics of network, application and server performance metrics also improve the ability of different IT operations groups to collaborate to solve problems, said Jasmine Noel, an industry analyst with Ptak, Noel & Associates, in New York.
“The world is not siloed any more. If you are a network manager, you will have more visibility into what the applications are doing so you can communicate with the application owner [to resolve faults]. And it's better for application managers because they have more dashboard information so they don't have to call the network manager for more information,” Noel said.
Network General has also made it easier for IT groups to collaborate through new application performance dashboards for its Sniffer and InfiniStream network analysis tools. The dashboards provide a view into 180 different application performance metrics gathered from the network via the probes to help pinpoint application performance degradation, an exercise that is far more complex when done through packet decode analysis.
“We're doing full end-to-end identification of the application and looking at response times. We can compare response times over the last 30 days and see where the slowest response times are and then decide how to make them perform more efficiently,” Messer said.
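The comparison Messer describes, ranking applications by response time over a trailing window to find the slowest, amounts to a simple aggregation. The data shape (per-app lists of daily means) and the function name below are illustrative assumptions, not the product's interface.

```python
from statistics import mean

def slowest_apps(history: dict, top_n: int = 3) -> list:
    """Rank applications by mean response time over the window.

    history maps an application name to its list of daily mean
    response times (ms) over, say, the trailing 30 days.
    Returns the top_n slowest as (app, mean_ms) pairs.
    """
    ranked = sorted(history.items(),
                    key=lambda kv: mean(kv[1]),
                    reverse=True)
    return [(app, round(mean(times), 1)) for app, times in ranked[:top_n]]
```

The output gives an operator a short list of where tuning effort would pay off first.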
The dashboards can be customized to fit different roles in the IT organization. “We can create views for the network guy, the database guy [and] the Web server guy and share them, so they can all look at the same type of information,” he said.
Network General also correlated component-level metrics from those probes in NetVigil so that operators can compare the performance of each application with bandwidth utilization, router performance, server availability and more, to “bridge the gap between network level, server and service-level information from the different groups,” Messer said.
At a time when more and more applications are being deployed to streamline how customers interact with businesses, such capabilities are now more important than ever, Gibson said. “As an industry we are putting more applications to work. The ability to track and correlate performance of the application with the IT infrastructure and see how it impacts business service delivery is key,” he said.
Finally, Network General addressed the growing chorus of network operators looking to exploit NetFlow statistics gathered by switches and routers to gain some insight into the performance delivered to remote sites. The company added a new NetFlow collector option for its Visualizer reporting system to allow network operators to analyze and view NetFlow data alongside data gathered by Network General probes from a single console. The intent is to help reduce the complexity of NetFlow.
“We're making it more intelligent by combining NetFlow data with all the other data we collect and analyze. We apply that intelligence to a baseline [that changes over time as network utilization ebbs and flows],” Messer said. “We won't tell you you're above a certain range at times when that's normal in network traffic, so that reduces false positives.”
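The time-varying baseline Messer describes can be sketched as a per-hour traffic profile: an alert fires only when utilization exceeds what is normal for that hour of the day, so a routinely busy 9 a.m. does not trip an alarm. The functions and the three-sigma tolerance below are illustrative assumptions about how such a baseline might work, not Network General's algorithm.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """samples: list of (hour_of_day, utilization_pct) observations.

    Returns a per-hour (mean, stdev) profile, so 'normal' varies
    through the day as utilization ebbs and flows.
    """
    by_hour = {}
    for hour, util in samples:
        by_hour.setdefault(hour, []).append(util)
    return {h: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for h, v in by_hour.items()}

def is_anomalous(baseline, hour, util, k=3.0):
    """Flag only readings more than k standard deviations above the
    hour's own norm, which suppresses false positives during hours
    that are normally busy."""
    mu, sigma = baseline.get(hour, (0.0, 0.0))
    return util > mu + k * max(sigma, 1.0)
```

Under this scheme the same 85 percent utilization is unremarkable during the morning peak but anomalous at 3 a.m., which is exactly the false-positive reduction the quote claims.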
“NetFlow may not be the most user-friendly thing in the world, but it scales nicely for remote offices [where instrumentation is lacking],” said Dennis Drogseth, an industry analyst with Enterprise Management Associates.
The new offerings will become available between now and May 15.