Observability is a hot tech topic yet has also become one of the industry’s most overused buzzwords. The term means understanding the behavior, performance, and other aspects of cloud infrastructure and cloud apps based on the data they generate, such as metrics, events, logs and traces (MELT). Observability relies on telemetry that comes from endpoints and services in multicloud environments.
The reason observability has grown in importance is that managing user experience has become increasingly difficult with the shift to the cloud. IT no longer has the tight control it once did over the infrastructure, as it now runs in public clouds, people’s homes, and other complex environments like edge computing.
As the old axiom goes, “you can’t manage or secure what you can’t see,” and observability lets you see a lot more.
Observability Requires a Single Source of Truth
The challenge for IT buyers is that as infrastructure converges, the ability to have a single source of truth becomes critical to IT operations, especially network and security operations. That’s why observability has gained so much traction among IT teams, which are seeking greater visibility into increasingly complex computing environments.
In my latest ZKast interview, I spoke with Shehzad Merchant, Chief Technology Officer at Gigamon, a provider of network visibility solutions that can monitor usage all the way up the infrastructure stack, including containers, virtual machines, and the cloud.
Merchant discussed the importance of observability and using network intelligence to address today’s challenges. Highlights of the ZKast interview, done in conjunction with eWEEK eSPEAKS, are below.
- The personas in observability are primarily DevOps (software development and IT operations) and CloudOps (cloud operations). These personas focus on scripting, automation, and programming, and deal with very large data sets and complex infrastructure by querying data to extract meaningful intelligence.
- Observability is associated with infrastructure monitoring and application performance monitoring, but not security. Although observability can be applied to security, it’s not sufficient on its own. One of the first things bad actors do is to turn off telemetry, which creates gaps in observability.
- Observability typically looks at things from the inside out; the outside-in perspective is what’s lacking. That’s a challenge many organizations are facing across their hybrid cloud and multi-cloud environments.
- There needs to be consistency across all different cloud environments in terms of observability, telemetry, troubleshooting, and security. Vulnerabilities transcend boundaries and bad actors take advantage of inconsistencies.
- Deep observability—looking at network intelligence from the outside in—can be successfully applied to security. A key benefit is real-time, immutable telemetry that can’t be turned off or changed by bad actors. Deep observability offers a comprehensive view of hybrid networks and closes the gaps in observability.
- Network telemetry has a footprint that can go all the way up the stack. It can be used to identify apps, their behaviors, and dependencies. It’s relevant to both NetOps (network operations) and SecOps (security operations) teams. DevOps teams are good at scripting, automation, and working with infrastructure as code, but not at securing the higher layers of the stack.
- A shift is underway in which DevOps is intersecting with SecOps, and a new DevSecOps persona is beginning to evolve. As long as the two teams have access to the right telemetry data, the best of both worlds can come together. This can help security teams move faster toward cloud observability.
- In conclusion, observability is here to stay. It’s evolving beyond MELT, thanks to deep observability. Gigamon’s vision is to provide a solution for deep observability and fill the gaps that exist today in infrastructure and performance monitoring, as well as security.