Modern IT Infrastructures Demand A New Approach To Monitoring And Management


In an era of tight spending, reduced profit expectations, rising utility costs, and environmental concerns, enterprise computing infrastructures are increasingly reliant on outsourced managed service and hosting providers, thanks to the growing sophistication of virtualisation, SaaS, and cloud computing.

Together, these trends are marking the beginning of what some believe will be a move to the “all cloud enterprise”, an organisation that relies solely on externally hosted services for its computing infrastructure.

Whether or not this vision turns out to hold true for most enterprises, there is no doubt that external SaaS and cloud offerings are here to stay, and will play increasingly important roles in how enterprises deliver and consume business applications in the years ahead.

Opportunities and Challenges

Already, these trends are delivering significant benefits. But with those benefits come even more significant challenges for IT operations groups. Specifically, how do enterprises monitor and manage service levels in an environment with an increasing mix of externally hosted services? Clearly, in today’s always-on business environments, the benefits of adopting external services can’t come at the expense of service availability or performance.

Today, virtualisation, SaaS, cloud computing, and outsourced infrastructures have made it problematic for IT operations staff to understand, let alone control, service levels. Moving forward, as organisations rely increasingly on external platforms to deliver vital business services, the challenges will grow more pronounced, rendering legacy monitoring systems obsolete.

In order to successfully leverage these new service delivery platforms, IT operations teams will need a cohesive, sophisticated view of the disparate, remote services on which their business relies. This will enable IT operations to understand performance and availability of services, react to and prevent problems, and optimise service delivery—regardless of the type and combination of computing environments on which those services are based.

As a result, organisations can confidently leverage the opportunities of today’s emerging trends—without increasing staffing or operational costs, or requiring new tools or training.

Challenges of Monitoring Today’s Environments with Yesterday’s Tools

It is clear, then, that for all the great benefits promised by today’s emerging technologies, there is a flip side: each new paradigm presents a new set of demands. If IT monitoring and service level management didn’t present enough of a challenge in themselves, these new environments compound matters further, making service level management both more challenging and more critical to success.

Internal IT operations groups, already resource-constrained, therefore continue to play a vital role in monitoring and managing service levels, because a large part of the monitoring burden still falls on the end-user organisation across outsourced/hosted, SaaS, and cloud environments.

For most enterprises, it has traditionally been difficult to monitor and manage service level delivery cost-effectively. Even a business service housed entirely in the internal data centre was hard to monitor with traditional solutions. Much of the frustration with legacy solutions is that, to get a full view of the performance and availability of an internal business service, IT needs to deploy anywhere from six to twelve products.

Furthermore, these traditional solutions were architected long before the advent of virtualisation and were solely designed to manage internally hosted environments. In today’s emerging computing environment, these limitations and complexity make monitoring service levels next to impossible.

Take an e-commerce service, for example. In the very near future, that e-commerce service may rely on a SaaS vendor, a cloud-based service provider, an internal virtualised data centre, and more. If a customer reports an issue encountered during a transaction, how does an IT organisation quickly and accurately assess where the source of the issue lies?

Simply put, with traditional tools, it can’t. In the past, because of the complexity of their legacy systems, enterprises were frequently forced to settle for sub-optimal service monitoring, unacceptable TCO, and huge investments in staff time. Unfortunately, as outlined above, emerging trends will only exacerbate these penalties.

Five Steps for Successful Unified Monitoring

In order to affordably and effectively address today’s monitoring challenges, organisations need a monitoring solution that offers a truly unified perspective, one that provides several key capabilities:

1. An architecture that scales and extends to meet evolving challenges

Any architecture built for this environment must have the following characteristics:

  • High scalability. The architecture must scale both within and across environments.
  • A single, integrated set of components. Approaches that require multiple products, and integration among them, simply can’t be deployed cost-effectively in the hybrid environments of the future.
  • High availability. The architecture must be resilient to both component and communication failures (see the sketch after this list).
  • Rapid deployment. The architecture must be fast and easy to deploy into all relevant environments.
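By way of illustration, here is a minimal sketch of the resilience characteristic (plain Python; the hub endpoint, class, and method names are hypothetical, not any vendor’s API): a collector buffers messages locally and retries delivery, so a transient communication failure does not lose monitoring data.

```python
import json
import urllib.request

class BufferingPublisher:
    """Publish monitoring messages to a central hub, buffering locally so
    that a communication failure loses no data. Hypothetical sketch only."""

    def __init__(self, hub_url: str, max_buffer: int = 10_000):
        self.hub_url = hub_url            # assumed HTTP ingest endpoint
        self.max_buffer = max_buffer
        self.pending = []                 # oldest undelivered message first

    def publish(self, message: dict) -> None:
        if len(self.pending) < self.max_buffer:
            self.pending.append(json.dumps(message))
        self.flush()

    def flush(self) -> None:
        while self.pending:
            request = urllib.request.Request(
                self.hub_url,
                data=self.pending[0].encode(),
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(request, timeout=5)
                self.pending.pop(0)       # delivered; drop from the buffer
            except OSError:               # hub unreachable; keep and retry
                break
```

The same store-and-forward idea, applied between tiers of hubs, is also one plausible way such an architecture can scale across environments.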

2. A flexible and extensible data collection layer

To monitor today’s emerging computing environment, organisations need a means to collect monitoring data, wherever that data exists—including across disparate platforms, virtualised and non-virtualised environments, externally and internally hosted and managed systems, and SaaS and cloud environments.

Data collection must be extensible to accommodate new metrics, whether power or facilities measurements or metrics driven by elastic and virtualised computing capabilities. Collection must impose minimal overhead on the target systems and include the monitoring of log files. Lastly, the solution must be able to accommodate a combination of agent-based and agentless monitoring, depending on requirements and the accessibility of systems.
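As a rough sketch of what such a layer might look like (plain Python using only standard-library calls; the registry pattern, metric names, and file paths are illustrative assumptions, not a real product’s API), the example below registers pluggable collectors covering an agent-style local probe, an agentless network probe, and log-file monitoring:

```python
import subprocess
import time
from typing import Callable, Dict

# Registry of metric collectors: new metric types (power, facilities,
# elastic-capacity counters) plug in without changing the core loop.
COLLECTORS: Dict[str, Callable[..., dict]] = {}

def collector(name: str):
    def register(fn):
        COLLECTORS[name] = fn
        return fn
    return register

@collector("loadavg")                    # agent-style: reads local state
def collect_loadavg() -> dict:
    with open("/proc/loadavg") as f:     # Linux-specific path
        one, five, fifteen = f.read().split()[:3]
    return {"metric": "loadavg", "1m": float(one), "5m": float(five),
            "15m": float(fifteen), "ts": time.time()}

@collector("ping")                       # agentless-style: probes remotely
def collect_ping(host: str = "example.com") -> dict:
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],  # Linux flags
                            capture_output=True)
    return {"metric": "ping", "host": host,
            "up": result.returncode == 0, "ts": time.time()}

@collector("log-errors")                 # log-file monitoring
def collect_log_errors(path: str = "/var/log/syslog") -> dict:
    with open(path, "rb") as f:
        size = f.seek(0, 2)              # seek to the end to learn the size
        f.seek(max(size - 65536, 0))     # scan only the last 64 KiB
        errors = sum(1 for line in f if b"error" in line.lower())
    return {"metric": "log-errors", "path": path,
            "count": errors, "ts": time.time()}

if __name__ == "__main__":
    for name, collect in COLLECTORS.items():
        print(name, collect())
```

Under this pattern, a new metric type such as power or facilities data becomes just another registered function, leaving the core collection loop untouched.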

3. Business service correlation across multiple computing infrastructures

While a business service is ultimately composed of an array of systems and infrastructures, what matters in the end is the performance of the service itself, whether that’s e-commerce, email, or any other vital service the business relies on. Consequently, the monitoring data generated across disparate sources needs to be intelligently analysed and correlated in order to deliver service level insights.
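To make the idea concrete, here is a minimal correlation sketch (plain Python; the check names, sources, and the 500 ms threshold are invented for illustration): component checks from SaaS, cloud, and internal sources are rolled up into a single service status, using the e-commerce scenario described earlier.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Check:
    source: str          # e.g. a SaaS vendor, cloud provider, internal VM
    healthy: bool
    latency_ms: float

def service_health(checks: List[Check], latency_slo_ms: float = 500.0) -> dict:
    """Correlate component checks into one service-level view: the service
    is only as healthy as its weakest dependency."""
    failing = [c.source for c in checks if not c.healthy]
    slow = [c.source for c in checks
            if c.healthy and c.latency_ms > latency_slo_ms]
    status = "down" if failing else ("degraded" if slow else "up")
    return {"status": status, "suspects": failing + slow}

# The e-commerce service from the scenario above, spanning three platforms:
checks = [
    Check("saas-payments", True, 120.0),
    Check("cloud-storefront", True, 640.0),   # responding, but slowly
    Check("internal-db-vm", True, 35.0),
]
print(service_health(checks))
# {'status': 'degraded', 'suspects': ['cloud-storefront']}
```

A real deployment would use far richer correlation (topology-aware, weighted, with learned baselines), but even this toy version shows how disparate measurements become a single answer to the question of where the issue lies.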

4. Intuitive visualisation and robust reporting capability

All of the monitoring data being collected and aggregated needs to be useful. Toward that end, administrators and business management need visual, intuitive dashboards, alarms, and reports, and those views need to reflect real-time status. Further, views need to be tailored by role, so users get only the information they need or are authorised to see. IT operations groups must also be able to deliver multi-tenant portals, especially as they become more like service providers, delivering utility computing capabilities to groups of internal business customers.
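A minimal sketch of role-tailored views (plain Python; the roles, tenants, and service names are hypothetical) might look like this: each role, including a tenant of a multi-tenant portal, sees only the slice of real-time status it is authorised to see.

```python
# Map each role (or tenant) to the services it may see; a stand-in for a
# real authorisation model behind a multi-tenant portal.
VISIBILITY = {
    "exec": {"e-commerce", "email"},            # business-level summary
    "ops": {"e-commerce", "email", "vpn"},      # full operational view
    "tenant-hr": {"email"},                     # one internal tenant's slice
}

def dashboard_view(role: str, live_status: dict) -> dict:
    allowed = VISIBILITY.get(role, set())
    return {service: state for service, state in live_status.items()
            if service in allowed}

live_status = {"e-commerce": "degraded", "email": "up", "vpn": "up"}
print(dashboard_view("tenant-hr", live_status))   # {'email': 'up'}
```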

5. A flexible business model

Finally, without the right business model to support it, even the best solution won’t be fully adopted. To be viable, a product must be supported by a flexible pricing structure, one that takes into account the differences in various deployment approaches. For example, if utilising a managed service provider’s infrastructure and monitoring services, the enterprise should not be forced to “pay again” for leveraging the monitoring information provided. Cloud monitoring must be effectively licensed on a “pay-as-you-go” basis, and internal clouds should not drive up monitoring costs just because of their flexibility.
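As a simple illustration of the pay-as-you-go principle (plain Python; the rate and device counts are invented, not real prices), monitoring cost tracks actual usage, so an elastic internal cloud that scales down also scales its monitoring bill down:

```python
def monthly_monitoring_cost(device_hours: float,
                            rate_per_device_hour: float = 0.01) -> float:
    """Pay-as-you-go licensing: cost follows measured usage rather than a
    fixed per-device licence. The rate here is purely illustrative."""
    return round(device_hours * rate_per_device_hour, 2)

# 50 devices monitored around the clock for a 30-day month:
print(monthly_monitoring_cost(50 * 24 * 30))                  # 360.0

# The same estate bursting to 80 devices for one week of that month:
print(monthly_monitoring_cost(50 * 24 * 23 + 80 * 24 * 7))    # 410.4
```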

In conclusion, whilst virtualisation, SaaS, outsourcing, and cloud computing can deliver real and meaningful benefits across a range of organisations, those benefits come hand in hand with an increase in the complexity and criticality of monitoring service levels.

In order to take advantage of the benefits of these emerging service delivery platforms—without encountering rapidly escalating costs and complexity—organisations need a monitoring solution that offers a truly unified perspective. The good news is that unified modern IT monitoring and service desk management technologies are now coming to the fore. Increasingly, these will allow organisations to take full advantage of the promise offered by emerging computing approaches both for today and for the long term.


Philipp Descovich leads Nimsoft’s business in EMEA. Philipp joined Nimsoft in August 2011 and has since established solid relationships with service providers, enterprises, and public organisations, helping them to dramatically improve the value and competitiveness of their on-premise and cloud IT service offerings. Before taking on EMEA responsibilities at Nimsoft, Philipp substantially grew CA Technologies’ Service Assurance business in Europe, after working as Executive Assistant to CA Technologies’ Chief Executive Officer in New York, focusing on strategy and M&A activities. Prior to that, Philipp held various leadership positions within CA Technologies’ Sales and Technology Services organisation.