Free Your Data Centre’s Data!

Data centers and the equipment they house have come a long way since the days of the mainframe. Yet despite how different today's computing systems may look, they share a lot with their predecessors: cloud computing and virtualization look a lot like distributed mainframes, and the industry is moving back to old benchmarks that incorporate both power and workload.

Energy and Green IT have become the hot topics, and a recent article illustrated to me just how much confusion is out there in the market. Vendors are all competing for the same resources and budget, while the media simply doesn't have enough time to investigate the industry as thoroughly as analysts have done (and are still doing).

I’ve been encouraging analysts from Gartner, IDC, Forrester, and others to put out a “data center energy management landscape” document that sets the record straight on how the various vendors interact and compete. Unfortunately, no such document exists yet.

There’s a growing trend toward greater instrumentation. The trend started long before the Green IT movement: companies like HP, Dell, IBM, Fujitsu, and Sun started putting more measurement into their systems to ensure availability and to run closer to the limits. All of these systems today include environmental and power sensors built in. Even your desktop PC has monitored temperatures and fan speeds for almost as long as I can remember. But they’re not the only ones: building management systems and cooling infrastructure have sensors, variable frequency drives (VFDs) have sensors, PDUs now come with power meters, and even racks from APC can arrive with sensors installed.

These sensors are there to ensure the inlet temperature of the equipment stays below its rated threshold, thus ensuring the equipment remains available. Most IT operators are amazed when I show them this information from their own systems.

Since we’ve established that the sensors already exist today, are networked, and can be accessed through open standard APIs, what do we do with all of this data? This is the role of software. This is not the first time that an influx of data has created a huge need for software: Tivoli collects CPU, disk, memory, and system data to manage server performance, and Symantec tracks storage allocation, utilization, I/O throughput, SAN speeds, and configuration to manage storage performance.
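As a concrete illustration of how accessible this data already is, here's a minimal sketch that pulls environmental and power readings from a server's baseboard management controller over IPMI using the ipmitool utility. The BMC address and credentials are placeholders, and the sketch assumes ipmitool is installed and the controller is reachable over the network; any monitoring software can start from readings like these.

```python
import subprocess

def read_ipmi_sensors(host, user, password):
    """Return {sensor_name: (value, unit)} for every IPMI sensor that
    reports a numeric reading on the given baseboard management controller."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, "sensor"]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    readings = {}
    for line in output.splitlines():
        # ipmitool's "sensor" command prints pipe-delimited rows:
        #   name | reading | unit | status | thresholds...
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 3 or fields[1] in ("na", ""):
            continue
        try:
            readings[fields[0]] = (float(fields[1]), fields[2])
        except ValueError:
            continue  # skip discrete (non-numeric) sensors
    return readings

if __name__ == "__main__":
    # Placeholder BMC address and credentials -- substitute your own.
    for name, (value, unit) in sorted(read_ipmi_sensors("10.0.0.42", "admin", "secret").items()):
        print(f"{name:24s} {value:10.2f} {unit}")
```

The same pattern applies to the other sources mentioned above; once the readings are a query away, the interesting work is what the software does with them.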

And even in cases where it appears there's no way to get data other than installing a meter, virtual meters use indicators like CPU utilization to estimate actual power consumption and environmental conditions. These are indicators you monitor anyway to ensure application availability.
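To make the virtual-meter idea concrete, here's a minimal sketch of the simplest form of such a model: a linear interpolation between a server's idle and peak draw, driven by CPU utilization. The idle and peak wattages below are illustrative assumptions, not figures from any particular product, and production tools refine the model with additional inputs.

```python
def estimate_power_watts(cpu_utilization, idle_watts=180.0, peak_watts=320.0):
    """Estimate a server's power draw from CPU utilization (0.0-1.0) by
    interpolating between its idle and peak consumption. Real virtual meters
    add more inputs (memory, disk, fan state), but the principle is the same:
    metrics you already collect can stand in for a dedicated power meter."""
    cpu_utilization = min(max(cpu_utilization, 0.0), 1.0)
    return idle_watts + (peak_watts - idle_watts) * cpu_utilization

# Example: a host averaging 35% CPU utilization over the last interval
print(f"{estimate_power_watts(0.35):.0f} W")  # ~229 W with the assumed idle/peak figures
```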

This is good news: it means you already have all the meters you need! These existing measurements are a lot cheaper, less disruptive to collect, and demand fewer resources from your staff. Using your existing data is a win all around!

With all these meters already in the data center, you can probably understand why I strongly disagreed with statements in a recent article by Synapsense’s CEO, Peter Van Deventer. He says Synapsense takes responsibility for making sure the sensors, networking, and software platform work as a complete package, adding, “If you just provide the sensors, or you just provide the software, you’re going to fail.”

We’ve seen this story before: the proprietary packaged solution with no open APIs or standards. This is vendor lock-in at its finest. Sensor companies like RFCode have supported an active developer community with open APIs so that their data can be used in a wide array of solutions. HP has supported and documented iLO as an open way to access its environmental data, and Dell and Intel have openly supported IPMI. Even the old-school building management systems from Johnson Controls and Siemens now support the open, industry-standard OPC. So why is it that, when I’ve repeatedly asked Synapsense for their data interfaces and API references, the information is unavailable?
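For contrast, here's how little code it takes to consume data from one of those open interfaces. This sketch reads a single temperature point from a building management system over OPC using the open-source OpenOPC library; the OPC server name and tag are placeholders that would come from your BMS vendor's documentation, and OPC DA access like this assumes a Windows host with the vendor's OPC server installed.

```python
# A minimal sketch of reading a building-management-system point over OPC
# with the open-source OpenOPC library. The server ProgID and tag name below
# are placeholders; your BMS documentation lists the real ones.
import OpenOPC

opc = OpenOPC.client()
opc.connect("Matrikon.OPC.Simulation")                        # placeholder OPC DA server
value, quality, timestamp = opc.read("CRAC01.SupplyAirTemp")  # placeholder tag
print(f"supply air temperature: {value} (quality={quality}, at {timestamp})")
opc.close()
```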

Mr. Van Deventer goes on to say that customers are ready for this closed, black-box system to automate their infrastructure. Let’s say you run a large enterprise data center. You’ve probably deployed building automation tools from Johnson Controls and maybe even runtime automation from BMC. While a startup may come along and offer a new automation technology, the reality is that business operations prevent automation from being adopted.

Not only is liability an issue, but IT and facility managers are not yet ready to give up that control. Trust me, I wish they would! In meetings with some of the largest banks in the world today, I asked each of them whether they would automate their facility or IT infrastructure. The across-the-board response was, “We don’t trust any vendor to automate our infrastructure; automation is going to take a really long time.”


Joe Polastre is co-founder and chief technology officer at Sentilla. Joe is responsible for defining and implementing the company’s global technology and product strategy. Winner of the 2009 Silicon Valley/San Jose Business Journal 40 Under 40 award and named one of BusinessWeek’s Best Young Tech Entrepreneurs, Joe often speaks about energy management and the role of physical computing - where information from the physical world is used to make energy efficiency decisions. Before joining Sentilla, Joe held software development and product manager positions with IBM, Microsoft, and Intel. Joe is active in numerous organisations, including The Green Grid, US Green Building Council, ACM, and IEEE. Joe holds M.S. and Ph.D. degrees in Computer Science from University of California, Berkeley, and a B.S. in Computer Science from Cornell University.