The Great Data Migration

Big Data Migration

The IT industry is undergoing a data revolution. Yet while organisations have already invested billions in modernising their IT systems, many of these projects have fallen flat. Most often, the biggest problem for these years-long, million-pound projects has been that the data itself has become too cumbersome and expensive to manage in legacy systems. The three most notable ways the legacy approach to data holds these projects back are as follows:

1. Operational Data

Operational data is tightly coupled to database and storage hardware, because that data primarily lives in relational databases. Moving an Oracle database from HP-UX running on NetApp storage in a legacy data centre to x86 Linux running on flash storage in a private cloud can take multiple weeks for a single database. Storage array-based replication and snapshotting tools only work within the same array or the same brand of product, so migration teams are left without an efficient way to move the data.

2. Migration By Truck

Existing business-critical applications can’t go offline during the migration process, so migration teams inherently have only small windows in which to convert, test and certify the new architecture. The sheer size of the data and the need to keep these applications running have given rise to the ineffectual practice of “migration by truck”: literally moving servers physically to new target locations.

3. Systems Retirement

Data is often required for regulatory purposes, which makes it difficult to retire old systems: they must be kept alive if only to have the data (and related applications) available in case of an audit, for example. As a result, the expected cost savings of modernisation projects simply never appear, as the old architecture continues to consume a significant share of new budgets.

But organisations can’t just throw away this data and start fresh, and staying entrenched with legacy systems costs too much in the long run. There needs to be a better way to migrate data than this error-prone slog.

Public and private clouds, services, and software-defined everything are taking over IT departments, and many businesses are already reconstructing their infrastructure to take full advantage of these tools. However, very few IT teams have thought about re-architecting the data itself so that it can be updated just as easily alongside the rest of the new IT environment.

To do this, organisations need to unlock their data from hardware, location and process. Data shouldn’t be held back by legacy systems and operations, nor by the physical storage, file systems or servers that contain it.

The operational data many businesses contend with has often grown so large that it ends up locked within these systems at a single geographical location; the fact that it sometimes has to be moved by physical truck should make the absurdity painfully obvious. The same organisations find that decades-old applications can’t be shut down, yet their data update and handling processes prevent copies from moving to the cloud.

New techniques are now available that help businesses unlock their data, enabling them to update remote copies and make data available anywhere and everywhere it’s needed. Techniques for creating agile data that can be easily migrated to a modern architecture include virtualisation of data blocks, software-defined storage management, forever-incremental snapshotting and backup, and automatic format conversion.
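To make the idea of forever-incremental snapshotting a little more concrete, the sketch below is a simplified illustration rather than any vendor's implementation: it fingerprints a data file block by block and then captures only the blocks that changed since the previous snapshot, so each subsequent backup stays small and remote copies can be refreshed far more often than a full copy would allow. The file name and block size are arbitrary assumptions made for the example.

```python
import hashlib
from pathlib import Path

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; an arbitrary choice for illustration


def block_hashes(path: Path) -> list[str]:
    """Split a file into fixed-size blocks and fingerprint each one."""
    hashes = []
    with path.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes


def incremental_snapshot(path: Path, previous_hashes: list[str]) -> dict[int, bytes]:
    """Return only the blocks that changed since the previous snapshot.

    The mapping of block index -> block bytes is the "increment"; a full
    restore replays the base image plus every increment in order.
    """
    changed = {}
    with path.open("rb") as f:
        index = 0
        while chunk := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                changed[index] = chunk
            index += 1
    return changed


if __name__ == "__main__":
    datafile = Path("datafile.dbf")  # hypothetical data file name
    baseline = block_hashes(datafile)
    # ... the application continues to modify the file ...
    delta = incremental_snapshot(datafile, baseline)
    print(f"{len(delta)} of {len(baseline)} blocks changed since the last snapshot")
```

Because only changed blocks ever cross the wire after the initial copy, the same mechanism that protects the data can also keep a remote or cloud copy continuously close to current, which is the property that makes migration far less disruptive.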

A few years ago these techniques would have been difficult; today they are not only possible but already in use at many of the largest private and governmental organisations in the world. Once the data is unlocked, new systems can be built in which data is copied and converted as needed, improving day-to-day operations without disrupting existing systems.

Most IT leaders already understand the agility and mobility benefits of virtualising servers. Now, the time has come to free data and even entire applications from the things that weigh them down, so that organisations can adopt practices that will support the next decade of corporate computing.

Iain Chidgey

Iain Chidgey is the EMEA VP and General Manager of Delphix, a leading global provider of agile data management platforms to enterprises worldwide. Prior to joining Delphix, Iain was VP and General Manager EMEA for ArcSight, a leading global provider of compliance and security management solutions. Iain was responsible for setting up ArcSight in EMEA in 2004 and growing it into a multimillion-dollar business, a run that culminated in a successful IPO and a subsequent acquisition by Hewlett Packard in 2010 for over $1.5 billion. In his 20 years in the IT industry, Iain has held senior sales, consulting and technical roles at various high-growth software organisations, including Portal and Oracle, in both EMEA and the US.