In the early days of computing, equipment was expensive and processing power was a precious commodity. Before the emergence of viruses and other modern security threats, it made perfect sense to run several different programs on the same piece of equipment.
Over time, programs became more complex, hardware became cheaper, systems became more difficult to administer, and large groups of networked systems became more easily accessible. This created vulnerabilities which could be exploited by curious enthusiasts or even malicious criminals. In order to protect the critical data produced, shared and stored on these systems, new security procedures and best practices had to be developed.
One such best practice was the “one machine, one service” philosophy. In essence, this approach was based on the idea that if an application was exploited, a hacker could use that foothold to compromise everything else running on the same machine.
Or, in more practical terms, if a single application were to crash the OS, it would take every other application on that system offline with it.
In order to prevent this, IT admins would invest heavily in purchasing a physical box for each application. This way, every system could exist and operate in an environment which was completely isolated from other systems. In the event of a problem, the damage would be confined to a very small footprint.
- The Exchange server runs Exchange and nothing else
- The SharePoint server runs SharePoint and nothing else
- The Oracle server runs the database and nothing else
- The Apache server runs web hosting and nothing else
Of course, there were certain exceptions to this rule. Although it was possible to maintain a backup server which would remotely back up other machines, there were many benefits and features that came from having backup software installed directly on the target server.
For this reason, backup was often considered the exception to the “one machine, one service” rule.
Today, data centres are increasingly moving towards virtualisation to capture the cost savings and efficiencies that come from consolidation. Virtualised systems still respect the “one machine, one service” rule, but the machines are virtual entities rather than physical boxes. Despite sharing common hardware, they remain isolated in every other respect.
One of the benefits of virtualisation is that new machines can be quickly provisioned and deployed without much thought about the consequence.
In the past, an IT administrator might have needed several months and buy-in from other departments before purchasing and installing a new server. Today, new servers can be launched quickly without incurring significant costs or risks. And if the project fails, it’s a simple matter to delete the server.
One undesired consequence of this has been virtual server sprawl. Because the rate at which new servers are implemented has accelerated, the backup, high-availability and disaster recovery arrangements for these systems have become much more complicated. It’s no longer practical to take on the burden of installing, configuring and managing a new backup application every time a virtual server is added.
For simplified management, we’re now seeing the “one machine, one service” approach extended to the backup process itself. Modern VM backup offerings incorporate many features that were previously available only from backup applications installed in the guest, so administrators are increasingly moving towards “hypervisor-level” backups of their virtualised systems.
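The consolidation described above can be sketched as a single job that enumerates guests from the hypervisor and protects each one, rather than one installed backup application per guest. This is a minimal illustration only; the `Hypervisor` and `VirtualMachine` classes below are hypothetical stand-ins for what a real hypervisor API (such as libvirt or a vendor SDK) would expose, not any actual product’s interface.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    # Hypothetical guest record: in a real environment this would come
    # from the hypervisor's inventory, not be constructed by hand.
    name: str

@dataclass
class Hypervisor:
    vms: list = field(default_factory=list)

    def provision(self, name: str) -> VirtualMachine:
        # New VMs simply join the inventory; there is no per-guest
        # backup agent to install or configure.
        vm = VirtualMachine(name)
        self.vms.append(vm)
        return vm

def hypervisor_level_backup(host: Hypervisor) -> list:
    # One backup job enumerates every VM from the hypervisor and
    # snapshots each, instead of running one agent inside each guest.
    return [f"snapshot:{vm.name}" for vm in host.vms]

host = Hypervisor()
for name in ("exchange", "sharepoint", "oracle", "apache"):
    host.provision(name)

jobs = hypervisor_level_backup(host)
```

The point of the sketch is that adding a fifth server changes nothing about the backup configuration: the next run of the single job picks it up automatically, which is exactly the sprawl problem the hypervisor-level approach addresses.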
Simplicity is particularly important for security-related processes such as backup, because added complexity means added risk. In highly virtualised environments, constant change can lead to human error and backup failures down the road. In this context, backing up at the hypervisor level not only makes life easier, it also delivers peace of mind.