There is a story that has been doing the rounds in the mainstream media of late that is really beginning to annoy me. The story is about an IT administrator who took out 88 virtual servers in a revenge attack against a Japanese pharmaceutical company that had terminated his contract as a cost-cutting measure.
The thing that is so frustrating is that all of the focus seems to push the idea that virtualisation is inherently dangerous. If you take a few minutes to read any of the articles, however, it becomes clear that the offender was so knowledgeable about Shionogi’s computer network that he was rehired after an initial downsizing a year earlier.
As an administrator, he more than likely had access to a range of commonly used passwords and knew more about the systems and network than anybody else. Armed with that sort of knowledge, it wouldn’t matter whether the servers were virtual or not. When you discover that the attack froze Shionogi’s operations for a number of days, costing the company hundreds of thousands of dollars, it becomes even clearer that Shionogi’s infrastructure was poorly set up to begin with.
Virtualisation is meant to massively reduce the impact of a disaster like this, and if properly implemented can significantly reduce downtime. On most enterprise-level virtualisation platforms, the operating system and application environment should run off an essentially ‘read-only’ disk image. All changing data, such as database content, user data and directory information, is usually stored on NAS or a SAN, or at least on a separate image. What this means is that disk images can be replicated and kept as master copies, ready to deploy in minutes.
Since the data that actually changes is usually stored in an environment that provides inherent redundancy and can be backed up regularly, taking out a virtual server should not, in itself, impact operations for longer than it takes to reload an image on your virtualisation platform.
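The golden-image recovery pattern described above can be sketched in a few lines. This is a toy model, not anything from the Shionogi incident: the paths and image name are illustrative, and in a real platform the ‘copy’ would be cloning a qcow2/VMDK template rather than copying a plain file.

```python
# Toy sketch of the read-only golden-image pattern: the OS/application image
# is kept as a read-only master copy, so recovering a deleted virtual server
# is a fast copy plus reattaching the external data, not a rebuild.
# All paths and names here are illustrative.
import hashlib
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
golden = root / "golden"    # read-only master images live here
running = root / "running"  # deployed virtual servers run from here
golden.mkdir()
running.mkdir()

# The "golden" disk image (stand-in for a real qcow2/VMDK file).
image = golden / "web01.img"
image.write_bytes(b"os+app image v1")
image.chmod(0o444)  # master images stay read-only

# An attacker deletes the running server; redeployment is just a copy.
shutil.copy(image, running / "web01.img")

def sha256(p: Path) -> str:
    """Checksum a file so we can verify the redeployed image is intact."""
    return hashlib.sha256(p.read_bytes()).hexdigest()

# The redeployed server's image is byte-for-byte identical to the master.
print(sha256(image) == sha256(running / "web01.img"))  # → True
```

The changing data (databases, user files) would live on the NAS/SAN and simply be remounted by the redeployed image, which is why the whole operation takes minutes rather than days.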
Sure, things can get a little more complicated. Your virtualisation platform may have some configuration settings that are specific to virtual hosts, but these should be easy enough to back up as well.
In many ways virtualisation sets about making better use of hardware. Currently, your average home PC is beefy enough to run two or more operating systems at once. A properly managed virtualisation platform will allow you to intelligently share resources between multiple virtual machines so that you can maximise usage of your hardware and reduce the number of physical systems that need to be managed.
It also reduces costs, since you will be using less hardware, less electricity, less cooling, fewer networking facilities and less physical rack space. You run a greener, cheaper and more easily managed environment than you would otherwise.
Virtualisation also aims to reduce administrative burden. Most platforms come with a load of administration tools and facilities that make it possible to manage hundreds of virtual servers from a single point of access. This is the most likely reason that the media is on a bit of a run with this story. The idea is that because you can perform administrative tasks quickly and easily from a single point of access, virtualisation as a whole must be a security hazard.
This myth is exacerbated by the fact that you can provide remote access to your virtualisation platform. The fact is that a single point of access for administering all of your systems actually makes it easier to control who can reach them. By simply disabling an administrator account, that administrator no longer has access to any of the systems within your infrastructure.
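That last point is the whole argument in miniature, so here is a toy model of it. Everything below is hypothetical (the account and server names are made up); it simply shows why one central directory makes revocation a single operation rather than a per-server chore.

```python
# Toy model of centralised authentication: every server consults one
# directory, so disabling an account there revokes access everywhere.
# All account and server names are illustrative.
directory = {
    "alice": {"enabled": True},
    "departing_admin": {"enabled": True},
}
servers = ["web01", "db01", "hypervisor03"]

def can_log_in(user: str, server: str) -> bool:
    """Every server defers to the central directory for authentication."""
    account = directory.get(user)
    return account is not None and account["enabled"]

# Offboarding is one change, in one place...
directory["departing_admin"]["enabled"] = False

# ...and it takes effect on every system at once.
print([can_log_in("departing_admin", s) for s in servers])  # → [False, False, False]
print(can_log_in("alice", "web01"))                         # → True
```

Contrast this with per-server local accounts, where a missed machine during offboarding leaves a live credential behind, which is exactly the kind of gap the Shionogi attacker exploited.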
The sad thing about stories like this is that they target the wrong culprit. Okay, the guilty guy is named and shamed, but the spotlight hangs over the technology as if it were the real problem behind the attack. The real story is that if you provide low-level access to your systems to anyone at all, you need to take responsibility for the consequences. The most obvious measure, and from my experience this is frequently overlooked, is the revocation of system access for any user who no longer requires it.
I know that I still have access to systems at places I worked as long as ten years ago, and I can promise that none of those are virtualised. I wrote an article recently highlighting how security is always put at the bottom of the pile when it comes to business interests, and this story just shows how badly that can hurt a business.
So, what does the Shionogi incident teach us about virtualisation? I would say that it teaches us next to nothing. The incident would have happened regardless of whether the systems were virtualised or not. There is one point to consider, though: virtualisation platforms ease so many tasks that they make any administrative activity, good or bad, extremely efficient.
That means that the antihero of our story could do more damage more quickly than if he had been acting on individual physical servers; by the same token, though, it should have been equally simple to get all of those servers back up and running. What the whole story really tells us is that companies often have poorly set up infrastructure, weak security policies and limited auditing. It also tells us that most of the mainstream media loves to make a mountain out of a molehill, but we already knew that…