Cloud computing for disaster-prone areas


With the very real threat of a nuclear meltdown following the devastating earthquake and tsunami on 11 March, the world’s focus has shifted to Japan as the industrial giant braces itself for its worst disaster in history. From an economic perspective, stabilisation will depend on the recovery process; if that process is not implemented soon enough, it could impede the country’s future growth.

Considering that Japan houses some of the world’s largest technology companies and is the third-largest national economy, with a labour force of around 66 million workers, recovery strategies are, of course, already in place. Most of the cloud computing data centres that have sprouted around Tokyo are fortunately intact and operational, allowing for an efficient data recovery process – or, as a very last resort, for data transfer to an offshore cloud, given that the stricken Fukushima nuclear plant, some 250km from Tokyo, is in danger of a meltdown.

In light of the number of disasters we’ve seen over the last few months, the severe earthquake in Christchurch, New Zealand had a far less severe impact on business infrastructure, with effective recovery systems enabling residents to map out functioning ATMs, running water and closed-off roads – all within days of the 6.3-magnitude earthquake. The reason for such an effective recovery and rebuild strategy lies with cloud computing: the majority of businesses that had employed cloud services fared the best in terms of data recovery.

Following the earthquake and an analysis of the country’s technological infrastructure, research commissioned by Microsoft named New Zealand the most ‘cloud-savvy’ country in the Asia-Pacific region, with 81 percent of its companies viewing cloud computing as a high priority and with funding in place for the implementation of cloud-based services.

But the scope for disasters should not be limited to natural catastrophes, terrorist attacks or political turmoil. Bankruptcy, litigation, insolvency or other corporate legal troubles can have similarly adverse effects on business growth, particularly if the cause lies with a significant vendor. The recovery process that follows these ‘disasters’ can be made simpler and more manageable by implementing cloud services.

So, what constitutes an acceptable response to a real disaster? Or, more precisely, how can an organisation go about implementing cloud?

The answer, according to a report on disaster recovery in the cloud by Online Tech president and CEO Mike Klein, depends on two things: your recovery time objective – how quickly you need to recover from a disaster – and your budget. “The two are kind of tied together. If, for example, you need to recover your data in a seven-day time frame, it’s going to be a lot more cost-effective than if you need to make sure that you can recover your data and your infrastructure in a five-minute time frame.”
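Klein’s trade-off can be expressed as a simple decision rule: the shorter the recovery time objective, the more aggressive (and expensive) the replication strategy must be. The sketch below is purely illustrative – the tiers and relative cost multipliers are assumptions for demonstration, not Online Tech’s figures:

```python
# Illustrative only: map a recovery time objective (RTO) to a
# hypothetical strategy tier and a made-up relative cost multiplier.
def recovery_strategy(rto_hours: float) -> tuple[str, float]:
    """Return (strategy, relative cost) for a given RTO in hours."""
    if rto_hours <= 0.1:   # roughly a five-minute window
        return ("hot standby (continuous replication)", 10.0)
    if rto_hours <= 24:    # recovery within a day
        return ("warm standby (hourly snapshots)", 3.0)
    # a seven-day window, for instance, lands in the cheapest tier
    return ("cold backup (daily/weekly offsite copies)", 1.0)

strategy, relative_cost = recovery_strategy(7 * 24)  # seven-day time frame
print(strategy, relative_cost)
```

The exact tiers would differ per provider; the point is only that cost rises sharply as the acceptable recovery window shrinks.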

Continuing his report, Klein highlights the cloud strategy Online Tech employed in its own data recovery process. The organisation moved its entire IT infrastructure – 23 servers – into a private cloud at the end of 2009, converting them into virtual servers while maintaining two physical servers, or physical hosts, as well as a SAN environment.

Geographical data redundancy is also a key facet when migrating to the cloud. Relying on a single geographical location is a mistake – as is reliance on a single internet service provider, even if that ISP already maintains strong, scalable cloud infrastructure. A Tokyo-based organisation would be wise to establish this geographical redundancy outside of the Asia-Pacific rim, for example, opting for cloud-based hosting services in Europe or the US, where such scalable providers allow for elasticity in terms of budget and size.
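The principle is simply that every backup should land in more than one independent location, and that each copy should be verifiable. A minimal sketch, using local directories to stand in for hypothetical regions (a real deployment would target storage in, say, Europe and the US):

```python
# Sketch of geographic redundancy: write the same backup to several
# independent locations and checksum each copy. Directories stand in
# for cloud regions purely for illustration.
import hashlib
import pathlib
import tempfile

def replicate(payload: bytes, regions: list[pathlib.Path]) -> list[str]:
    """Write payload to every region; return a SHA-256 digest per copy."""
    digests = []
    for region in regions:
        region.mkdir(parents=True, exist_ok=True)
        target = region / "backup.dat"
        target.write_bytes(payload)
        digests.append(hashlib.sha256(target.read_bytes()).hexdigest())
    return digests

root = pathlib.Path(tempfile.mkdtemp())
checksums = replicate(b"production data", [root / "eu-west", root / "us-east"])
assert len(set(checksums)) == 1  # every region holds an identical copy
```

Matching checksums across regions confirm that a disaster taking out one location leaves an intact, identical copy elsewhere.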

Another key challenge is the human factor. Irrespective of location, time or budget constraints, cloud data centres are still managed primarily by people, many of whom work within an organisational structure in which the administrator of a particular server farm or data cluster maintains the relevant passwords and access keys. Should a disaster occur that jeopardises human life, the data recovery process could slow down or stop altogether. Regular synchronisation of the production environment across the cloud is the obvious solution: as long as your data is redundantly backed up and online, it is fairly secure.
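The point of regular synchronisation is that an automated job, not a single administrator, keeps the replica current. A minimal sketch, assuming a one-file “production” store and a local replica for illustration:

```python
# Sketch of automated synchronisation: a scheduled job refreshes the
# replica whenever it drifts from production, so recovery never hinges
# on one person being reachable. Paths are illustrative assumptions.
import filecmp
import pathlib
import shutil
import tempfile

def sync_if_stale(production: pathlib.Path, replica: pathlib.Path) -> bool:
    """Copy production over the replica when they differ; True if copied."""
    if replica.exists() and filecmp.cmp(production, replica, shallow=False):
        return False                   # already in sync, nothing to do
    shutil.copy2(production, replica)  # refresh the stale or missing replica
    return True

root = pathlib.Path(tempfile.mkdtemp())
prod = root / "prod.db"
repl = root / "replica.db"
prod.write_text("state v1")
assert sync_if_stale(prod, repl) is True    # first run copies everything
assert sync_if_stale(prod, repl) is False   # now in sync
prod.write_text("state v2")
assert sync_if_stale(prod, repl) is True    # drift detected, re-copied
```

In practice the same check-and-copy loop would run on a schedule against remote storage rather than local files.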

Should a nuclear meltdown occur in Japan, it is ironic that data recovery would be an altogether more streamlined process than ‘human recovery’, where the emphasis should of course be on saving lives rather than saving data. But this is the somewhat morbid reality of the digital age and the cloud: transferring, backing up and saving data to remote locations or physical hardware may not take precedence over saving human lives, but the process is simpler.


