Rod Harrison, CTO of StorCentric, parent company of Nexsan, shares some key tips to make sure your disaster recovery plan is watertight.
Today’s businesses are more at risk of cyberattacks and data loss than ever before, with the amount of data being produced constantly on the rise and the abilities of cybercriminals becoming more refined by the day. Having a disaster recovery (DR) plan in place is one of the most important ways that businesses can help to protect themselves against these risks. For those that are yet to implement one – or aren’t sure what to do with the one they have – here are some key steps to making sure your plan is watertight.
Update. Test. Repeat.
Although it sounds like the obvious starting point, making a DR plan and keeping it updated is critical. While most businesses have some form of plan in place, many fail to re-evaluate it regularly – leaving them vulnerable should disaster strike.
Some data sets will need to be recovered more quickly than others, so the best DR strategies determine which areas of the business must be recovered first. Although everyone’s ideal objective is to recover immediately with zero downtime, realistically this is unlikely to be possible. Be smart about your plan and ensure that the business-critical areas are brought back online first, to help reduce the financial impact of downtime.
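As a minimal sketch of this prioritisation, the systems and tier numbers below are purely illustrative – the point is simply that an explicit, agreed ordering should exist before disaster strikes:

```python
# Hypothetical recovery tiers: a lower tier number means the system
# must come back online sooner. Names and tiers are examples only.
systems = [
    {"name": "payments-api",   "tier": 1},
    {"name": "marketing-site", "tier": 3},
    {"name": "erp",            "tier": 1},
    {"name": "reporting",      "tier": 2},
]

def recovery_order(systems):
    """Order systems so business-critical tiers are recovered first."""
    return [s["name"] for s in sorted(systems, key=lambda s: s["tier"])]

print(recovery_order(systems))
# → ['payments-api', 'erp', 'reporting', 'marketing-site']
```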
But creating a DR plan is not enough. Businesses need to test it frequently to ensure that when disaster strikes, they are ready to jump into action. According to a Forrester survey, less than 20% of organisations do a full DR test more than once per year and 20% never do a DR test at all.
It’s imperative for business continuity to not only know if your DR plan will work, but how it will work, who is involved, when you will failover and whether you can meet your recovery time objective (RTO) and recovery point objective (RPO).
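One way to make RTO and RPO concrete is to record, for each DR test, how long recovery actually took and how much data was actually lost, then compare those figures against the agreed targets. The sketch below assumes hypothetical systems and figures purely for illustration:

```python
from datetime import timedelta

# Illustrative recovery objectives per system (not from any real product).
objectives = {
    "orders-db":  {"rto": timedelta(hours=1), "rpo": timedelta(minutes=15)},
    "file-share": {"rto": timedelta(hours=8), "rpo": timedelta(hours=4)},
}

# Measured outcome of a hypothetical DR test run.
test_results = {
    "orders-db":  {"time_to_recover": timedelta(minutes=50),
                   "data_loss_window": timedelta(minutes=20)},
    "file-share": {"time_to_recover": timedelta(hours=2),
                   "data_loss_window": timedelta(hours=1)},
}

def check_objectives(objectives, results):
    """Return (system, metric) pairs that missed their target."""
    misses = []
    for system, target in objectives.items():
        result = results[system]
        if result["time_to_recover"] > target["rto"]:
            misses.append((system, "RTO"))
        if result["data_loss_window"] > target["rpo"]:
            misses.append((system, "RPO"))
    return misses

print(check_objectives(objectives, test_results))
# → [('orders-db', 'RPO')]  (recovered in time, but lost 20 min of data
#    against a 15-minute RPO target)
```

A report like this after every test turns “do we meet our objectives?” from a guess into a measured answer.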
Businesses should select replication technology appropriate to their objectives. Storage arrays typically offer replication features, sometimes included at no extra cost. As part of this, businesses should automate failback while paying close attention to data integrity by running audits and tests.
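A data-integrity audit can be as simple as comparing checksums between the primary and the replica. This is a generic sketch, not tied to any particular array or replication product:

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit_replica(primary_dir, replica_dir):
    """Compare every file under the primary against its replica copy.

    Returns the relative paths that are missing or differ on the replica.
    """
    primary = Path(primary_dir)
    replica = Path(replica_dir)
    mismatches = []
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        dst = replica / src.relative_to(primary)
        if not dst.is_file() or file_digest(src) != file_digest(dst):
            mismatches.append(str(src.relative_to(primary)))
    return sorted(mismatches)
```

Run on a schedule, an audit like this catches silent replication drift long before a disaster forces you to depend on the replica.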
Then, make sure you know when to fail back. More often than not, a disaster doesn’t have a clear end point in IT terms – systems come back online gradually. It’s therefore important to have criteria in place for deciding when to start the failback process.
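Those criteria work best written down as an explicit gate that must be wholly satisfied before failback begins. The checklist items below are assumptions chosen for illustration; each organisation would define its own:

```python
# Hypothetical failback gate: every criterion must hold before failback
# starts. The names and values here are examples, not a standard.
FAILBACK_CRITERIA = {
    "primary_site_power_stable":  True,
    "network_links_restored":     True,
    "storage_arrays_healthy":     True,
    "replication_lag_within_rpo": False,  # replica still catching up
    "change_freeze_agreed":       True,
}

def failback_blockers(criteria):
    """List the criteria that are not yet met; empty means go."""
    return [name for name, met in criteria.items() if not met]

print(failback_blockers(FAILBACK_CRITERIA))
# → ['replication_lag_within_rpo']
```

An empty blocker list is then the unambiguous signal to begin failback, rather than a judgement call made under pressure.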
Knowledge is everything
Part of developing an effective disaster recovery plan is understanding the data you possess and knowing the difference between unstructured and structured data. Unstructured data is information that typically does not fall into an easy and straightforward pattern and may include emails, typed documents and videos, for example.
Structured data, on the other hand, mostly refers to information that is organised and easy to search and navigate. However, it often has a complex replication scheme in which each database must be handled individually, and its recovery time requirements are often strict. All of this makes structured data complicated to recover – and the larger the environment, the harder recovery becomes.
But if organisations can take the unstructured data out of the equation and use a different paradigm – for example, an archive device that replicates data between sites – they can simplify the overall challenge of DR, as the unstructured data will simply take care of itself.
However, it is not only the data itself that must be understood but also the laws that govern it. In a perfect world, DR sites would be hundreds or thousands of miles apart, but increasingly the movement of personally identifiable information (PII) from one region to another is being regulated.
Holding data in multiple regions increases the civil risk of legal discovery and the penalties for exposure. Therefore, take the time to evaluate the laws in each region and identify any additional requirements. Remember, a DR site should have at least the same level of security as a primary site, if not higher.
Don’t think backups are everything
One of the biggest issues in DR is relying on backup alone – if your only DR solution is backing up a server to the cloud, for example, then recovery time could be a problem. Many backup packages claim the ability to run virtual machines directly from the backup image, and this works well for a small number of VMs. However, if you try to spin up the entire environment, the typical targets of backup appliances don’t have the I/O performance necessary to run multiple VMs. This emphasises the importance of testing once again, as organisations will be able to discover any limitations of the technology in use. Implementing a replicated archive for the static data would reduce the overall time to recovery, allowing businesses to get back up and running as quickly as possible.
By ensuring that your DR plan is effective and up to standard now, your business will have a much better chance of both avoiding and surviving any data disasters that may occur in the future.
And while these steps may take some time to get right, ultimately having these measures in place and keeping them top of mind will provide your business with a better understanding of the data it holds, and the value that it can reap from this.