Data Protection Guidelines: 5 Steps for Business Continuity


WRITTEN BY JON TOIGO

Along with managing storage performance and capacity, data storage administrators are usually called upon to devise ways to protect the data placed in their charge. In the past, this was just a matter of making a copy of output data to one or more storage targets: to another array on the raised floor (to guard against a localized equipment failure), to removable media subsequently transported off-site, or to a stand of disk at the other end of a wide-area network (WAN) connection (to guard against an interruption event with a broader geographical footprint). However, as the volume of data generated by businesses has grown over time, and as the number of arrays used to store that data has proliferated, the simple matter of establishing and following a set of data protection guidelines has become much more complex, challenging, and costly.

Data protection efficiency is a gauge of how well administrators are coping with the protection of burgeoning data: assigning appropriate data protection methods to bits, testing and validating the soundness of their data protection techniques, and delivering "defense in depth" in a budget-savvy way. Here are five guidelines for achieving higher data protection efficiency outcomes.


1. Drop the Anti-Tape Mindset

You would have to be living under a rock not to have heard the "tape is dead" pitch from array vendors. Disk array makers have waged a massive marketing campaign since the late 1990s to promote disk-centric tape alternatives, such as deduplicating virtual tape appliances, and it seems to have succeeded in shaping buyer perceptions and strategies. Witness the latest reports from industry analysts: of the 20+ exabytes of external disk storage deployed by 2011, almost half the capacity was being used to make copies of the other half. In many companies, local-area network (LAN)-based mirroring has been augmented by WAN-based data replication, an entirely disk-based data protection methodology touted by array makers as the meme for data protection in the 21st century.

While array-to-array mirroring and replication may be appropriate as a data protection method for some data, it's by no means a sensible strategy for all data. This observation derives from a fundamental point: data inherits its criticality from the business process whose applications and end users it supports. Not all business processes require "always on" failover-based recovery strategies, which tend to be the most expensive approach to recovery. Those that do may be well served by WAN-based mirroring, but even this isn't a full solution to the problem of data protection.

By contrast, tape backup provides an efficient means of protecting the data of applications that don't require "always on" services. Restoring data from tape may require more time than simply re-pointing applications to an alternative disk-based store, but it's substantially less expensive and in many cases more reliable. Smart data storage administrators perform tape backups even of mirrored arrays, in simple recognition that disk failure rates are estimated at between 7% and 14% annually. Data protection usually requires a mix of technologies.
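The criticality-to-method mapping can be sketched in a few lines of code. The following is a minimal illustration, not anything prescribed by this article: the RTO thresholds and the particular protection mixes are hypothetical assumptions, chosen only to show how recovery requirements might drive the choice of technologies.

```python
# Hypothetical sketch: derive a layered protection mix from a business
# process's recovery time objective (RTO). Thresholds and method
# choices below are illustrative assumptions, not prescriptions.

def protection_mix(rto_hours: float) -> list[str]:
    """Map a recovery time objective to a mix of protection methods."""
    if rto_hours < 1:
        # "Always on" processes: failover-class protection, with tape
        # as a last-resort copy (mirrored disk can fail too).
        return ["synchronous mirroring", "WAN replication", "tape backup"]
    if rto_hours < 24:
        # Important but not continuous: periodic replication suffices.
        return ["asynchronous WAN replication", "tape backup"]
    # Everything else: tape alone is the cost-efficient choice.
    return ["tape backup"]

for process, rto in [("order entry", 0.5), ("reporting", 8.0), ("archives", 72.0)]:
    print(f"{process}: {protection_mix(rto)}")
```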


2. Understand WAN-Based Replication

WAN-based disk-to-disk replication is only a valid strategy when it doesn't create data deltas that violate recovery time objectives (RTOs) and recovery point objectives (RPOs). A delta, or difference, between the state of data at the production data center and its mirror at a recovery data center results whenever data traverses shared network pipes over distances greater than 18 kilometers.

Part of this has to do with distance-induced latency: how fast you can push data across a WAN connection. It's been estimated that for every 100 km (62 miles) data moves over a SONET link, the remote array lags behind the primary by approximately 12 SCSI operations. That's simply a speed-of-light issue, and we can't argue with Einstein.
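The arithmetic behind that figure is easy to reproduce. In the sketch below, the roughly 5 microseconds per kilometer for light in optical fiber is standard physics; the assumed sustained write rate of 12,000 operations per second is a hypothetical value chosen to show how a 100 km round trip translates into about a dozen in-flight operations.

```python
# Back-of-envelope check on distance-induced replication lag.
# Light in optical fiber travels at roughly 200,000 km/s (~5 us/km);
# the 12,000 ops/s write rate is an assumption for illustration.

FIBER_SPEED_KM_PER_S = 200_000
distance_km = 100
write_ops_per_second = 12_000  # assumed sustained write rate

# A synchronous write isn't acknowledged until it makes the round trip.
round_trip_s = 2 * distance_km / FIBER_SPEED_KM_PER_S  # = 0.001 s

# Operations issued during that round trip are "in flight" -- the
# amount by which the remote copy trails the primary.
lag_ops = write_ops_per_second * round_trip_s
print(f"Round trip: {round_trip_s * 1e3:.1f} ms, lag: {lag_ops:.0f} operations")
# -> Round trip: 1.0 ms, lag: 12 operations
```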


Adding to the deltas created by distance latency are the effects of "jitter": delays that result from using a shared network service. Depending on the locations of your primary and backup facilities, the impact of jitter can be minimal or profound. Despite the nominal or rated speeds of WAN pipes, one Sacramento, Calif.-based firm seeking to replicate data to another site in the Silicon Valley area reported unpredictable transfer times ranging from a few seconds to several hours, a function of routing through a network with nine different carriers.

The bottom line is that the nominal rated speeds of WAN services are meaningless. Variables related to everything from processing delays and routing protocols to buffer bloat and packet resends can impact transport efficiency. Even a company with the coin to afford OC-192 pipes needs to understand that a minimum of two hours will be required to move 10 TB. That's why the fastest way to move data over distance continues to be the use of a carrier pigeon (Google "IP over Avian Carrier" for more information).
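The two-hour figure checks out with simple arithmetic. The sketch below uses the nominal OC-192 line rate of roughly 9.95 Gbps as a best case; real throughput would be lower still once protocol overhead, resends, and jitter are counted.

```python
# Best-case transfer time for 10 TB over an OC-192 link.
# Uses the nominal ~9.95 Gbps line rate; real-world throughput is
# lower once protocol overhead, resends, and jitter are factored in.

OC192_BITS_PER_S = 9.953e9   # nominal OC-192 line rate
payload_bits = 10e12 * 8     # 10 TB expressed in bits

seconds = payload_bits / OC192_BITS_PER_S
print(f"{seconds / 3600:.1f} hours")  # -> about 2.2 hours, before any overhead
```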



