Should CONVENTIONAL BACKUP Just Back Off? Why Traditional Processes Might Be Here to Stay

BY TONY ASARO

For a long time now, people have been predicting the death of backup, but with Rasputin-like tenacity, it just won’t die. Stab it, poison it, shoot it, and then drown it—it’s still a dominant and pervasive technology throughout the known universe. And while some may want to believe that backup has one foot in the grave and the other on a banana peel, the rumors of the death of backup have been greatly exaggerated. All the ingredients to kill it off exist right now, but IT tends to move at a leisurely pace, so it will take years for those elements to become mainstream.


Innovation in Backup

Backup is still one of the areas of IT that’s the least innovative, causes some of the biggest issues, and costs users time and money. The most exciting thing to happen to backup in a long time was Data Domain, which attacked the universal challenges created by using tape as a backup medium. Tape is unreliable and hard to manage; recovery processes can be complex, inefficient, cumbersome, and prone to error…the list goes on. But the one thing tape has going for it is that it’s cheap. Who doesn’t want an inexpensive insurance policy? It’s only when something goes wrong that you care about how good your insurance policy is. And if that hardly ever, or never, happens, life is good.


Data Domain changed the economics of backing up to disk with dedupe, and the rest is history. What it ultimately provides is much better “insurance” that’s economically compelling. It’s a no-brainer.
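The core idea behind dedupe is simple: split the backup stream into chunks, fingerprint each chunk, and store each unique chunk only once. Data Domain’s actual implementation uses variable-length chunking and is proprietary; the sketch below is a minimal fixed-size-chunk illustration of the concept in Python, with hypothetical function names.

```python
import hashlib


def dedupe_chunks(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and store each unique chunk once,
    keyed by its SHA-256 digest. Returns the chunk store plus the 'recipe'
    (ordered list of digests) needed to rebuild the original stream."""
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # a repeated chunk costs no extra space
        recipe.append(digest)
    return store, recipe


def restore(store, recipe):
    """Reassemble the original byte stream from its recipe of digests."""
    return b"".join(store[d] for d in recipe)
```

With highly repetitive data (which backup streams typically are, since most of each nightly backup matches the previous one), the store stays small while the cheap recipe grows, which is why disk plus dedupe can compete with tape on cost.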


Some Things Never Change

Whether or not anything exciting is going on with backup, you still have to back stuff up. You still require software and agents, as well as server, network, and storage resources. And while data continues to grow as databases get bigger and file systems become more massive, you still need small backup data sets, because who wants to recover a 10TB, 100TB, 500TB, or 1PB data set? Data storage vendors love to talk about limitless file systems and massive object-based storage systems, but how do you back up that kind of stuff? If you have a large file system, you’re most likely not backing it up but replicating it. What replication lacks, however, is the ability to find and recover data at a granular level. If you replicate at the storage system level, you’re using block-based technology. If you use host-based software, it eats up so much of the server’s resources that replicating large amounts of data that way becomes impractical, even impossible.


WWW.PCCONNECTION.COM


VOLUME 4 • ISSUE 1

