Opinion
Data explosion debris

At the height of the data explosion, many businesses accepted sprawl as the only viable way of coping with runaway growth. For many enterprises, however, the problem reached critical mass when energy bills skyrocketed, physical space filled to capacity and green mandates changed the way businesses had to address their energy spending. IT over-complexity destroys data visibility, while redundant, end-of-life technology soon becomes difficult to identify in a maze of heterogeneous hardware.

In the highly competitive energy sector, the issues associated with datacentre sprawl are numerous and critical. Lost or irretrievable data undermines productivity and reliability, resulting in financial losses, incomplete projects, duplicated effort and an inability to consistently meet stringent data protection service level agreements (SLAs). Worse still is the damage to business reputation as valued utility customers, prospects and auditors sense mounting turmoil.

Symptoms of data sprawl first become visible internally, as IT managers notice that reports have grown erratic or inaccurate. Soon, business planning becomes difficult and budgets come under strain as funds are funnelled into bolt-on storage technologies. Eventually, data crashes can no longer be addressed through system restoration, while business continuity and disaster recovery plans become ineffectual and impractical.

What is needed is a data protection approach designed specifically to re-energise and transform the backup environments of our largest businesses. In that way, energy companies can quickly improve their backup and restore performance without disrupting their existing backup infrastructure, and gain greater flexibility in their data protection environment.
Planning for the future

Many of the data protection methods promoted in the market today merely amplify these issues of data bloat and sprawl. What enterprise data managers are crying out for are storage regimes that are agile, fit for purpose and allow them to do more with tight budgets, fewer personnel and reduced space, power and cooling requirements.

The first line of attack is to conduct comprehensive audits of all information resources across the enterprise's entire data management network. Such snapshots give IT managers a clear picture of what data is stored where, and allow them to reassess under-used or redundant, rarely-accessed devices. A complete system analysis allows for the construction of data maps, which establish basic transparency and enable IT managers to audit their existing storage devices for efficiency and purpose. In addition, accurate mapping of data storage provides a foundation for updated contingency plans in the event of a data crash.
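As a rough illustration of what such a data map might look like in practice, the following Python sketch walks a set of mount points and records each volume's capacity, usage and most recent file access, flagging anything untouched for a year as a candidate for review. The mount-point list and the one-year staleness threshold are hypothetical assumptions for this example, not figures from any particular audit methodology.

```python
import os
import shutil
import time

# Hypothetical mount points to audit; in practice this list would come
# from the enterprise's own storage inventory.
MOUNT_POINTS = ["/mnt/archive01", "/mnt/archive02", "/mnt/projects"]

STALE_AFTER_SECONDS = 365 * 24 * 3600  # assumed one-year threshold

def newest_access_time(root):
    """Return the most recent file access time (epoch seconds) under root."""
    newest = 0.0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                atime = os.stat(os.path.join(dirpath, name)).st_atime
            except OSError:
                continue  # skip unreadable files rather than abort the audit
            newest = max(newest, atime)
    return newest

def build_data_map(mount_points):
    """Build a simple data map: capacity, usage and staleness per volume."""
    now = time.time()
    data_map = []
    for mp in mount_points:
        total, used, _free = shutil.disk_usage(mp)
        data_map.append({
            "mount": mp,
            "capacity_gb": total / 1e9,
            "used_gb": used / 1e9,
            "stale": (now - newest_access_time(mp)) > STALE_AFTER_SECONDS,
        })
    return data_map

if __name__ == "__main__":
    for entry in build_data_map(MOUNT_POINTS):
        flag = "RARELY ACCESSED" if entry["stale"] else "active"
        print(f'{entry["mount"]}: {entry["used_gb"]:.0f}/'
              f'{entry["capacity_gb"]:.0f} GB, {flag}')
```

Even a simple inventory of this kind gives the transparency the audit is after: which devices hold data, how full they are, and which have fallen out of use.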
Then data needs to be migrated from end-of-life devices to more efficient systems, opening up additional space by decommissioning outdated architecture and allowing data to be consolidated on reliable, stable, purpose-built technology.
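In practice, a migration of this kind is usually paired with an integrity check before the old device is decommissioned. The sketch below is a hypothetical illustration of that step, not any vendor's procedure: it copies files from an end-of-life volume to a consolidation target and verifies each copy against a SHA-256 digest of the source before the source can be considered safe to retire.

```python
import hashlib
import os
import shutil

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def migrate(source_root, target_root):
    """Copy every file under source_root to target_root and verify it.

    Returns the files whose copies did not match the source; these must
    be investigated before the source device is decommissioned.
    """
    mismatches = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(target_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copy data and metadata
            if sha256_of(src) != sha256_of(dst):
                mismatches.append(rel)
    return mismatches

# Example with hypothetical paths:
# bad = migrate("/mnt/archive01", "/mnt/consolidated/archive01")
# assert not bad, f"verification failed for: {bad}"
```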
“With data volumes growing exponentially, and backup windows narrowing, data protection and backup become more challenging.”
Tim Butchart, senior vice president, Sepaton
The goal is to achieve rapid backups and instant restores that put managers back in control of massive and growing volumes of data. That means effortlessly scalable single-system architectures developed specifically for these data environments, combined with smarter and faster data deduplication methods. Modular data storage architecture must be coupled with an innovative 'content aware' approach to deduplication, so that enterprise data managers can add capacity and performance as their needs grow (rather than just throwing more devices into datacentres).
Data duplication

The answer is to deduplicate data in multiple parallel streams and across multiplexed data volumes. Business databases typically store data in small segments of just a few kilobytes, which 'inline' or hash-based deduplication technologies cannot hope to process without putting the brakes on backup performance (or simply leaving large volumes un-deduplicated). 'Byte differential' deduplication is different.
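To make the contrast concrete, here is a minimal sketch of the inline, hash-based style of deduplication described above: the backup stream is cut into fixed-size chunks, each chunk is hashed, and only chunks with previously unseen hashes are stored. Every chunk must be hashed and looked up before it can be written, and that in-path work is what throttles backup performance when segments are only a few kilobytes. The 4 KB chunk size and SHA-256 hash are illustrative assumptions.

```python
import hashlib

class InlineDedupStore:
    """Toy inline, hash-based deduplicating store.

    Each incoming chunk is hashed and looked up in an index before it
    is written; duplicates become references to an existing chunk. All
    of this work sits in the backup data path, which is why small
    segments slow backups down.
    """

    def __init__(self, chunk_size=4096):  # 4 KB: illustrative only
        self.chunk_size = chunk_size
        self.index = {}    # hash -> chunk id
        self.chunks = []   # stored unique chunks

    def write(self, data):
        """Ingest a byte stream; return chunk ids describing it."""
        refs = []
        for start in range(0, len(data), self.chunk_size):
            chunk = data[start:start + self.chunk_size]
            h = hashlib.sha256(chunk).digest()
            if h not in self.index:          # first time seen: store it
                self.index[h] = len(self.chunks)
                self.chunks.append(chunk)
            refs.append(self.index[h])       # duplicates become references
        return refs

store = InlineDedupStore()
block = bytes(range(256)) * 16            # one 4 KB pattern
backup = block * 50 + b"fresh data"       # repetitive stream plus a new tail
refs = store.write(backup)
stored = sum(len(c) for c in store.chunks)
print(f"logical {len(backup)} bytes -> stored {stored} bytes "
      f"across {len(store.chunks)} unique chunks")
```

The toy run stores one unique 4 KB chunk plus a short tail for a roughly 200 KB stream, but the point is that the hash lookup sits on the critical path of every write, which is exactly where small database segments hurt.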