HPC 2012: Storage

Storage is at an inflection point, says Barry Bolding, VP of storage and data management at Cray


There is little doubt that, as an industry, we are at an inflection point in terms of data management and storage. Looking at the way storage has developed during the past few years, we see that there has been incremental growth in features, functions, storage size and speed, and software, but no dramatic leaps forward. There is certainly more data out there and a greater need to improve the speed at which it's moved in and out of compute and into long-term storage, but the way it has been stored in file systems has remained fairly consistent. Within parallel file systems, for example, speeds have steadily increased from 10 to 20, 50 and now 100 GB/sec of I/O bandwidth, but the fastest system, located at Oak Ridge National Laboratory in the US, has held that position for many years.

“There is certainly more data out there, and a greater need to improve the speed”

As we move into 2013, however, we are beginning to see exponential growth as Big Data impacts both commercial and government spaces. At Cray, for instance, we are currently building file systems and environments that are in the order of 10 times larger than what we were building a year ago, and the next 12 months will be challenging as we try to move features and functions from the labs into production environments.

The difficulty is that while there are a number of parallel file systems on the market, such as Panasas, Lustre or GPFS, no one solution has met all users' needs. This year was a turning point for Lustre, which has gained a lot of stability from major backers within the industry through the establishment of the OpenSFS community, which includes such key backers as Cray, DDN, Intel, ORNL, LLNL and Xyratex. The open source community that has been established as a result is validating Lustre's ability to move with the inflection point we are seeing. Lustre's stabilisation has been a definite highlight of 2012, and I believe that in 2013 the vast majority of HPC systems will use it; it will be interesting to see that move into the commercial space.
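Aggregate bandwidth figures of this kind come from striping files across many storage servers. As a rough illustration only (the mount point, directory and striping parameters below are hypothetical, and this does not describe Cray's or ORNL's configuration), a Lustre user might set a default stripe layout on a project directory so that large files are spread over several object storage targets:

```python
import subprocess

# Hypothetical Lustre mount point and striping parameters -- purely illustrative.
project_dir = "/mnt/lustre/project"

# 'lfs setstripe' on a directory sets the default layout for new files created
# inside it: -c is the stripe count (number of OSTs), -S the stripe size.
subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", project_dir], check=True)

# Files created under the directory now inherit that layout, so writes are
# spread across eight object storage targets rather than a single server.
with open(f"{project_dir}/results.dat", "wb") as f:
    f.write(b"\0" * (64 * 1024 * 1024))  # 64 MiB of placeholder data
```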


The situation really is snowballing at the moment: the better you become at moving and storing the data, the more analyses you can run; the more analyses you can run, the more data you need to store. It really is feeding upon itself within HPC, and I believe this will challenge us to focus on software, as there are still many aspects that need addressing, such as data movement, the reliability of applications and the handling of disk failures.

The software development we'll be seeing in the next few years will harden the ecosystems around file systems like Lustre, but having the right expertise on hand will be critical. We've been bringing in more and more architects and developers to Cray, exemplified by our recent hiring of key personnel from SystemFabricWorks. What we're coming across at the big labs is now becoming the problem of medium-sized businesses, and storage and data movement expertise is currently at a premium.

One final point is that while I do feel software is the more critical aspect, hardware technologies are also important. Solid state drives (SSDs) will continue their incremental growth in 2013 and become a larger part of the HPC technology base. Any major increase in their deployment is currently being hindered by the fact that many people haven't quite worked out the optimal way to use them, and the price performance of alternative technologies hasn't yet forced a crossover to SSDs. Take tape, for example: people have been predicting the death of tape for many years, but I think it's even more important today than it was a year ago given the sheer amount of data being generated.


John Gilmartin, VP of product marketing at Coraid, offers his views of software-defined storage and the dynamic data centre


A study by Wikibon [1] reports that Big Data will be a $50 billion business by 2017, and as the ability to analyse larger data sets continues to improve, the demand for data storage will increase further. IT departments are faced with the challenge of staying responsive to growing and accelerating business needs while keeping costs low. Public cloud solutions such as Amazon EC2 offer an easy, effective and affordable alternative to slow enterprise IT, but with associated business risks.

To tackle these challenges effectively, CIOs are increasingly turning to highly virtualised and automated cloud architectures that can be scaled on demand and managed with ease. Cloud architectures offer just-in-time scalability with self-service provisioning and a utility-computing model for resource consumption. They enable business agility while lowering the total cost of ownership. In recent years, we have seen hybrid cloud deployments really take off as CIOs get more comfortable with moving low-risk data into cloud environments, while keeping sensitive data in-house.

“IT departments are faced with the challenge of staying responsive to growing and accelerating business needs while keeping costs low”

Cloud adoption has in turn ushered in a fundamental shift in the way infrastructure is deployed in data centres. Modern data centres have given up on monolithic servers and storage that are expensive and complex to manage in favour of pools of standards-based commodity hardware that are managed and automated through software. In the past, people managed small data in big boxes, but now they are managing Big Data in many small boxes. The challenge is transforming all those small boxes into one logical system. This is a fundamentally different management challenge, and we believe software-defined storage is the solution.


Software-defined storage (SDS) is an architecture for storage systems that allows data-centre architects to pool and abstract storage infrastructure built on scale-out commodity hardware by enabling complete automation and programmability of the storage network. This offers greater flexibility in how storage is configured and managed, and lowers management overhead. The automation of storage management workflows greatly simplifies operations and allows IT to be responsive to business needs. Software-defined storage can be integrated with other layers of infrastructure in a software-defined data centre that democratises the dynamics of Amazon, Google and Facebook data centres. The upsides include significantly lower operational and capital expenditures and, even more importantly, software-defined systems can redefine business agility.
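To make "automation and programmability" concrete, provisioning in an SDS environment is typically driven through an API rather than by administering each box individually. The sketch below is purely illustrative: the endpoint, payload fields and pool names are hypothetical and do not describe Coraid's, or any other vendor's, actual interface.

```python
import json
import urllib.request

# Hypothetical SDS management endpoint -- not a real vendor API.
API = "http://sds-controller.example.com/api/v1"

def create_volume(pool: str, size_gb: int, replicas: int) -> dict:
    """Ask the storage controller to carve a logical volume out of a pool of
    commodity hardware; placement and redundancy are handled in software."""
    payload = json.dumps({
        "pool": pool,          # which pool of commodity boxes to draw from
        "size_gb": size_gb,    # capacity of the logical volume
        "replicas": replicas,  # software-managed redundancy across boxes
    }).encode()
    req = urllib.request.Request(
        f"{API}/volumes", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: provision 500 GB for a new analytics workload without touching
# any individual storage box.
if __name__ == "__main__":
    volume = create_volume(pool="commodity-ssd", size_gb=500, replicas=2)
    print("Provisioned volume:", volume.get("id"))
```

Because the whole workflow is a script, it can be folded into the same automation that deploys the compute side of the data centre, which is what turns many small boxes into one logical system.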


[1] http://wikibon.org/wiki/v/Big_Data_Market_Size_and_Vendor_Revenues



