[Figure: The ActiveStor architecture from Panasas is typical of today's storage appliances, where users can swap out storage blades while the system is operating and can add storage at any time.]

scale to 16PB. The appliance supports all major file protocols concurrently, so users can consolidate Windows, Mac, Linux and Unix file data in a single managed pool of storage, regardless of protocol (CIFS for Windows or NFS).
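In practice, concurrent multi-protocol support means the same file is visible to every client regardless of how it was written. The Python sketch below merely illustrates that idea, using hypothetical NFS and CIFS/SMB mount points (/mnt/pool_nfs and /mnt/pool_cifs) that are assumed to map onto the same managed pool; it is not vendor code.

```python
from pathlib import Path

# Hypothetical mount points: the same storage pool exported over NFS
# (Linux/Unix/Mac clients) and over CIFS/SMB (Windows clients).
NFS_MOUNT = Path("/mnt/pool_nfs")
CIFS_MOUNT = Path("/mnt/pool_cifs")

def check_single_namespace(relative_name: str = "demo/hello.txt") -> bool:
    """Write through one protocol, read through the other.

    Returns True if both mounts expose the same file contents,
    i.e. the appliance really is one consolidated pool.
    """
    nfs_path = NFS_MOUNT / relative_name
    cifs_path = CIFS_MOUNT / relative_name

    nfs_path.parent.mkdir(parents=True, exist_ok=True)
    nfs_path.write_text("written via NFS\n")

    # A CIFS client sees the write because both paths resolve to the
    # same object in the shared namespace.
    return cifs_path.read_text() == "written via NFS\n"

if __name__ == "__main__":
    print("single namespace:", check_single_namespace())
```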
Multiple host operating systems
This approach of allowing multiple hosts to tap storage, no matter what operating system they run, is also a characteristic of today's storage appliances. To make it easier to deploy and use its StorNext shared SAN file system, which enables simultaneous file access across heterogeneous platforms, Quantum has introduced the M330 metadata controller appliance. It supports a variety of host operating systems and client types, including multiple flavours of Unix and Linux, as well as Windows and Mac OS X. Housed in a 6U box, it includes two metadata controllers (one for failover) and one dedicated metadata array. Also included are 10 file system SAN client licenses, and two SAN client licenses for the metadata controllers. The appliance can simultaneously share and access 'big data' files by using Fibre Channel speeds over a SAN, as well as automatically tiering data to disk or tape for archiving. The system is optimised for large files in that it addresses streaming performance rather than IOPS. It is possible to start streaming and editing a large file before that file is totally ingested, and you can access the data from multiple operating
systems and perform multiple tasks on large files. There is a limit of four file systems on the M330, but no limit on capacity. The virtualised shared storage makes everything look as if it were local storage.
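Tiering policies in StorNext itself are configured within the product; purely as an illustration of what an automatic tiering rule does, the hedged sketch below selects archive candidates by age and size against a hypothetical mount point.

```python
import time
from pathlib import Path

# Illustrative thresholds only; a real policy engine such as StorNext's
# is configured in the product, not in a script like this.
ARCHIVE_AGE_DAYS = 90          # untouched for this long -> archive tier
ARCHIVE_MIN_SIZE = 1 << 30     # only bother tiering files >= 1 GiB

def archive_candidates(root: str):
    """Yield files that an automatic tiering rule might move to tape."""
    cutoff = time.time() - ARCHIVE_AGE_DAYS * 86400
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        stat = path.stat()
        if stat.st_mtime < cutoff and stat.st_size >= ARCHIVE_MIN_SIZE:
            yield path, stat.st_size

if __name__ == "__main__":
    total = 0
    for path, size in archive_candidates("/mnt/stornext_fs1"):  # hypothetical mount
        total += size
        print(f"archive: {path} ({size / 1e9:.1f} GB)")
    print(f"total to migrate: {total / 1e12:.2f} TB")
```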
Powered by the PanFS operating system, ActiveStor 11 from Panasas is a parallel storage system appliance that easily scales via a modular, buy-as-you-grow blade architecture based on industry-standard disk storage. It seamlessly scales up to 6PB of capacity and 115GB/s of throughput from a single global namespace. The ActiveStor 11 was designed for peak capacity, whereas the ActiveStor 12 was designed for maximum performance; it runs 50 per cent faster but has a capacity of 40TB per rack rather than 60TB. Administrators can add new storage to the namespace from a single point of management in fewer than 10 minutes without disrupting workflows, and client support is provided for Linux, Windows and Unix. This NAS (network attached storage) appliance uses a blade chassis approach, and in a 1 + 10 blade configuration in one rack, it has a capacity of 60TB using 20 SATA disk drives and 48GB of ECC memory. Throughput and IOPS scale linearly with capacity; on a single rack the maximum throughput for writes is 950MB/s and for reads is 1,150MB/s – the company claims that this is the highest per-disk performance on the market.
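Because throughput and IOPS are stated to scale linearly with capacity, the per-rack figures above can be extrapolated with simple arithmetic. The Python sketch below is only a back-of-the-envelope linear model built on the quoted numbers (60TB, 950MB/s writes and 1,150MB/s reads per rack, with 6PB and 115GB/s as the stated ceilings); real installations will not scale perfectly.

```python
# Simplified linear-scaling model for ActiveStor 11, using only the
# figures quoted in the article. Treat this as rough arithmetic rather
# than a sizing tool.
PER_RACK_CAPACITY_TB = 60
PER_RACK_WRITE_MBS = 950
PER_RACK_READ_MBS = 1_150
MAX_CAPACITY_TB = 6_000        # 6PB single namespace
MAX_THROUGHPUT_GBS = 115       # 115GB/s quoted ceiling

def estimate(racks: int) -> dict:
    """Estimate capacity and throughput for a given number of racks."""
    capacity = min(racks * PER_RACK_CAPACITY_TB, MAX_CAPACITY_TB)
    read_gbs = min(racks * PER_RACK_READ_MBS / 1000, MAX_THROUGHPUT_GBS)
    write_gbs = min(racks * PER_RACK_WRITE_MBS / 1000, MAX_THROUGHPUT_GBS)
    return {"capacity_TB": capacity, "read_GBps": read_gbs, "write_GBps": write_gbs}

if __name__ == "__main__":
    for racks in (1, 10, 100):
        print(racks, "racks ->", estimate(racks))
```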
The appliance includes RAID data protection as a component of the file system, and system-managed parallel rebuilds mitigate the risk of successive drive failures by providing rebuild times that actually decrease as the size of the storage pool increases. The product also permits a range of RAID configurations for different data files, even within the same volume and storage pool, and performance and reliability characteristics can be tuned on a file-by-file basis to satisfy the data protection requirements of specific storage environments.
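The counter-intuitive claim that rebuild times shrink as the pool grows is a property of parallel, declustered rebuilds: the failed drive's data is reconstructed by many surviving drives at once, so every added drive adds rebuild bandwidth. The sketch below is a first-order model of that effect using assumed per-drive figures; it is not Panasas's actual rebuild algorithm.

```python
# First-order model of a parallel (declustered) rebuild: every surviving
# drive contributes some rebuild bandwidth, so rebuild time shrinks as
# the pool grows. The per-drive numbers are illustrative assumptions.
DRIVE_CAPACITY_GB = 2_000          # assumed data to reconstruct per failed drive
PER_DRIVE_REBUILD_MBS = 30         # assumed rebuild bandwidth each survivor donates

def rebuild_hours(total_drives: int) -> float:
    """Hours to reconstruct one failed drive's data across the pool."""
    survivors = total_drives - 1
    aggregate_mbs = survivors * PER_DRIVE_REBUILD_MBS
    return (DRIVE_CAPACITY_GB * 1000) / aggregate_mbs / 3600

if __name__ == "__main__":
    for drives in (20, 100, 1000):   # one chassis up to a large pool
        print(f"{drives:5d} drives -> {rebuild_hours(drives):6.2f} h")
```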
Embedding applications on storage appliances
For its part, DDN is taking the appliance approach a step further by making it possible for users to install their software on the bare storage metal. The company's initial launch in the appliance area, the SFA10000 in 2009, did not feature embedded capability – that feature came just recently with the SFA10000E (also known as the SFA 10KE). It runs a hypervisor (hardware virtualisation techniques that allow multiple operating systems to run concurrently on a host computer) within the storage array, and it fences off I/O subsystems so you can change the stack independent of the OS. It also embeds either the EXAScaler file storage system (DDN's packaging of the Lustre file system) or the GRIDScaler (which is based on the GPFS parallel file system). Returning to the embedded software, Jeff Denworth, VP of marketing at DataDirect Networks (DDN), refers to one oil and gas company that embedded pre- and post-processing applications natively on the storage system. This cuts out network traffic
for lower latency, and he notes that this is a 'fairly enabling platform for "CPU-light" applications'. The 10KE provides 12 cores and performs mostly background processing. As for the hardware itself, volume scalability consists of 600TB of usable storage in five-enclosure systems with 300 drives, and doubling the number of drives in 10 enclosures increases that to 1.2PB of usable storage. In terms of performance, the system is specified at 6GB/s (read and write) per 10KE system, with aggregate performance of up to 200GB/s and hundreds of thousands of file operations per second.
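The attraction of embedding pre- and post-processing on the array is that raw data no longer has to cross the client network before it is reduced. The comparison below is a rough, hedged estimate: the 6GB/s figure is the per-10KE rate quoted above, while the 10GbE client link and the 100:1 reduction ratio are illustrative assumptions rather than DDN figures.

```python
# Back-of-the-envelope comparison: move raw data to a client for
# post-processing versus reduce it in place on the array and move only
# the result. Link speed and reduction ratio are assumptions; the 6GB/s
# figure is the per-10KE rate quoted in the article.
RAW_DATASET_TB = 50
CLIENT_LINK_GBS = 1.25          # assumed 10GbE client link, ~1.25GB/s
ARRAY_READ_GBS = 6.0            # per-10KE read rate quoted above
REDUCTION_FACTOR = 100          # assume post-processing keeps 1% of the data

def hours(terabytes: float, gb_per_s: float) -> float:
    """Transfer or scan time, in hours, at a sustained rate."""
    return terabytes * 1000 / gb_per_s / 3600

ship_raw = hours(RAW_DATASET_TB, CLIENT_LINK_GBS)
reduce_in_place = (hours(RAW_DATASET_TB, ARRAY_READ_GBS)
                   + hours(RAW_DATASET_TB / REDUCTION_FACTOR, CLIENT_LINK_GBS))

print(f"ship raw over the network : {ship_raw:5.1f} h")
print(f"reduce on the array first : {reduce_in_place:5.1f} h")
```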