HPC NEWS


Microsoft releases major update to Windows HPC Server


There’s an industry joke that Microsoft gets things right the third time around; the third release, Windows HPC Server 2008 R2, may be evidence of that, writes Paul Schreier.


The main goal, explains Bill Hilf, general manager of Microsoft’s Technical Computing Group, is to help mainstream users take advantage of parallel hardware, whether on a client, a cluster or the cloud. ‘We want to focus on the user rather than just the “plumbing” alone,’ he says. At the client level, Hilf points out, advanced debugging tools and compilers are raising performance: by changing just one line of code in an example program it is possible to access multiple cores and achieve a 4x performance boost. With such tools, Microsoft is trying to ‘raise the boat’ for all programmers.
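The article does not reproduce Hilf’s example or say which tool it used, so the following is only an illustrative sketch assuming an OpenMP-style loop, which Microsoft’s Visual C++ compiler supports (build with cl /openmp, or g++ -fopenmp elsewhere). The single pragma is the ‘one line’: add it and the loop iterations are divided among all available cores; delete it and the serial program returns.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 10000000;
        std::vector<double> data(n, 1.0);

        // The one-line change: ask the compiler to split the
        // loop iterations across all available cores. Each
        // iteration is independent, so no further changes are
        // needed for a correct parallel version.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i) {
            data[i] = std::sqrt(data[i] * i);
        }

        std::printf("data[42] = %f\n", data[42]);
        return 0;
    }

On a quad-core desktop a compute-bound loop of this kind can approach the 4x figure Hilf quotes, although the actual gain always depends on the workload.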


At the cluster level, Hilf notes the trouble ordinary users have in managing and deploying HPC clusters. Here R2 allows the deployment of more than 1,000 nodes using a number of enhanced cluster-management features, and node templates let you create GPGPU node groups within the same environment. It is also short work to add Windows 7 desktops to a cluster via node templates in which you can set up policies; it is even possible to do ‘cycle scavenging’ on desktop systems within a cluster for additional power.


He points to HPC Server Services for Excel as a prime example. R2 now enables running multiple instances of Office Excel 2010 in a Windows HPC cluster, where each instance executes an independent calculation or iteration from the same workbook with a different dataset or parameters. Using a special tab in the Excel user interface, you can have the software spread the work from a workbook among a large number of nodes; it is even possible to close the client software and have ‘broker nodes’ send an e-mail notification when the work is done, so that when you re-open the workbook the results are waiting for you.


At the cloud level, Hilf notes that it is convenient for some users to access external public resources across the public network only when needed. He points to examples of ‘bursting out to Azure’ (Microsoft’s own cloud-computing platform), sometimes even in a mixed-mode burst from on-premises to public resources when the workload demands it. Hilf presented much of this information at the High Performance Computing Financial Markets conference, but the benefits for scientific users are also very promising. For instance, Josh Kunken of the Scripps Research Institute has been transitioning to R2 for high-throughput image analysis in the institute’s cancer initiative; he reports an 800 per cent performance increase from R2’s parallel capabilities and says the move was quite straightforward.




HPC Products


• Amax has introduced its ClusterMax Stor-X, a high-capacity, petabyte-scale network-attached storage (NAS) platform that the company says redefines the limits of access speed, high availability, performance and scalability. Amax states that the ClusterMax Stor-X is well suited to data-intensive deployments in oil and gas applications, which require extreme performance, massive storage capacity and robust parallel file-storage management features. www.amaxit.com

• SGI has announced the release of the SGI InfiniteStorage 16000, the next generation of its high-performance storage platforms aimed at mixed-workload environments. To date, premium storage systems have fallen into two categories: those optimised for random I/O operations per second (IOPS) and those optimised for bandwidth; SGI states that the new system delivers optimum mixed I/O in a single scalable and dense platform. www.sgi.com

• NextComputing is now integrating and supporting the latest line of LSI 6Gb/s SATA+SAS high-port-count storage controllers, including the LSI MegaRAID SAS 9260-16i RAID controller. With this LSI storage-controller technology, the company can pack substantial internal storage arrays into its family of briefcase-sized portable servers, including the recently announced NextDimension Evo Plus. www.nextcomputing.com

• The MathWorks has announced support for Nvidia GPUs in Matlab applications using Parallel Computing Toolbox or Matlab Distributed Computing Server. The support enables engineers and scientists to speed up many of their Matlab computations without resorting to low-level programming. www.mathworks.com

• T-Platforms, in conjunction with Nvidia, has introduced its TB2-TL heterogeneous computing system, which it says represents a new breed of high-density HPC systems with an industry-leading performance-per-watt ratio. The first system to use the Nvidia Tesla X2070 GPU, the TB2-TL is, the company claims, the densest HPC solution on the market: the combination of the T-Platforms T-Blade 2 packaging and Nvidia Tesla 20-series GPUs enables performance beyond 1 petaFLOPS in only 10 standard racks. www.t-platforms.ru

• Tech-X has released version 1.4 of GPULib, its library of mathematical functions that facilitate the use of modern GPUs for HPC tasks. New to GPULib v1.4 is support for Cuda streams, enabling concurrent execution of multiple kernels (see the sketch after this list). The release also supports asynchronous data transfer, and leverages new features of IDL v8.0 that enable tighter integration of the two products. www.txcorp.com

• AccelerEyes has released version 1.5 of Jacket, its GPU programming platform for Matlab. The release expands Jacket’s function library, improves the performance of a number of heavily used functions, adds multiple enhancements to GFOR support, and brings formal support for GCOMPILE and GPROFVIEW. Known bugs and other issues have also been addressed with this release. www.accelereyes.com
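GPULib itself is driven from IDL and its own API is not shown here; the sketch below is a generic CUDA C++ illustration of the underlying mechanism the Tech-X item refers to. Work issued to different Cuda streams may execute concurrently on GPUs that support it, and asynchronous transfers use cudaMemcpyAsync against a stream in the same way.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Trivial kernel; one copy runs per stream.
    __global__ void scale(float* x, int n, float s) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b;
        cudaMalloc(&a, n * sizeof(float));
        cudaMalloc(&b, n * sizeof(float));
        cudaMemset(a, 0, n * sizeof(float));
        cudaMemset(b, 0, n * sizeof(float));

        cudaStream_t s0, s1;
        cudaStreamCreate(&s0);
        cudaStreamCreate(&s1);

        // The two launches go to different streams, so the
        // hardware is free to run them concurrently rather
        // than back to back.
        scale<<<(n + 255) / 256, 256, 0, s0>>>(a, n, 2.0f);
        scale<<<(n + 255) / 256, 256, 0, s1>>>(b, n, 3.0f);

        cudaStreamSynchronize(s0);
        cudaStreamSynchronize(s1);

        cudaStreamDestroy(s0);
        cudaStreamDestroy(s1);
        cudaFree(a);
        cudaFree(b);
        std::printf("both streams complete\n");
        return 0;
    }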

