Planning, procuring, implementing and managing an HPC system takes time, and an organisation must embrace a 'data-centric' culture to get full value from the investment. HPC in the cloud offers a complementary path: organisations can scale up and down dynamically, using only the compute resources their workflows need.

Organisations with on-premise HPC systems also benefit from HPC as a Service (HPCaaS), since it can operate as an on-the-fly supplement to their existing HPC system. By 'sandboxing' test environments, or placing secondary, less critical workloads in the cloud, organisations can reserve their on-premise HPC systems for business-critical tasks. And in some cases, an enterprise can take advantage of HPC in the cloud to run its entire business. One example is Astera Labs, a fabless semiconductor manufacturer whose HPC infrastructure supporting chip design and other business workloads is hosted entirely on AWS. Cloud storage, too, can offer essential benefits in an HPC environment: hosted storage provides a practical and elastic means to manipulate and process the vast quantities of data which HPC and AI demand.
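As a rough illustration of that split, the sketch below routes jobs between an on-premise cluster and a cloud HPCaaS pool. It is a minimal sketch, not any vendor's API: the Job fields, the queue-depth threshold and the tier names are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Job:
    """Hypothetical job descriptor; field names are illustrative only."""
    name: str
    business_critical: bool  # e.g. a production chip-design run
    is_test_sandbox: bool    # a throwaway validation environment

ONPREM_QUEUE_LIMIT = 10  # assumed queue depth before bursting to cloud

def choose_target(job: Job, onprem_queue_depth: int) -> str:
    """Reserve on-premise nodes for business-critical work; push
    sandboxes and overflow to an HPCaaS provider on the fly."""
    if job.is_test_sandbox:
        return "cloud"       # 'sandboxed' test environments
    if job.business_critical:
        return "on-premise"  # keep critical tasks local
    if onprem_queue_depth > ONPREM_QUEUE_LIMIT:
        return "cloud"       # burst secondary workloads
    return "on-premise"

print(choose_target(Job("nightly-regression", False, True), 3))  # -> cloud
```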


Embracing HPC as a Service


Since HPCaaS solutions offer customers the scalability and speed necessary for heavy workloads, the physical location of an HPC system is increasingly irrelevant. HPCaaS is available from the largest cloud providers, including Amazon, Microsoft and Google, but some smaller providers host HPCaaS as well. For example, Iceland's Advania offers its clients HPCaaS solutions based on Intel Select Solutions for HPC and AI convergence: fully customisable, virtualised 'bare-metal' systems with end-to-end encryption.


While storing many terabytes of data creates a technical challenge, making the best sense of that data volume represents an even more significant hurdle. Since meaningful outcomes begin with useful data, organisations need to identify, characterise, tier and optimise data to set themselves up for success. They must also consider factors like data security, regional compliance requirements, how to monetise that data, and more. Ultimately, data preparation involves four steps: identification, preparation, ingestion and storage.

Identification includes gathering and understanding the various data types which contain meaningful information. While information obtained from databases or IoT devices is structured, other data, such as images, web pages or documents, is unstructured. Regardless of data type, an organisation can benefit from consolidating the dispersed information into centralised storage for more in-depth evaluation, as the sketch below illustrates.
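The snippet below is one minimal way to express that identification-and-consolidation step. Everything in it is an illustrative assumption: real identification would inspect content rather than file extensions, and the bucket layout is invented for the example.

```python
from pathlib import Path
import shutil

# Illustrative type map: database/IoT exports vs. images, pages, documents.
STRUCTURED_EXTS = {".csv", ".parquet", ".json"}
UNSTRUCTURED_EXTS = {".jpg", ".png", ".html", ".pdf"}

def classify(path: Path) -> str:
    """Label a file as structured or unstructured data."""
    suffix = path.suffix.lower()
    if suffix in STRUCTURED_EXTS:
        return "structured"
    if suffix in UNSTRUCTURED_EXTS:
        return "unstructured"
    return "unknown"

def consolidate(sources: list[Path], central_store: Path) -> None:
    """Copy dispersed files into centralised storage, bucketed by type."""
    for src in sources:
        bucket = central_store / classify(src)
        bucket.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, bucket / src.name)
```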


Meaningful output derives from meaningful input, so data preparation, or pre-processing of data for analysis, is the next crucial step: corrupted or incomplete data may require additional polishing. Data ingestion then involves identifying the best approach to gather and apply relevant data to AI, analytics, simulation or other workflows. The method must accommodate data characteristics such as structure, source and size. Streamed data, for instance, arrives as a continuous flow of asynchronous messages which do not require a 'reply'. However, in industries like security monitoring, information is needed instantaneously for AI to mine it for real-time insights. In cases like this, distributed compute systems at the edge, or in virtual machines, prove mission-critical for faster data processing.
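The asyncio sketch below drains a queue of messages without ever replying to the producer, which is the essence of that fire-and-forget pattern. The queue, the message shape and the analyse hook are assumptions made for the example, not a particular streaming platform's API.

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    """Emit a continuous flow of asynchronous messages; no reply expected."""
    for i in range(5):
        await queue.put({"event_id": i, "payload": f"sensor reading {i}"})
    await queue.put(None)  # sentinel: end of stream for this demo

def analyse(msg: dict) -> None:
    """Placeholder for the real-time AI/analytics hook."""
    print("ingested", msg["event_id"])

async def consumer(queue: asyncio.Queue) -> None:
    """Ingest messages as they arrive; never acknowledges the sender."""
    while (msg := await queue.get()) is not None:
        analyse(msg)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```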


With 'good' data in hand, the right storage medium for it represents an important consideration. For less used, 'cold' data, a slower storage medium onsite, or in the cloud, may serve an organisation's needs, since latency is less critical. However, 'warm' and 'hot' data require faster access for real-time information. In cases like this, solid-state drives can cost-effectively store large data volumes, and fast memory modules can accelerate access to the most critical data. A toy tiering policy is sketched below.
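The function below maps access frequency to a storage class in that spirit. The thresholds and tier descriptions are invented for illustration; real tiering decisions weigh cost, latency targets and data lifecycle policies.

```python
def storage_tier(accesses_per_day: float) -> str:
    """Map how often data is touched to a cold/warm/hot storage class.
    Thresholds are illustrative assumptions, not guidance."""
    if accesses_per_day < 1:
        return "cold: slower onsite or cloud object storage"
    if accesses_per_day < 100:
        return "warm: solid-state drives"
    return "hot: fast memory modules close to the compute"

for rate in (0.1, 10, 5000):
    print(rate, "->", storage_tier(rate))
```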


Your existing HPC system may be AI-ready


While enterprises and research institutions today recognise the strategic contributions HPC and AI offer, some are slower to adopt. Perceived needs for specialised hardware, software and human skills can slow the pace of AI deployment. However, those who invested in HPC infrastructure with underlying Intel architecture may be closer to AI than they realise: second-generation Intel Xeon Scalable processors feature AI 'built-in', with enhancements like Intel DL Boost that help accelerate workflows involving deep learning. A quick way to check for that capability appears below.
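On Linux, the first generation of DL Boost shows up as the AVX-512 VNNI instruction set, advertised through the avx512_vnni flag in /proc/cpuinfo. The sketch below checks for it; it assumes a Linux host, and the presence of the flag only indicates the instructions exist, not that your frameworks will use them.

```python
from pathlib import Path

def has_dl_boost_vnni() -> bool:
    """Return True if the CPU advertises AVX-512 VNNI, the int8
    inference instructions behind Intel DL Boost. Linux-only sketch."""
    cpuinfo = Path("/proc/cpuinfo")
    if not cpuinfo.exists():
        return False  # non-Linux host: consult vendor documentation instead
    return "avx512_vnni" in cpuinfo.read_text()

print("DL Boost (AVX-512 VNNI) available:", has_dl_boost_vnni())
```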


If your organisation has not yet evaluated the benefits which AI and converged workloads can offer, now may be the time. For more detail about easing AI into your existing infrastructure, see the white paper, Ease your organisation into AI: A practical guide to building an insights-driven business.




Rob Johnson owns Fine Tuning, a strategic marketing and communications consulting company in Portland

