HIGH PERFORMANCE COMPUTING | SoftIron FEATURED PRODUCT

SoftIron’s HyperDrive Storage Manager

If you’ve previously dismissed Ceph due to its complexity, now’s the time to think again. SoftIron’s mission is to simplify software-defined storage (SDS), and just one of the ways we’re doing this is with HyperDrive, our dedicated Ceph storage platform, purpose-built for scale-out, enterprise-level storage. HyperDrive is optimised for Ceph and is a powerful solution for SDS, with custom-built storage appliances, a world-first intuitive management tool and an intelligent gateway services router that supports multiple protocols across the enterprise. HyperDrive Storage Manager is a sleek, feature-rich management tool that allows administrators to simplify and centralise the management of HyperDrive and Ceph, as well as providing visibility into storage health, current utilisation and predicted storage growth. It’s ideal for storage administrators or help desk personnel who are responsible for managing storage configuration or shares. Book a HyperDrive Storage Manager demo here, or visit softiron.com for more.

on machine learning with FPGAs from the Microsoft Brainwave team, and seeing on GitHub some very simple machine learning inference code written by EJ Kreinar using Xilinx’s Vivado HLS tool. The combination of those two got us very excited, because we could actually see the potential to do this hls4ml project to enable fast ML-based event triggers.’ The team set out to develop and benchmark a tool flow, based around Xilinx Vivado HLS, that would shorten the time needed to create machine learning algorithms for the CMS level-one trigger. The hls4ml tool has a number of configurable parameters that enable users to customise the space of latency, initiation interval and resource usage tradeoffs for their application. Prior to the team’s work to create hls4ml, physicists would have to manually create simple trigger algorithms, and engineers would then program the FPGAs in Verilog or VHDL. This was a very time-consuming process that could take several man-months of work by expert physicists and engineers. Tran said: ‘We envisioned


at a very high level putting neural networks into the level-one trigger. Nobody had really considered the possibility of generically putting neural networks of all different types there. Once you give that capability to the community, then it can be everywhere. We’re seeing it in muon identification, tau leptons, photons, electrons – all the particles that we see – we can improve the performance using these more sophisticated techniques.’ Raising the level of


abstraction with hls4ml allows the physicists to perform model optimisation with big data industry-standard open source frameworks such as Keras, TensorFlow or PyTorch. The output of these frameworks is used by hls4ml to generate the FPGA acceleration firmware. This automation was a big time saver. Tran said: ‘Electrical engineers are a scarce resource in physics and they’re quite expensive. The more we can use physicists to develop the algorithms and electrical engineers to design the systems, the better off we are. Making machine learning algorithms more accessible to the physicist helps a lot. That’s the beauty of why we started with HLS and not the Verilog or VHDL level. Now, we can do the whole chain from training to testing on an FPGA in a day.’ Harris explained how


physicists use machine learning algorithms to search for dark matter when they don’t know what it actually looks like, and therefore what to train the neural networks on. ‘We make a hypothesis for what it will look like, and write down a list of all the signatures we would expect for dark matter,’ said Harris. ‘We’re training on a very generic class of signatures.’
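One of the generic signatures on such a list, picked up again in the next paragraph, is missing transverse energy: momentum carried away by particles that pass through the detector unseen shows up as an imbalance in the visible particles’ transverse momenta. A minimal sketch of the standard calculation (the event values are invented for illustration; this is not experiment code):

```python
import math

def missing_et(particles):
    """Missing transverse energy: the magnitude of the negative vector sum
    of the visible particles' transverse momenta, given as (pt, phi) pairs
    with phi in radians."""
    px = -sum(pt * math.cos(phi) for pt, phi in particles)
    py = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py)

# Toy event: invented (pt in GeV, phi) values for four visible particles.
event = [(45.0, 0.3), (32.0, 2.9), (18.0, -1.2), (7.0, 1.8)]
print(f"missing ET = {missing_et(event):.1f} GeV")
```

A perfectly balanced event gives a missing ET near zero; a large value is exactly the kind of generic anomaly an invisible particle would leave behind.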


‘For example, dark matter, by its nature, will be missing energy in the detector, because it will go right through it. If we can use machine learning techniques to optimise the performance to understand missing energy, that improves our sensitivity to dark matter as well,’ said Tran.

The team uses multi-layer perceptron neural networks with a limited number of layers to meet the 100-nanosecond, real-time performance requirements of triggers. In addition to AI inference, FPGAs provide the sensor communications, data formatting and pre-filtering compute required for the incoming raw sensor data, prior to the inference-driven trigger, thereby accelerating the whole detector application. Tran summarised the benefits of the hls4ml project: ‘In our day-to-day work, it really allows us to access machine learning at every level across the experiment with the trigger. Before, you would have to think about a very specific application and work really hard on developing the model and the firmware for either VHDL or Verilog. Now, you can think more broadly about how we can improve the physics, whether it’s low-level aggregation of hits in some calorimeter all the way up to taking the full event and optimising for a particular topology. It allows the spread and adoption of machine learning more quickly across the experiment.’
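The ‘limited number of layers’ constraint is, at heart, simple latency arithmetic: at a fixed FPGA clock, each pipelined layer adds a few cycles, and the sum has to fit inside the trigger’s real-time budget. A back-of-the-envelope check (the clock period and per-layer cycle counts below are invented for illustration, not CMS figures; only the 100 ns budget comes from the text):

```python
# Illustrative latency-budget check for a pipelined neural network on an
# FPGA. All numbers are hypothetical except the 100 ns trigger budget.

def total_latency_ns(cycles_per_layer, clock_period_ns):
    """Total latency of a fully pipelined network: the cycle counts of
    every layer stage summed, times the clock period."""
    return sum(cycles_per_layer) * clock_period_ns

BUDGET_NS = 100.0                  # real-time trigger budget

shallow = [6, 6, 4]                # three-layer MLP, cycles per layer
deep = [6, 6, 6, 6, 4]             # five-layer MLP, cycles per layer

for name, net in (("shallow", shallow), ("deep", deep)):
    latency = total_latency_ns(net, clock_period_ns=5.0)  # 200 MHz clock
    verdict = "fits" if latency <= BUDGET_NS else "misses"
    print(f"{name}: {latency:.0f} ns -> {verdict} the {BUDGET_NS:.0f} ns budget")
```

This is also why hls4ml’s latency and initiation-interval knobs matter: folding hardware to save resources stretches the cycle count, and the trade is only viable while the total still clears the budget.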


Project Catapult

Microsoft’s Project Catapult is another example of how FPGAs are being used to specialise computing


June/July 2019 Scientific Computing World 9



