Supplement: Power


Powering the computational tsunami driven by AI, machine learning and big data


Unleash new levels of compute performance with vertical power delivery. Ajith Jain, vice president of the HPC Business Unit at Vicor, explains more.


As high-performance AI processor power levels continue to rise and core voltages decline with advanced process nodes, power system designers are challenged with managing ever-increasing power delivery network (PDN) impedance voltage drops, voltage gradients across high-current, low-voltage processor power pins, transient performance specifications and power loss. In the case of clustered computing, where tightly packed arrays of processors are used to increase the speed and performance of machine learning, PDN complexity rises significantly as current delivery must be done vertically from underneath the array.
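

To put the impedance challenge in perspective, the back-of-envelope sketch below uses assumed, illustrative figures (not taken from any specific design) to estimate how much of a low core-voltage budget a modest PDN resistance can consume at kiloamp currents.

# Illustrative PDN voltage-drop estimate (all values are assumptions for illustration).
core_voltage_v = 0.75          # assumed core rail voltage for a modern AI processor
peak_current_a = 1000.0        # assumed peak current demand
pdn_resistance_ohm = 100e-6    # assumed lumped PDN resistance (100 micro-ohms)

ir_drop_v = peak_current_a * pdn_resistance_ohm            # V = I * R
drop_as_pct_of_rail = 100.0 * ir_drop_v / core_voltage_v

print(f"IR drop: {ir_drop_v*1000:.0f} mV "
      f"({drop_as_pct_of_rail:.0f}% of a {core_voltage_v} V rail)")
# -> IR drop: 100 mV (13% of a 0.75 V rail)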


Designing a PDN using the Vicor Factorized Power Architecture (FPA) with current multipliers at the point-of-load, instead of legacy voltage-averaging techniques, allows a significant step up in performance. This is enabled by the characteristics of point-of-load (PoL) power components: high current density, reduced component count and, very importantly, flexibility in placement. PoL power components thus enable current to be delivered laterally and/or vertically to AI processor core(s) and memory rails, significantly minimising PDN impedances.
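

As a rough sketch of why a fixed-ratio current multiplier at the point of load helps, the example below uses an assumed transformation ratio and load values (not the specification of any Vicor part) to show how the upstream bus carries only a fraction of the load current, leaving the short PoL path to handle the kiloamp delivery.

# Sketch of a fixed-ratio current multiplier (transformation ratio K = Vout/Vin).
# Values are illustrative assumptions, not specifications of any product.
k = 1.0 / 48.0            # assumed transformation ratio
v_bus = 48.0              # upstream factorized bus voltage
i_load = 960.0            # assumed processor core current

v_out = v_bus * k         # ~1.0 V at the point of load
i_bus = i_load * k        # current drawn from the 48 V bus (ideal, losses ignored)

print(f"Point-of-load: {v_out:.2f} V at {i_load:.0f} A")
print(f"48 V bus side: {v_bus:.0f} V at {i_bus:.0f} A")
# The distribution network carries ~20 A instead of ~960 A, so its I^2*R loss
# falls by roughly K^2 (around 2300x in this assumed example).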


Understanding the peak current demands with today’s power delivery networks


Modern-day GPUs contain tens of billions of transistors, a number that grows with every new generation and product family and is made possible by smaller process node geometries. Processor performance improves with each generation, but at the price of exponentially increased power delivery demands. Figure 1 shows the dramatic increase in current requirements due to reduced transistor geometry and core voltages.
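

The relationship is simple but unforgiving: as package power climbs and core voltage falls, the current the PDN must deliver is their ratio. The sketch below uses assumed, round-number generations purely to illustrate the trend Figure 1 describes, not measured data.

# Illustrative trend only: assumed (power, core voltage) pairs, not measured data.
generations = [
    ("older node",    300.0, 1.8),    # 300 W at 1.8 V
    ("mid node",      500.0, 1.0),    # 500 W at 1.0 V
    ("advanced node", 1000.0, 0.75),  # 1 kW at 0.75 V
]

for name, power_w, vcore_v in generations:
    current_a = power_w / vcore_v   # I = P / V
    print(f"{name:13s}: {power_w:6.0f} W / {vcore_v:.2f} V = {current_a:6.0f} A")
# Current rises roughly 8x while power rises ~3x, because the voltage also fell.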


Figure 1 — In most cases, power delivery is now the limiting factor in computing performance as new processors consume ever-increasing currents. Power delivery entails not just the distribution of power but also the efficiency, size, cost and thermal performance.


Peak current demands of up to 2000A are now a typical requirement. In response to this power delivery challenge, some xPU companies are evaluating multi-rail options where the main core power rails are split into five or more lower-current power inputs. The PDN for each of these rails must still deliver a high current while also needing individual tight regulation, which puts pressure on the density of the PDN and its physical location on the accelerator card. To further add to this complexity, the highly dynamic nature of machine learning workloads results in very high di/dt transients lasting several microseconds. This creates stress across the PDN of a high-performance processor module or accelerator card.


The architecture of a typical power delivery network is highlighted in Figure 2.


Figure 2 — Typical high-performance processor PDN.


Best practices for optimizing the power delivery network

The work by the Open Compute Project (OCP) consortium has helped establish a framework of standards for designing rack- and card-based processor developments. The Open Rack Standard V2.2 defines a 48V server backplane and a 48V operating voltage for open accelerator modules (OAMs) used predominantly for artificial intelligence (AI) and machine learning workloads. To maintain compatibility with legacy 12V systems, the standard stipulates the ability to meet 12-to-48V and 48-to-12V requirements.
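

One way to see the appeal of a 48V backplane over a legacy 12V bus is distribution loss: for the same delivered power over the same copper, current falls by 4x and I²R loss by 16x. A minimal sketch with assumed example values:

# Assumed example: same delivered power and same distribution resistance.
power_w = 3000.0                 # assumed power delivered to an accelerator shelf
r_distribution_ohm = 2e-3        # assumed bus-bar / connector resistance (2 milliohms)

for v_bus in (12.0, 48.0):
    i = power_w / v_bus
    loss_w = i**2 * r_distribution_ohm
    print(f"{v_bus:4.0f} V bus: {i:6.1f} A, I^2*R loss = {loss_w:6.1f} W")
# 12 V: 250 A and ~125 W lost; 48 V: 62.5 A and ~7.8 W lost (a 16x reduction).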


Powering the processor at the point of load (PoL) is fraught with technical challenges. The previous section highlighted the downward trend of voltage scaling, the requirement for tight core-voltage tolerance and the upward trend of current consumption. At the board level, the impact of these factors manifests in multiple ways.
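

At these voltages the regulation budget is tiny in absolute terms. The sketch below, with an assumed tolerance and assumed rail voltages, shows why a deviation that would be negligible on a 12V rail can consume the entire allowance on a sub-1V core rail.

# Assumed figures for illustration: a +/-2% DC tolerance on two different rails.
tolerance_pct = 2.0

for rail_v in (12.0, 0.75):
    budget_mv = rail_v * tolerance_pct / 100.0 * 1000.0
    print(f"{rail_v:5.2f} V rail: +/-{budget_mv:6.1f} mV allowed deviation")
# 12 V rail: +/-240 mV; 0.75 V core rail: +/-15 mV - so a PDN drop of tens of
# millivolts at kiloamp currents can already exceed the core-rail budget.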


The peak current densities encountered are extreme for any PCB. Routing power paths capable of these huge loads demands careful attention. Highly dynamic workloads can create spiking voltage transients, which sophisticated processors find disruptive and potentially damaging. Yet a processor board also has to accommodate hundreds of passive components, memory devices and other ICs essential to its operation.
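

The transient problem can be sized with the same back-of-envelope approach: the voltage excursion is roughly L*di/dt across the loop inductance between the converter and the processor. The values below are assumptions for illustration only.

# V = L * di/dt across the PDN loop inductance (all values assumed for illustration).
delta_i_a = 500.0            # assumed load step
step_time_s = 1e-6           # assumed step duration (1 microsecond)
loop_inductance_h = 100e-12  # assumed 100 pH between converter output and processor pins

di_dt = delta_i_a / step_time_s            # 5e8 A/s
v_transient = loop_inductance_h * di_dt    # volts

print(f"di/dt = {di_dt:.1e} A/s -> transient excursion ~{v_transient*1000:.0f} mV")
# ~50 mV of excursion on a rail whose total tolerance may be only +/-15 mV -
# which is why converter placement (lateral or vertical) and low loop inductance matter.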


Then there are the I²R losses. Power path trace lengths need to be short; to achieve this, the power conversion modules should be placed close to the processor to reduce trace heating. The likelihood of PCB flexing due to the processor load currents and the processor's localized thermal gradients demands board stiffeners. The converter's power efficiency should also be as high as possible to prevent further thermal management challenges.
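

Both effects, trace heating and conversion loss, can be sized quickly. The sketch below uses assumed numbers purely to show how much heat a longer power path and a few points of converter efficiency represent at kilowatt-class loads.

# Assumed example values for illustration only.
load_current_a = 1000.0
load_power_w = 1000.0

# Trace heating: doubling the path resistance doubles the copper loss.
for r_path_ohm in (50e-6, 100e-6):   # 50 and 100 micro-ohm power paths
    heat_w = load_current_a**2 * r_path_ohm
    print(f"{r_path_ohm*1e6:.0f} uOhm path: {heat_w:.0f} W of copper heating")

# Converter efficiency: every lost point is dissipated as heat near the processor.
for eff in (0.95, 0.90):
    dissipation_w = load_power_w * (1.0 - eff) / eff
    print(f"{eff*100:.0f}% efficient conversion: ~{dissipation_w:.0f} W dissipated")
# 50 uOhm: 50 W, 100 uOhm: 100 W; 95% efficiency: ~53 W vs 90%: ~111 W.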



