How "Point of Load" VRs are no Longer at the Point of Load for CPUs and GPUs
The last Inch in power delivery to a server CPU/GPU is the longest distance to overcome.

Point of Load (PoL) was coined to describe a decentralized power architecture in which voltage regulators are placed (as the name suggests) near the load, rather than all regulated rails being distributed from a single centralized supply. Decentralized power and PoL work well for most applications. But for high-current devices, such as the latest CPUs and GPUs (collectively, XPUs), PoL falls short of truly being at the load: voltages have dropped and currents have risen to the point where the ~1" or more of distance, and its associated impedance, between the output of a VR and the XPU now limits performance in many designs. This last inch might as well be termed the last mile, given the power delivery challenges it creates.

The last inch represents current traveling not only through the PCB but also up through a socket to the XPU. In many server designs, the resistance of this last inch runs anywhere from 400 µΩ to 900 µΩ. For a 250 A XPU, that represents 25 W to 56 W of loss. Beyond the power loss, transient performance also degrades: for that same 250 A XPU, the last inch adds between 100 mV and 225 mV to the transient response. Without the ability to compensate by adding capacitance next to the XPU, the only recourse is to raise the operating voltage of the XPU, resulting in even more power dissipation. Ironically, optimizing and accounting for this last inch consumes the majority of the time in a server power design.
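The loss and droop figures above follow directly from Ohm's law. A quick sketch, using the article's 250 A and 400-900 µΩ numbers (the function names are mine, for illustration):

```python
# Illustrative check of the last-inch figures: I^2*R loss and I*R droop
# for a 250 A XPU across a 400-900 uOhm last-inch path resistance.

def last_inch_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Resistive power loss P = I^2 * R in the last-inch path."""
    return current_a ** 2 * resistance_ohm

def last_inch_droop_v(current_a: float, resistance_ohm: float) -> float:
    """Static I * R droop the last inch adds to the transient budget."""
    return current_a * resistance_ohm

I_XPU = 250.0                          # amps
for r_uohm in (400, 900):
    r = r_uohm * 1e-6                  # micro-ohms -> ohms
    print(f"{r_uohm} uOhm: loss = {last_inch_loss_w(I_XPU, r):.1f} W, "
          f"droop = {last_inch_droop_v(I_XPU, r) * 1e3:.0f} mV")
```

At 400 µΩ this gives 25 W and 100 mV; at 900 µΩ, roughly 56 W and 225 mV, matching the ranges quoted above.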

Moving the VR from the server PCB to inside the XPU socket would eliminate this last inch and enable a "power on package" scheme that provides true PoL power to the XPU. To date, several suppliers have shown examples of power resident within the XPU socket/package, or specifically on the XPU substrate within the socket. These designs have shown merit, but at the cost of a more complex XPU substrate design. They have also not demonstrated a dramatically lower current delivery from the motherboard to the socket, meaning the last inch remains a critical design aspect to deal with.

At Vicor, we also have to deal with the last inch when working on customer designs. However, the Vicor 48V Direct to CPU solution lends itself to a power-on-package scheme that eliminates the negative effects of this last inch, unlike schemes that attempt to adapt a conventional multi-phase VR topology.

The Vicor 48V Direct to CPU design uses two highly integrated power path products (the PRM and VTM), with only the VTM needing to be placed at the point of load. This allows the ~48V (and low-current) output of the PRM to be routed into the XPU package to the VTM, and provides the following specific design benefits:

  • Extremely low EMI of the VTM enables close proximity to an XPU
  • Reduction of over 95% of dedicated power pins - due to the higher-voltage, lower-current delivery from the PRM to the VTM
  • Single component-level placement (the VTM) on the XPU substrate
  • No modification of the substrate needed for magnetic structures or multiple component placements - the VTM is self-contained
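The ">95% fewer power pins" figure follows from simple current arithmetic. A sketch, with assumed numbers that are not from the article: a per-pin current limit of about 1 A, and the same power delivered either at a conventional ~1 V rail or at ~48 V into the on-package VTM:

```python
import math

# Sketch of the pin-count arithmetic behind the ">95% fewer power pins"
# bullet. Assumptions (mine, not the article's): ~1 A per socket power
# pin, and identical delivered power at either ~1 V or ~48 V.

def power_pins(power_w: float, rail_v: float, amps_per_pin: float = 1.0) -> int:
    """Minimum number of current-carrying pins for a given power and rail voltage."""
    return math.ceil(power_w / rail_v / amps_per_pin)

p = 250.0                         # watts delivered to the XPU (illustrative)
pins_1v = power_pins(p, 1.0)      # conventional ~1 V high-current delivery
pins_48v = power_pins(p, 48.0)    # 48 V delivery to the VTM
reduction = 1 - pins_48v / pins_1v
print(pins_1v, pins_48v, f"{reduction:.0%}")   # 250 6 98%
```

Raising the delivery voltage by ~48x cuts the current, and hence the current-limited pin count, by the same factor, which is where the >95% reduction comes from.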

As previously noted, eliminating the last inch:

  • Eliminates the server motherboard's resistive losses from high-current distribution
  • Improves transient performance and dramatically reduces the amount of capacitance needed, since compensation for the last inch is no longer required
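The capacitance benefit can be sketched to first order with assumed numbers (none of these are from the article): bulk capacitance must hold up the rail for the VR's response time, C = ΔI·Δt/ΔV, and the last inch's I·R droop eats into the allowable ΔV:

```python
# First-order sketch (assumed numbers) of how last-inch droop inflates
# the decoupling-capacitance budget: C = dI * dt / dV, where dV is the
# transient budget minus whatever the last inch's I*R droop consumes.

def bulk_cap_f(di_a: float, dt_s: float, dv_allow_v: float) -> float:
    """Capacitance needed to source a di_a load step for dt_s within dv_allow_v."""
    return di_a * dt_s / dv_allow_v

di, dt = 200.0, 5e-6               # 200 A load step, 5 us VR response (assumed)
budget = 0.150                     # 150 mV total transient budget (assumed)
droop = di * 400e-6                # 80 mV lost to a 400 uOhm last inch

c_without = bulk_cap_f(di, dt, budget)           # last inch eliminated
c_with = bulk_cap_f(di, dt, budget - droop)      # last inch present
print(f"{c_without * 1e3:.1f} mF vs {c_with * 1e3:.1f} mF")
```

With these assumed values, the last inch more than doubles the required bulk capacitance, even using the article's lower-bound 400 µΩ resistance.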

In addition, eliminating the last inch:

  • Enables lower copper weight on the server PCB - creating a cost savings
  • Decreases server board design time - no longer needing to design around the last inch
  • Frees up space around the outside of the XPU socket - due to the reduction in motherboard capacitance and elimination of the external VR - enabling more flexible motherboard routing and faster performance of high-speed signal interfaces to nearby memory or co-processors


References / Additional Reading

Google and Facebook share proposed new Open Rack Standard with 48-volt power architecture, Google Cloud Platform Blog, August 4, 2016

Efficiency: How we do it, Google Data Centers

Google, Intel Prep 48V Servers, EETimes, January 21, 2016

48V Direct to CPU - Solving the Challenges in CPU Power Delivery, Vicor Corporate Website

Introducing Zaius, Google and Rackspace's open server running IBM POWER9, Google Cloud Platform Blog, October 14, 2016

48 V: the new standard for high-density, power-efficient data centers, Electronic Products, August 2016

Optimized Power Delivery Architecture for Data Center Scale Server Application, Open Compute Project Summit, March 2016






