Bringing Reliable, Power-Efficient SoC Solutions to the Cloud
Tackling power challenges on high-performance switches, AI inferencing engines, CPUs, and CXL memory expanders.
The biggest challenge in cloud computing is delivering the performance needed to handle growing workloads while minimizing power consumption. Today, data centers consume about 2% of electricity worldwide, and that share is rising rapidly. Networking and server chips remain the primary factors determining data center performance, power consumption, and cost. Although designers leverage new process nodes, multi-die designs, and new architectures, each leap in performance keeps driving power up at the chip, system, and data center levels.
Data center processors face unique power challenges given their varied workloads. At one moment, these chips may sit at 10% utilization, then suddenly ramp to 50% because of a workload swing. These rapid shifts induce localized and chip-wide voltage droops and simultaneous switching effects, creating ripples across the power network. To prevent timing failures, designers may add margin to their voltage budget. However, because dynamic power scales with the square of supply voltage, this guardband carries a quadratic power penalty.
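The quadratic cost of voltage guardband follows from the standard dynamic-power relation P ≈ C·V²·f. A minimal sketch with illustrative numbers (the capacitance, voltage, and frequency values below are assumptions, not figures from this article):

```python
def dynamic_power(c_eff, voltage, freq):
    """Dynamic switching power: P = C_eff * V^2 * f (watts)."""
    return c_eff * voltage ** 2 * freq

# Assumed baseline: 1 nF effective switched capacitance, 0.75 V, 2 GHz
base = dynamic_power(1e-9, 0.75, 2e9)

# Adding a 10% voltage guardband to ride through worst-case droops
margined = dynamic_power(1e-9, 0.75 * 1.10, 2e9)

print(f"power increase: {margined / base - 1:.1%}")  # prints "power increase: 21.0%"
```

A 10% voltage margin costs 1.1² − 1 ≈ 21% extra dynamic power at the same frequency, which is why reclaiming even a few percent of margin matters at data center scale.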
One way to reclaim power efficiency while improving performance is to address droop directly through adaptive clocking, which scales frequency during voltage fluctuations. The technique requires a highly responsive programmable clocking solution and a droop detection mechanism. The latter may be optional if architects have deterministic workloads that can be managed with prior knowledge, which is rare for merchant silicon providers. In either case, adaptive clocking lets design teams reclaim voltage margin, reducing system power while maintaining or increasing system performance.
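The control loop described above can be sketched in a few lines. This is a hypothetical first-order model, not Aeonic's implementation: the threshold, nominal operating point, and linear frequency-voltage scaling are all assumptions for illustration.

```python
NOMINAL_FREQ_GHZ = 2.0        # assumed nominal clock
NOMINAL_VOLTAGE_V = 0.75      # assumed nominal supply
DROOP_THRESHOLD_V = 0.70      # assumed droop-detector trip point

# Assumed first-order model: max safe frequency scales linearly with voltage
FREQ_PER_VOLT = NOMINAL_FREQ_GHZ / NOMINAL_VOLTAGE_V

def adaptive_clock(voltage_v):
    """Return the clock frequency (GHz) for a sampled supply voltage.

    When the detector flags a droop, the clock period is stretched so
    critical paths still meet timing at the lower voltage; the nominal
    frequency resumes once the supply recovers.
    """
    if voltage_v < DROOP_THRESHOLD_V:
        return FREQ_PER_VOLT * voltage_v  # scale down through the droop
    return NOMINAL_FREQ_GHZ

# A workload swing causes a transient droop, then recovery
for v in (0.75, 0.68, 0.66, 0.71, 0.75):
    print(f"V={v:.2f} V -> f={adaptive_clock(v):.2f} GHz")
```

Because the clock tracks the droop instead of a worst-case guardband, the voltage budget no longer has to cover the deepest transient, which is the margin that adaptive clocking reclaims.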
The Aeonic™ platform resolves these issues by providing application-optimized digital components that intelligently orchestrate timing across an SoC. Aeonic enables power-efficient, high-performance solutions for cloud computing.