A New Era of AI Silicon
ENABLING THE NEXT GENERATION OF ENERGY EFFICIENT AI SYSTEMS
Power-Efficient SoCs Are Essential to Cost-Effective and Scalable AI
Reducing Inferencing and Training Power to Expedite the Proliferation of AI in the Datacenter and on the Edge
AI and ML workloads are growing in popularity across many industries, which leads to a great deal of variability in operation types and composition. Additionally, language and vision models are growing in size and complexity at an exponential rate. This uncertainty and rapid model growth are driving AI silicon designers to scale performance at unprecedented rates, with significant power and time-to-market implications.
While specialized for a narrower set of computational operations, AI/ML chips face power challenges similar to those of CPUs and GPUs. Varying customer workloads create unpredictable voltage droops that cause timing failures. As designers build larger synchronous regions, simultaneous switching noise creates larger ripples in the power delivery network. And the rate of compute growth is expanding top-level clocking power consumption. Each of these power-related issues has a direct impact on BOM costs, system power budgets, and total cost of ownership in terms of power, cooling, and deployed density.
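The clocking-power point can be made concrete with the standard dynamic CMOS power relation P = α·C·V²·f. The sketch below uses illustrative, assumed values for clock-network capacitance, voltage, and frequency (none come from this article) to show why larger synchronous regions and faster clocks drive clock power up:

```python
# Dynamic CMOS power: P = alpha * C * V^2 * f
#   alpha : activity factor (clock nets toggle every cycle, so alpha ~= 1)
#   C     : switched capacitance of the clock network (assumed value)
#   V, f  : supply voltage and clock frequency (assumed values)

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Return dynamic switching power in watts."""
    return alpha * c_farads * v_volts**2 * f_hz

# Illustrative numbers, not measured data:
p_small = dynamic_power(1.0, 1e-9, 0.8, 2e9)  # 1 nF clock net -> 1.28 W
p_large = dynamic_power(1.0, 2e-9, 0.8, 2e9)  # doubling the synchronous
                                              # region's clock capacitance
                                              # doubles clock power -> 2.56 W
print(p_small, p_large)
```

Because α ≈ 1 for clock nets (they switch every cycle, unlike data logic), the clock network is often one of the largest single consumers of dynamic power on a large SoC, which is why clock-distribution efficiency matters so much at these scales.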
In February 2020, Microsoft announced its 17 billion parameter language model T-NLG. Less than one year later, Google shattered that record with its 1.6 trillion parameter model Switch-C, almost a 100x increase in model size. Hardware, however, is not growing as fast. AI silicon companies are constantly racing against time: as they release and mature one generation of silicon, software takes another immense leap forward, generating renewed focus on improving time-to-market.
The Movellus Aeonic™ product portfolio provides an application-optimized clocking solution for AI silicon platforms that addresses power inefficiencies due to static and dynamic silicon variation. With the ability to adjust for voltage droops, reduce clock power consumption by 50-75% versus a clock mesh, and cut timing closure effort by dynamically compensating for skew and on-chip variation, the Movellus Aeonic portfolio enables AI to scale efficiently.
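One way to see the value of droop adjustment is a back-of-the-envelope throughput comparison between static margining (always clocking at the worst-case droop frequency) and adaptive clocking that slows down only while a droop is actually present. Every number below is an illustrative assumption, not a Movellus specification:

```python
# Static guardband vs. adaptive clocking: average throughput comparison.
# All inputs are illustrative assumptions.

f_nominal = 2.0e9   # GHz-class nominal clock (assumed)
droop_frac = 0.10   # 10% worst-case supply droop (assumed)
droop_duty = 0.02   # droops present 2% of the time (assumed)

# Assume achievable frequency scales roughly linearly with voltage.
f_droop = f_nominal * (1 - droop_frac)        # droop-safe frequency: 1.8 GHz

# Static margining: always run at the droop-safe frequency.
f_static = f_droop

# Adaptive clocking: run at nominal, slow down only during droop events.
f_adaptive = (1 - droop_duty) * f_nominal + droop_duty * f_droop

gain = f_adaptive / f_static - 1
print(f"{gain:.1%} average-throughput gain")
```

Under these assumed numbers the adaptive scheme recovers most of the ~10% static guardband; equivalently, the same throughput could be delivered at a lower voltage, which is where the power savings come from.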