Taking AI to the Edge
Enabling a New Era of Edge Computing
Power-Efficient SoCs Are Essential to Edge AI
Edge computing refers to the processing and storage that happen at or near devices, far from the cloud.
Locating the processing closer to the data source improves latency, bandwidth, security, privacy, and cost. The availability of cost-effective AI inference is expanding edge computing, because inference enables more capable and useful local processing of data.
Edge AI brings high-performance voice, video, and vision interfaces to set-top boxes, smart displays, soundbars, smart speakers, home security systems, and smart appliances. It can enable 5G private wireless networks to supercharge the industrial internet of things. Edge computing is essential for advanced driver-assistance systems (ADAS) and autonomous vehicles, where the latency incurred by sending data to and from the cloud would be prohibitive, especially for safety-critical functions.
With AI efficiency measured in tera-operations per second per watt (TOPS/W), adding AI to the edge can significantly increase power consumption. Implementing an intelligent clocking network can improve system performance and reduce power consumption at the SoC level, addressing efficiency on multiple fronts.
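To make the TOPS/W metric concrete, the short sketch below compares two hypothetical accelerators; the names and figures are illustrative assumptions, not data from any real product.

```python
# Hypothetical figures for two edge AI accelerators (illustrative only).
designs = {
    "accelerator_a": {"tops": 4.0, "watts": 2.0},   # 4 TOPS at 2 W
    "accelerator_b": {"tops": 10.0, "watts": 8.0},  # 10 TOPS at 8 W
}

for name, d in designs.items():
    efficiency = d["tops"] / d["watts"]  # TOPS per watt
    print(f"{name}: {efficiency:.2f} TOPS/W")
```

Note that the design with the higher raw throughput (accelerator_b) is the less efficient one per watt, which is why edge SoCs are judged on TOPS/W rather than peak TOPS alone.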
Maestro provides an all-digital, application-optimized clocking solution that addresses inefficiencies due to on-chip variation, jitter, clock skew, setup and hold violations, peak current, and switching noise. It can optimize clocking based on workloads and operating conditions. Additionally, it allows chip designers to speed up or slow down regions of the silicon to optimize power, performance, or both.
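The idea of speeding up or slowing down regions of the silicon based on workload can be modeled as a simple frequency-selection policy. The sketch below is a minimal illustration of that concept only; the region names, utilization values, frequency steps, and headroom margin are all assumptions for the example and do not represent Maestro's actual interface or algorithm.

```python
# Illustrative model of workload-driven per-region clock scaling.
# All names and numbers are hypothetical; real clocking IP operates
# in hardware with far finer-grained control.

FREQ_STEPS_MHZ = [200, 400, 800, 1200]  # allowed clock frequencies

def pick_frequency(utilization: float) -> int:
    """Map a region's utilization (0.0-1.0, relative to the maximum
    clock) to the lowest frequency step that still leaves ~20% headroom."""
    demand_mhz = utilization * FREQ_STEPS_MHZ[-1]
    for freq in FREQ_STEPS_MHZ:
        if demand_mhz <= freq * 0.8:
            return freq
    return FREQ_STEPS_MHZ[-1]

# Busy regions get a faster clock; idle regions are slowed to save power.
regions = {"npu": 0.9, "video": 0.35, "io": 0.05}
plan = {name: pick_frequency(u) for name, u in regions.items()}
print(plan)  # → {'npu': 1200, 'video': 800, 'io': 200}
```

The design choice worth noting is that each region is clocked independently: the heavily loaded NPU runs at full speed while lightly loaded blocks are slowed, which is the power/performance trade-off the text describes.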