eFPGA technology is starting to make waves by providing flexibility for AI workloads while enabling low power consumption in portable applications.
The embedded FPGA (eFPGA) business is starting to make waves by providing flexibility and support for artificial intelligence (AI) workloads. By integrating FPGA functionality into system-on-chip (SoC) designs, eFPGAs allow designers to define the amount of FPGA logic, memory, and DSP processing capability they need.
Figure 1: Compared to standalone FPGA solutions, eFPGAs can reduce device costs by up to 90% and power consumption by up to 75%. Source: Achronix Semiconductor
Two recent industry announcements show how eFPGA's ability to add new product features and tailor SoCs to specific submarkets or adjacent markets is making it a viable substitute for discrete FPGAs. First, QuickLogic announced that it had won a $2 million contract to supply eFPGA IP to an unidentified customer.
On the same day, 1 September 2021, eFPGA IP supplier Achronix Semiconductor unveiled a partnership with Signoff Semiconductors, a spec-to-silicon FPGA and ASIC design service provider based in Bangalore, India. As a result, Signoff will have direct access to Achronix's silicon, IP, and support services. The design services company plans to develop AI and deep-learning accelerators, inferencing solutions, and edge IoT processors using Achronix's FPGA and eFPGA IP technology.
Earlier, in March 2021, Achronix announced that 10 million Speedcore eFPGA IP cores had shipped in custom ASICs. The Speedcore eFPGA IP—which uses a design process similar to a standard ASIC IP block—is optimized for 5G wireless infrastructure, networking, computational storage, and advanced driver assistance system (ADAS) chips.
Figure 2: The ArcticPro 2 eFPGA architecture uses a hierarchical routing scheme that strikes the optimum performance and power consumption balance needed for computation-heavy, battery-powered, or other power-sensitive products. Source: QuickLogic
As QuickLogic's CEO Brian Faith puts it, eFPGA implementation is very low risk for SoC designs; the company's eFPGA IP has already been implemented in numerous SoCs, MCUs, and discrete FPGAs. And while the flexibility inherent in programmable logic is well suited to accelerating AI applications, discrete FPGAs are often too expensive for volume applications. Here, integrating eFPGAs into SoCs reduces both BOM cost and power consumption.
The inherently low power consumption of eFPGAs also makes them suitable for a wide range of applications, including handheld and wearable devices as well as IoT endpoints. For AI-centric designs, eFPGA IP offers the option to integrate fixed-function blocks such as embedded RAM and fracturable multiply-accumulate (MAC) units to efficiently implement hardware accelerators for neural networks and other computationally intensive AI/ML applications.
Figure 3: FPGA content has reached wearable designs, thanks to eFPGA technology. Source: QuickLogic
To ensure that chipmakers can seamlessly integrate eFPGA IP into their SoC designs, QuickLogic has qualified its ArcticPro 2 eFPGA IP on the GlobalFoundries (GF) 22FDX platform. Next, it has made its ArcticPro 3 eFPGA IP available on Samsung's 28 nm FD-SOI process.
This article was originally published on EDN.
Majeed Ahmad, Editor-in-Chief of EDN and Planet Analog, has covered the electronics design industry for more than two decades.