The semiconductor industry has entered a new era of chip design, moving from monolithic processors to multi-tile or chiplet packages. This kind of disaggregated architecture promises greater flexibility and a time-to-market advantage while maintaining cost-effectiveness. It is a big shift not only in client devices and server processors but also, I believe, in the automotive sector, where semiconductors are enabling a huge transformation.
Intel’s disaggregation journey: The move towards multi-chiplet packages
Disaggregation has been a nearly decade-long journey, with innovations in three key technology areas – process, packaging, and architecture. At Intel, the journey began with Haswell, where the processor and supporting chipset were built as separate dies and combined in a single package. With Kaby Lake-G, we advanced to 2.5D EMIB (Embedded Multi-die Interconnect Bridge) packaging and external foundry silicon for graphics. It also involved integrating third-party IP inside our package, which allowed us to create a smaller form factor for high-performance mobile gaming.
Intel’s EMIB is an approach to in-package high-density interconnect of heterogeneous chips. Instead of using a large silicon interposer typically found in other approaches, EMIB uses a very small bridge die with multiple routing layers. It essentially enables the highest interconnect density exactly where it is needed, and a standard packaging substrate can be used for the rest of the interconnect. EMIB technology leads the industry as the first 2.5D embedded bridge solution.
Lakefield introduced the first-generation Foveros 3D packaging technology, along with a hybrid architecture and multiple process nodes. Ponte Vecchio integrated 47 different tiles in a single package, using second-generation Foveros and EMIB and incorporating multiple process nodes from different fabs.
Meteor Lake, slated for a 2023 release, will be Intel’s first mass-market client SoC with a disaggregated architecture. On the server side, Sapphire Rapids will be the first Xeon product with a modular, tiled architecture.
Advantages of chiplet-based architecture
While process node breakthroughs lead to better performance and lower power consumption on monolithic SoCs, they also come with increasing complexity, higher costs and longer design cycles. That’s why the industry is looking towards chiplet-based architecture.
A chiplet is a piece of silicon designed to integrate with other chiplets through package-level integration, typically using advanced packaging and standardized interfaces. Chiplets are becoming important in semiconductor design because of the explosion in the types of computing and workloads, and several architectures are emerging to support these computing models.
Adopting the chiplet or tile-based approach allows chipmakers to match each transistor type to its optimal process, as different transistors work best on different process nodes. Splitting the SoC into smaller dies also sidesteps physical limitations such as the reticle size limit, which is determined by the lithography equipment used in manufacturing (a rough illustration follows below). The modular nature of chiplets lends itself to reuse across projects and enables designs tailored to the needs of specific segments. In terms of performance, modularity allows us to increase core counts by adding more chiplets to the processor package. Last but not least, breaking the design down into separate IP blocks allows each one to be innovated on (and iterated) at a faster pace.
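To make the reticle-size point concrete, here is a minimal back-of-the-envelope sketch in Python. The one hard number it uses is the standard lithography reticle field of roughly 26 mm x 33 mm (about 858 mm²); the 1,200 mm² design and the three-tile split are hypothetical values chosen only to illustrate the idea.

```python
# Back-of-the-envelope check of the reticle-size limit (illustrative only).
# A standard lithography reticle field is roughly 26 mm x 33 mm (~858 mm^2),
# which caps the area of any monolithic die.
RETICLE_LIMIT_MM2 = 26 * 33  # ~858 mm^2

def fits_reticle(die_area_mm2: float) -> bool:
    """Return True if a die of this area fits in a single reticle field."""
    return die_area_mm2 <= RETICLE_LIMIT_MM2

# Hypothetical 1,200 mm^2 SoC: too large to build as one die...
print(fits_reticle(1200))                                   # False
# ...but buildable as, say, three ~400 mm^2 tiles in one package.
print(all(fits_reticle(area) for area in (400, 400, 400)))  # True
```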
UCIe: An industry initiative to enable the chiplet ecosystem
At Intel, we are building foundry services to make systems composed of interconnected chiplets from different companies. We envision this new type of foundry not only for chiplets, but also for the packaging of entire chiplet-based systems. Building on our work on the open Advanced Interface Bus (AIB), Intel developed UCIe (Universal Chiplet Interconnect Express) to define the way chiplets are connected.
UCIe is an open specification, governed by an industry consortium, that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level. Over 80 companies have joined the consortium, and Intel is committed to facilitating this chiplet ecosystem. This is a critical step in the creation of unified standards for interoperable chiplets, which will ultimately allow for the next generation of technological innovations.
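To give a rough sense of the interconnect this enables, the sketch below computes raw per-direction bandwidth for a single UCIe module from the headline UCIe 1.0 figures (16 lanes per standard-package module, 64 lanes per advanced-package module, lane rates up to 32 GT/s). It is purely illustrative and ignores protocol overhead and sideband signals.

```python
# Raw per-direction bandwidth of a single UCIe module (illustrative only).
# Headline UCIe 1.0 figures: standard package = 16 lanes per module,
# advanced package = 64 lanes per module, lane rates from 4 to 32 GT/s.
# CRC, flit framing and other protocol overhead are ignored here.
LANES_PER_MODULE = {"standard": 16, "advanced": 64}

def raw_bandwidth_gb_per_s(package: str, lane_rate_gt_s: float) -> float:
    """Lanes x lane rate, converted from Gb/s to GB/s, one direction."""
    return LANES_PER_MODULE[package] * lane_rate_gt_s / 8

for package in ("standard", "advanced"):
    for rate in (4, 16, 32):
        bw = raw_bandwidth_gb_per_s(package, rate)
        print(f"{package:8s} @ {rate:2d} GT/s -> {bw:6.1f} GB/s per direction")
# e.g. an advanced-package module at 32 GT/s -> 256 GB/s per direction.
```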
Intel Foundry Services (IFS) is enabling this ecosystem with its open chiplet platform, which combines Intel’s leadership packaging capabilities and IP optimized for IFS’s advanced process technologies with integration and validation services that accelerate customers’ time-to-market.
The need for chiplets in high-performance automotive computing
I believe chiplet-based architecture is a winning approach for enabling innovation and disruption in the automotive sector, which is in the midst of a rapid transformation. Marked by the acceleration towards autonomous, connected and electric vehicles, this sector will drive significant growth and innovation in semiconductors. Automotive is a key focus area for Intel Foundry Services, and our leadership in chiplet-based solutions will enable us to deliver unparalleled value to customers in their transformation and success.
Mirza Jahan, a senior principal engineer in the automotive solutions group, Intel Foundry Services, says, “Gone are the days when automotive computing was based on mature technology nodes like 65nm or 90nm etc. With the advent of ADAS, Level 3 and Level 4 AD, in-vehicle infotainment, connected car etc., computing demands have significantly increased, making the car a server-on-wheels. This high-performance computing requires advanced process nodes.”
However, monolithic SoCs built on advanced nodes come with increased technology costs and, more importantly, a higher defectivity profile than mature nodes. The larger the die, the higher the DPM (defects per million), as the yield sketch below illustrates. This imposes significant constraints, given that automotive is a safety- and reliability-sensitive segment with a zero-DPM target. Larger dies are also limited by reticle size, whereas chiplets give automotive customers the ability to modularize their design.
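To see why die size matters so much for defectivity, here is a minimal sketch using the classic Poisson yield model, yield = exp(-D0 × A). The defect density and die areas are assumed values chosen only to show the trend, not real process data, and the comparison deliberately ignores packaging cost and the extra area spent on die-to-die interfaces.

```python
import math

# Classic Poisson yield model: yield = exp(-D0 * A), where D0 is the defect
# density (defects per mm^2) and A is the die area (mm^2).
# D0 and the die areas below are assumed, illustrative values only.
D0 = 0.001  # hypothetical defect density, defects per mm^2

def poisson_yield(area_mm2: float, d0: float = D0) -> float:
    return math.exp(-d0 * area_mm2)

mono_area = 800.0  # one large monolithic die
tile_area = 200.0  # the same logic split into four 200 mm^2 tiles

# Each tile is tested (and, if bad, discarded) before packaging, so the
# relevant figure is per-die yield rather than the product over all tiles.
print(f"monolithic {mono_area:.0f} mm^2 die yield: {poisson_yield(mono_area):.1%}")  # ~44.9%
print(f"per-tile   {tile_area:.0f} mm^2 die yield: {poisson_yield(tile_area):.1%}")  # ~81.9%
```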
“Automotive workloads are rapidly changing, with the need to meet requirements for high reliability, high performance, and low latency. The chiplet approach offers the advantage of keeping stable IPs requiring high reliability on older nodes, perhaps even utilizing pre-validated chiplets. At the same time, higher performance and lower latency requirements benefit from being on newer silicon nodes that can be mixed and matched with established older nodes,” says Ramune Nagisetty, senior principal engineer, Intel Technology Development.
“With a modular chiplet-based design, OEMs have the option of choosing an advanced node only for the high-performance blocks like CPU or GPU while the other components can be on a mature node, thus significantly reducing technology cost. This also results in cost savings in IP development, as customers can reuse IP chiplets available in older nodes,” adds Mirza.
One benefit that automotive companies are especially excited about is the flexibility of chiplets. With the monolithic approach, one typically has to design different SoCs for the economy, mid-tier and luxury vehicle segments. With a modular product, each segment can be served simply by adding more chiplets. This not only saves cost but also enables faster design and time-to-market.
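A minimal sketch of that idea, with entirely hypothetical tile names, nodes and counts: the same I/O and safety tiles are reused across every SKU, and the segments differ only in how many compute tiles are added to the package.

```python
from dataclasses import dataclass

# Hypothetical tile catalog and SKUs, purely to illustrate how one modular
# design can cover several vehicle segments by varying the tile mix.
@dataclass(frozen=True)
class Tile:
    name: str
    node: str    # process node the tile is built on
    cores: int   # compute cores contributed by the tile (0 for I/O, safety)

IO_TILE      = Tile("io",      "mature node",   0)
SAFETY_TILE  = Tile("safety",  "mature node",   0)
COMPUTE_TILE = Tile("compute", "advanced node", 8)

SKUS = {
    "economy":  [IO_TILE, SAFETY_TILE, COMPUTE_TILE],
    "mid-tier": [IO_TILE, SAFETY_TILE] + [COMPUTE_TILE] * 2,
    "luxury":   [IO_TILE, SAFETY_TILE] + [COMPUTE_TILE] * 4,
}

for segment, tiles in SKUS.items():
    total_cores = sum(t.cores for t in tiles)
    print(f"{segment:8s}: {len(tiles)} tiles, {total_cores} compute cores")
```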
Chiplets are the way forward
The momentum towards chiplets represents an industry inflection point and is a key pillar of Intel’s IDM 2.0 strategy. The integration of multiple chiplets in a package to deliver product innovation across market segments clearly defines the future of the semiconductor industry. We are excited to be at the forefront of the adoption of this transformative architecture.