Intel gives the first details about the PVC successor Rialto Bridge – Hardwareluxx


At the International Supercomputing Conference (ISC), Intel revealed some previously undisclosed details of its roadmap. Specifically, the name and first details of the successor to the Xe-HPC accelerator Ponte Vecchio (PVC) were announced: it will be called Rialto Bridge, continuing the tradition of naming after bridges in Venice. Beyond the name, there were also some interesting first insights into the technical details.

The Ponte Vecchio accelerators have been announced for the second half of 2022. Dozens, hundreds, or even thousands of them are to be used in supercomputers, but they will also be available from OEMs and system integrators (SIs). So far, however, we have not seen PVC in the wild.

Intel sees Ponte Vecchio as a "tock", because it is the first product in a completely new series – in terms of both architecture and, of course, packaging. The successor Rialto Bridge is regarded as a "tick" that will bring only minor improvements while retaining the basic structure. Intel would not be more specific at this point; only the number of Xe cores is given as 160, up from 128 for Ponte Vecchio. Intel also plans some changes that will rebalance Rialto Bridge between how the individual chiplets are manufactured and how they are packaged. While the individual chiplets – or tiles, as Intel calls them – are relatively easy to manufacture, Intel has shifted the complexity into the packaging.

In addition to the larger number of Xe cores – it is unknown whether these will be based on the second generation of the Xe-HPC architecture – Intel cites expanded I/O capabilities and a new form factor with the OAM v2 cards (OCP Accelerator Module). These are intended to enable a higher power consumption for the chips mounted on them.

With Rialto Bridge, Intel will use different process nodes and different manufacturers, so not all chips will be made in-house here either. TSMC will probably be involved again.

Intel expects Rialto Bridge to be introduced in mid-2023. Whether this schedule can be kept, given the current delays with Ponte Vecchio, remains to be seen. The chips should fit into, or be compatible with, existing OAM boards. However, Intel envisages higher power consumption for Rialto Bridge, which of course must be taken into account when swapping chips. Rialto Bridge will therefore only reach its full performance in the OAM v2 format.

At the ISC we had the opportunity to take a closer look at a Ponte Vecchio package. On the one hand, we saw the bare package with its 63 tiles – 41 of which are active. But we also saw the complete OAM module, which is significantly less complex than NVIDIA's SXM4 modules. The engineering sample is labeled "PVC 2T OAM 600W", which confirms both the TDP rating of 600 W and the design consisting of two interconnected compute areas. Originally, Intel was apparently planning a 4T variant – i.e. double the expansion level of what we see here.

The OAM module is then placed on a carrier board that holds four modules as shown here. However, Intel is also planning such boards with eight modules.

CPUs and GPUs are merged

The "tick" in the form of Rialto Bridge will be followed by Falcon Shores, a completely new architecture – or rather a completely new structure for such accelerators. Here, x86 and Xe cores can be flexibly combined, with the ratio of x86 to Xe cores determined by the respective application. Intel showed Falcon Shores chips consisting entirely of x86 cores, chips with exclusively Xe cores, but also mixed configurations.

Intel ISC22 keynote briefing

The tiles will be manufactured on Intel 20A or smaller nodes. Packaging will use Foveros Direct or Co-EMIB. Intel expects the performance-per-watt ratio to increase by a factor of five or more; power density as well as memory bandwidth and capacity should increase by at least the same factor. However, these performance and efficiency gains cannot yet be translated directly into practice, since Intel is mixing improvements from and through both the x86 and Xe cores here.
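As a rough illustration of what such a factor means (with an assumed baseline figure, not an Intel number), a five-fold performance-per-watt improvement lets a fixed workload be completed with one fifth of the energy:

```python
# Hypothetical illustration of a 5x performance-per-watt gain.
# The baseline value is an assumption for this example, not an Intel figure.
baseline_gflops_per_watt = 10.0
factor = 5
improved_gflops_per_watt = baseline_gflops_per_watt * factor

# Energy (in joules) needed to run a fixed 1-exaFLOP workload:
workload_flops = 1e18
energy_before = workload_flops / (baseline_gflops_per_watt * 1e9)
energy_after = workload_flops / (improved_gflops_per_watt * 1e9)

print(energy_before, energy_after)  # the same work at one fifth of the energy
```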

The first Falcon Shores chips are not expected until 2024. It will also be interesting to see how Intel continues its Xeon lineup in parallel. Granite Rapids, with performance cores, will follow the current Sapphire Rapids generation. At the same time, Intel is splitting the Xeon lineup in two and will offer Sierra Forest Xeon processors built purely from efficiency cores.

Little news on Sapphire Rapids

There is very little new to report about the next Xeon generation, aka Sapphire Rapids. Intel showed only a few benchmarks intended to underline the performance increase enabled by the 64 GB of HBM2E.

Intel ISC22 keynote briefing

Intel didn't want to be pinned down to a specific date for Sapphire Rapids; there is still only talk of the second half of 2022. The HBM versions will again become available somewhat later than the standard versions, although the gap appears to have been shortened – without being specified more precisely.

We were also able to photograph the Xeon processors based on Sapphire Rapids at the ISC.

Intel also focuses on the efficiency of the hardware

A few days ago, Frontier impressively took first place in the Top500 list. With an energy efficiency of 52.23 GFLOPS/W for the full system – and 62.68 GFLOPS/W for the smaller Frontier TDS partition – the system sets new standards in this segment. This is where Intel wants to step in, because efficiency has stagnated in this area for several years. By 2030, data centers are expected to account for 3% of total electricity consumption – in the worst case even 7%.
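For context, such an efficiency figure is simply the HPL score divided by the measured power draw. A minimal sketch using Frontier's publicly listed June 2022 figures (Rmax of roughly 1.102 EFLOPS at roughly 21.1 MW):

```python
def gflops_per_watt(rmax_flops: float, power_watts: float) -> float:
    """Energy efficiency as reported by the Green500 list: GFLOPS per watt."""
    return rmax_flops / power_watts / 1e9

# Frontier's approximate figures from the June 2022 Top500 list:
efficiency = gflops_per_watt(rmax_flops=1.102e18, power_watts=21.1e6)
print(f"{efficiency:.2f} GFLOPS/W")  # ≈ 52.23
```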

Hardware, software, and the operation of the data center are the parameters that need to be adjusted. Here, too, Intel referenced the current number one, which runs its processors and GPUs at up to 85 °C with warm-water cooling. This increases efficiency, since no power-hungry cooling is required. On the other hand, AMD and the OLCF, as operator of the supercomputer, leave some performance on the table.


