Intel foresees the CXL bus enabling rack-level disaggregation of compute, memory, accelerators, storage and network processors, with persistent memory on the CXL bus as well.
Intel’s presenter was its Fellow and Director of I/O Technology and Standards, Dr Debendra Das Sharma, who opened his session with a look at Load-Store IO. This form of IO — loading and storing data directly into memory locations — matters because server memory capacity needs are rising: AI, machine learning and other data-intensive applications such as genomics and big data analytics all demand that more data be processed faster.
Load-Store IO is faster — much faster — than network IO, which transfers packets or frames of data, and it has typically been confined to the inside of a server, running over a CPU-level interconnect. Das Sharma said Load-Store IO physical layer (PHY) latencies are less than 10ns, whereas fast networking PHY latencies range from more than 20ns to more than 100ns.
The aim is to extend Load-Store IO beyond the server, and the way to do that is to build on the PCIe Gen-5 bus. The memory in connected devices can then be treated as cached, write-back memory by the server processor, with no DMA transfer needed to move data between the devices and the physical server’s CPU-attached memory. An Intel slide shows this: