CXL: A Journey Into the Kernel
Room A | Thu 22 Jan 4:40 p.m.–5:25 p.m.
Presented by
Peter Waskiewicz Jr (PJ)
Peter Waskiewicz Jr (PJ) is a Senior Software Engineer in Jump Trading’s Linux engineering division, focusing on Linux kernel and device driver development and embedded systems.
Prior to Jump Trading, PJ spent the majority of his career at Intel, where he was responsible for writing and maintaining several of the Intel Ethernet Linux device drivers, and developing Linux kernel changes for scaling to 10GbE and beyond. PJ was also a Senior Principal Engineer at NetApp in the SolidFire division, where he was the chief Linux kernel and networking architect for the SolidFire scale-out cloud storage platform. He is also an adjunct faculty member at Portland State University, teaching OS and Device Drivers in the Electrical and Computer Engineering Department.
PJ also sits on the boards of both the Netdev Foundation, part of the Linux Foundation, and the NetDev Society.
Abstract
Compute eXpress Link, or CXL, is a new bus interconnect built on top of the PCIe physical layer. While work on the specifications and initial releases has been ongoing, uptake has been slow. The protocols are robust, which makes device creation complicated. The industry focus has mainly been on CXL-based memory expansion devices, aka CXL.mem, which are relatively easy to build and support.
Where CXL can truly showcase its efficiency and latency gains over traditional PCIe devices isn't with CXL.mem expansion devices, but with accelerators that combine CXL.mem and CXL.cache. Building such an accelerator device is complex, and it requires a number of platform and OS-level pieces to be present and functioning.
The focus of this talk is to cover the recent efforts in the CXL Linux kernel upstream community to enable these types of accelerator devices. These devices, known as CXL Type 2 devices, can expose device memory to the host and remain cache-coherent with the host CPUs. The talk will touch on the challenges of dismantling the existing CXL.mem-focused CXL core in the kernel and exposing it to allow custom drivers to drive these new bespoke Type 2 devices.
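As a rough illustration of what that could look like from a driver author's point of view, here is a minimal sketch of a PCI driver skeleton for a hypothetical Type 2 accelerator. The vendor/device IDs are placeholders, and the call into the CXL core is left as a comment, since the exact interface the core will export to vendor drivers is precisely what the upstream work described above is still settling; only standard PCI driver APIs are used here.

/*
 * Hypothetical CXL Type 2 accelerator driver skeleton (illustration only).
 * The CXL-core step is intentionally left as a comment: no stable upstream
 * API for accelerator (Type 2) drivers is assumed here.
 */
#include <linux/module.h>
#include <linux/pci.h>

struct my_accel {
	struct pci_dev *pdev;
};

static int my_accel_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct my_accel *accel;
	int rc;

	rc = pcim_enable_device(pdev);
	if (rc)
		return rc;

	accel = devm_kzalloc(&pdev->dev, sizeof(*accel), GFP_KERNEL);
	if (!accel)
		return -ENOMEM;
	accel->pdev = pdev;

	/*
	 * Hypothetical step: ask the CXL core to bring up the endpoint,
	 * claim an HDM decoder for the device's coherent memory, and make
	 * that memory usable by the host -- the part of the CXL core the
	 * talk describes being opened up to custom Type 2 drivers.
	 */

	pci_set_drvdata(pdev, accel);
	return 0;
}

static const struct pci_device_id my_accel_ids[] = {
	{ PCI_DEVICE(0x1234, 0xabcd) },	/* placeholder IDs */
	{ }
};
MODULE_DEVICE_TABLE(pci, my_accel_ids);

static struct pci_driver my_accel_driver = {
	.name = "my_cxl_accel",
	.id_table = my_accel_ids,
	.probe = my_accel_probe,
};
module_pci_driver(my_accel_driver);

MODULE_LICENSE("GPL");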
The talk will also touch on future plans to continue Type 2 and Type 1 CXL device support in the Linux kernel. How will CXL.cache devices be treated in the kernel? What about support for CXL 3.x and CXL 4.0 devices and their more advanced feature sets? What else is just beyond the horizon that the kernel will need to evolve to support?