diff --git a/configs/AM62AX/AM62AX_linux_toc.txt b/configs/AM62AX/AM62AX_linux_toc.txt index 4d3f9ce8c..76c2efb6b 100644 --- a/configs/AM62AX/AM62AX_linux_toc.txt +++ b/configs/AM62AX/AM62AX_linux_toc.txt @@ -73,6 +73,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning linux/Foundational_Components/Kernel/Kernel_Drivers/Network/NETCONF-YANG +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP linux/Foundational_Components/Kernel/Kernel_Drivers/PMIC/pmic_tps6594 linux/Foundational_Components/Kernel/Kernel_Drivers/PWM linux/Foundational_Components/Kernel/Kernel_Drivers/SPI diff --git a/configs/AM62PX/AM62PX_linux_toc.txt b/configs/AM62PX/AM62PX_linux_toc.txt index eff335a52..5512cd3ec 100644 --- a/configs/AM62PX/AM62PX_linux_toc.txt +++ b/configs/AM62PX/AM62PX_linux_toc.txt @@ -75,6 +75,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning linux/Foundational_Components/Kernel/Kernel_Drivers/Network/NETCONF-YANG +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Backplane #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex diff --git a/configs/AM62X/AM62X_linux_toc.txt b/configs/AM62X/AM62X_linux_toc.txt index 96e533da1..89104d195 100644 --- a/configs/AM62X/AM62X_linux_toc.txt +++ b/configs/AM62X/AM62X_linux_toc.txt @@ -72,6 +72,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning 
linux/Foundational_Components/Kernel/Kernel_Drivers/Network/NETCONF-YANG +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Backplane #linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex diff --git a/configs/AM64X/AM64X_linux_toc.txt b/configs/AM64X/AM64X_linux_toc.txt index 493e99c6f..67ee2b775 100644 --- a/configs/AM64X/AM64X_linux_toc.txt +++ b/configs/AM64X/AM64X_linux_toc.txt @@ -66,6 +66,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/NETCONF-YANG linux/Foundational_Components/Kernel/Kernel_Drivers/Network/HSR_PRP_Non_Offload linux/Foundational_Components/Kernel/Kernel_Drivers/Network/HSR_Offload linux/Foundational_Components/Kernel/Kernel_Drivers/Network/PRP_Offload +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP linux/Foundational_Components/Kernel/Kernel_Drivers/SPI linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex diff --git a/configs/J7200/J7200_linux_toc.txt b/configs/J7200/J7200_linux_toc.txt index 5b8f2863b..c98169635 100644 --- a/configs/J7200/J7200_linux_toc.txt +++ b/configs/J7200/J7200_linux_toc.txt @@ -66,6 +66,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-EST linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex linux/Foundational_Components/Kernel/Kernel_Drivers/PMIC/pmic_tps6594 diff --git a/configs/J721E/J721E_linux_toc.txt 
b/configs/J721E/J721E_linux_toc.txt index 039c31493..feafee487 100644 --- a/configs/J721E/J721E_linux_toc.txt +++ b/configs/J721E/J721E_linux_toc.txt @@ -70,6 +70,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-EST linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Backplane linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex diff --git a/configs/J722S/J722S_linux_toc.txt b/configs/J722S/J722S_linux_toc.txt index 45115a6aa..bcd6bb366 100644 --- a/configs/J722S/J722S_linux_toc.txt +++ b/configs/J722S/J722S_linux_toc.txt @@ -67,6 +67,7 @@ linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-EST linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-IET linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-TSN-Tuning +linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP linux/Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex linux/Foundational_Components_Power_Management linux/Foundational_Components/Power_Management/pm_dfs diff --git a/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-Ethernet.rst b/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-Ethernet.rst index ea929dff3..fd1af64ba 100644 --- a/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-Ethernet.rst +++ b/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-Ethernet.rst @@ -90,5 +90,5 @@ For further details regarding the TSN features and testing, refer :ref:`tsn_with XDP and zero copy 
""""""""""""""""" -The CPSW Ethernet Subsystem supports XDP and zero copy features similar to PRU-ICSS Ethernet Subsystem. -For more details refer :ref:`pru_icssg_xdp`. +The CPSW Ethernet Subsystem supports XDP Native Mode, XDP Generic Mode, and Zero-copy mode. +For detailed setup and testing information, refer to :ref:`kernel_xdp`. diff --git a/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP.rst b/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP.rst new file mode 100644 index 000000000..d4f3859c7 --- /dev/null +++ b/source/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/XDP.rst @@ -0,0 +1,237 @@ +.. _kernel_xdp: + +=== +XDP +=== + +.. contents:: :local: + :depth: 3 + +Introduction +============ + +eXpress Data Path (XDP) provides a framework for extended Berkeley Packet Filters (eBPF) that enables high-performance programmable packet processing in the Linux kernel. It runs the eBPF program at the earliest possible point in software, namely at the moment the network driver receives the packet. + +XDP allows running an eBPF program just before the socket buffers (skbs) are allocated in the driver. The eBPF program can examine the packet and return one of the following actions. + +- ``XDP_DROP`` :- The packet is dropped right away, without wasting any resources. Useful for firewall etc. +- ``XDP_ABORTED`` :- Similar to drop, an exception is generated. +- ``XDP_PASS`` :- Pass the packet to kernel stack, i.e. the skbs are allocated and it works normally. +- ``XDP_TX`` :- Send the packet back to same NIC with modification(if done by the program). +- ``XDP_REDIRECT`` :- Send the packet to another NIC or to the user space through AF_XDP Socket(discussed below). + +.. image:: /images/XDP-packet-processing.png + +As explained before, the ``XDP_REDIRECT`` sends packets directly to the user space. +This works by using the AF_XDP socket type which was introduced specifically for this usecase. 
+ +In this process, the packet is directly sent to the user space without going through the kernel network stack. + +.. image:: /images/xdp-packet.png + +Use cases for XDP +----------------- + +XDP is particularly useful for these common networking scenarios: + +1. **DDoS Mitigation**: High-speed packet filtering and dropping malicious traffic +2. **Load Balancing**: Efficient traffic distribution across multiple servers +3. **Packet Capture**: High-performance network monitoring without performance penalties +4. **Firewalls**: Wire-speed packet filtering based on flexible rule sets +5. **Network Analytics**: Real-time traffic analysis and monitoring +6. **Custom Network Functions**: Specialized packet handling for unique requirements + +How to run XDP on EVM +--------------------- + +The kernel configuration requires the following changes to use XDP: + +.. code-block:: console + + CONFIG_DEBUG_INFO_BTF=y + CONFIG_BPF_PRELOAD=y + CONFIG_BPF_PRELOAD_UMD=y + CONFIG_BPF_EVENTS=y + CONFIG_BPF_LSM=y + CONFIG_DEBUG_INFO_REDUCED=n + CONFIG_FTRACE=y + CONFIG_XDP_SOCKETS=y + +Tools for debugging XDP Applications +------------------------------------- + +Debugging tools for XDP development: + +- bpftool - For loading and managing BPF programs +- xdpdump - For capturing XDP packet data +- perf - For performance monitoring and analysis +- bpftrace - For tracing BPF program execution + +AF_XDP Sockets +============== + +AF_XDP is a socket address family specifically designed to work with the XDP framework. +These sockets provide a high-performance interface for user space applications to receive +and transmit network packets directly from the XDP layer, bypassing the traditional kernel networking stack. 
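As a rough conceptual model of this direct path, the sketch below (plain Python; the ring and buffer names are invented for illustration and do not correspond to a real AF_XDP API) shows how buffer ownership moves between a kernel side and an application side by exchanging descriptors instead of copying packet data:

```python
# Conceptual model of the AF_XDP receive path: the application posts free
# buffer indices on a FILL ring; the kernel "receives" a packet by writing
# into that buffer and posting a descriptor on an RX ring. Ownership moves
# by passing descriptors, so the packet bytes themselves are never copied.
# Illustration only; a real AF_XDP socket uses mmap'd rings shared with the kernel.
from collections import deque

UMEM = [bytearray(2048) for _ in range(4)]   # shared packet buffer pool
fill_ring = deque()                          # app -> kernel: free buffers
rx_ring = deque()                            # kernel -> app: received packets

def app_post_buffers():
    """Application side: hand all free buffers to the kernel via FILL."""
    for idx in range(len(UMEM)):
        fill_ring.append(idx)

def kernel_receive(frame: bytes) -> bool:
    """Driver side: take a buffer off FILL, write the frame in, post to RX."""
    if not fill_ring:
        return False                         # no free buffers: packet dropped
    idx = fill_ring.popleft()
    UMEM[idx][: len(frame)] = frame          # stands in for device DMA
    rx_ring.append((idx, len(frame)))
    return True

def app_poll():
    """Application side: consume an RX descriptor and read the packet in place."""
    if not rx_ring:
        return None
    idx, length = rx_ring.popleft()
    data = bytes(UMEM[idx][:length])
    fill_ring.append(idx)                    # recycle the buffer
    return data

app_post_buffers()
kernel_receive(b"hello-xdp")
print(app_poll())                            # prints b'hello-xdp'
```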
+ +Key characteristics of AF_XDP sockets include: + +- Direct path from network driver to user space applications +- Shared memory rings for efficient packet transfer +- Minimal overhead compared to traditional socket interfaces +- Optimized for high-throughput, low-latency applications + +How AF_XDP Works +---------------- + +AF_XDP sockets operate through a shared memory mechanism: + +1. XDP program intercepts packets at driver level +2. ``XDP_REDIRECT`` action sends packets to the socket +3. Shared memory rings (``RX``/``TX``/``FILL``/``COMPLETION``) manage packet data +4. Userspace application directly accesses the packet data +5. Zero or minimal copying depending on the mode used + +The AF_XDP architecture uses several ring buffers: + +- **RX Ring**: Received packets ready for consumption +- **TX Ring**: Packets to be transmitted +- **FILL Ring**: Pre-allocated buffers for incoming packets +- **COMPLETION Ring**: Tracks completed ``TX`` operations + +For more details on AF_XDP please refer to the official documentation: `AF_XDP `_. + +XDP Zero-Copy +============= + +Introduction to Zero-Copy Mode +------------------------------- + +Zero-copy mode is an optimization in AF_XDP that eliminates packet data copying between the kernel and user space. This results in significantly improved performance for high-throughput network applications. + +How Zero-Copy Works +------------------- + +In standard XDP operation (copy mode), packet data is copied from kernel memory to user space memory when processed. Zero-copy mode eliminates this copy operation by: + +1. Using memory-mapped regions shared between the kernel and user space +2. Allowing direct DMA from network hardware into memory accessible by user space applications +3. 
Managing memory ownership through descriptor rings rather than data movement + +This approach provides several benefits: + +- Reduced CPU utilization +- Lower memory bandwidth consumption +- Decreased latency for packet processing +- Improved overall throughput + +Performance Considerations +-------------------------- + +When implementing XDP applications, consider these performance factors: + +1. **Memory Alignment**: Buffers should be aligned to page boundaries for optimal performance +2. **Batch Processing**: Process multiple packets in batches when possible +3. **Poll Mode**: Use poll() or similar mechanisms to avoid blocking on socket operations +4. **Core Affinity**: Bind application threads to specific CPU cores to reduce cache contention + +Testing XDP on EVM +================== + +The `xdp-tools `__ package provides +utility tools for testing XDP and AF_XDP, such as ``xdp-bench``, ``xdp-trafficgen``, etc. + +TI SDK packages the latest version of the ``xdp-tools`` utilities and provides it as part of the SDK. +This allows users to easily test XDP functionality on the EVM using these tools. + +Both CPSW and ICSSG Ethernet drivers support Native XDP, Generic XDP, and Zero-copy mode. + +.. note:: + + When testing with CPSW, note that running XDP in Zero-copy mode causes non-XDP traffic to be dropped. + +**XDP Transmit test** — generate traffic using XDP (copy mode): + +.. code-block:: console + + xdp-trafficgen udp -m ff:ff:ff:ff:ff:ff + +**XDP Drop test** — receive and drop packets using XDP (copy mode): + +.. code-block:: console + + xdp-bench drop + +**XDP Pass test** — receive and pass packets through XDP allowing normal network stack processing (copy mode): + +.. code-block:: console + + xdp-bench pass + +**XDP TX test** — Hairpins (bounces back) received packets on the same interface (copy mode): + +.. code-block:: console + + xdp-bench tx + +**XDP Redirect test** — Redirects received packets from one interface to another (copy mode): + +.. code-block:: console + + xdp-bench redirect + +**XSK Drop test** — receive and drop packets using AF_XDP socket in zero-copy mode: + +.. code-block:: console + + xdp-bench xsk-drop -q 0 -C zero-copy + +**XSK Transmit test** — generate traffic using AF_XDP socket in zero-copy mode: + +.. code-block:: console + + xdp-trafficgen xsk-udp -m ff:ff:ff:ff:ff:ff -q 0 -C zero-copy + +While xdpsock is not packaged into the SDK, the same functionality can be achieved with ``xdp-trafficgen`` and ``xdp-bench`` from the xdp-tools package, as shown above. +For more details on xdpsock and how it performs XDP zero-copy testing, refer to `xdpsock `_. + +Performance Comparison +---------------------- + +Performance testing shows that zero-copy mode can provide substantial throughput improvements compared to copy mode: + +AF_XDP performance with 64-byte packets for ICSSG (in Kpps): + +.. list-table:: + :header-rows: 1 + + * - Benchmark + - XDP-SKB + - XDP-Native + - XDP-Native(ZeroCopy) + * - rxdrop + - 253 + - 473 + - 656 + * - txonly + - 350 + - 354 + - 855 + +AF_XDP performance with 64-byte packets for CPSW (in Kpps): + +.. list-table:: + :header-rows: 1 + + * - Benchmark + - XDP-SKB + - XDP-Native + - XDP-Native(ZeroCopy) + * - rxdrop + - 322 + - 491 + - 845 + * - txonly + - 390 + - 394 + - 723 diff --git a/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_Ethernet.rst b/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_Ethernet.rst index 1734d6569..b9e9f01a5 100644 --- a/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_Ethernet.rst +++ b/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_Ethernet.rst @@ -44,7 +44,7 @@ Features supported - Different MII modes for Real-Time Ethernet ports (MII_G_RT1 and MII_G_RT2) on different PRU_ICSSG instances.
For example, MII on a PRU_ICSSG1 port, and RGMII on a PRU_ICSSG2 port, is supported. - IRQ Coalescing also known as interrupt pacing. - Multi-cast HW filtering -- XDP Native Mode and XDP Generic Mode +- XDP Native Mode, XDP Generic Mode and Zero-copy mode - Cut Through forwarding - PHY Interrupt mode for ICSSG2 - Multicast filtering support for VLAN interfaces @@ -54,7 +54,6 @@ Features supported - VLAN HW filtering - All-multi mode is always enabled - Different MII modes for Real-Time Ethernet ports (MII_G_RT1 and MII_G_RT2) on a single PRU_ICSSG instance. For example, MII_G_RT1=MII and MII_G_RT2=RGMII. -- XDP with Zero-copy mode Driver Configuration #################### @@ -713,8 +712,8 @@ To turn off PPS, XDP ### -The PRU_ICSSG Ethernet driver supports Native XDP as well as Generic XDP. XDP with Zero-copy mode is not supported yet. -For detailed setup and how to test XDP please refer to :ref:`pru_icssg_xdp`. +The PRU_ICSSG Ethernet driver supports Native XDP, Generic XDP, and Zero-copy mode. +Refer to :ref:`kernel_xdp` for more details. Tips diff --git a/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_XDP.rst b/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_XDP.rst index 71f7be4da..6d39a0c55 100644 --- a/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_XDP.rst +++ b/source/linux/Foundational_Components/PRU-ICSS/Linux_Drivers/PRU_ICSSG_XDP.rst @@ -4,182 +4,7 @@ PRU_ICSSG XDP ############# -.. contents:: :local: - :depth: 3 +.. note:: -************ -Introduction -************ - -XDP (eXpress Data Path) provides a framework for BPF that enables high-performance programmable packet processing in the Linux kernel. It runs the BPF program at the earliest possible point in software, namely at the moment the network driver receives the packet. - -XDP allows running a BPF program just before the skbs are allocated in the driver, the BPF program can look at the packet and return the following things. 
- -- XDP_DROP :- The packet is dropped right away, without wasting any resources. Useful for firewall etc. -- XDP_ABORTED :- Similar to drop, an exception is generated. -- XDP_PASS :- Pass the packet to kernel stack, i.e. the skbs are allocated and it works normally. -- XDP_TX :- Send the packet back to same NIC with modification(if done by the program). -- XDP_REDIRECT :- Send the packet to another NIC or to the user space through AF_XDP Socket(discussed below). - -.. Image:: /images/XDP-packet-processing.png - -As explained before, the XDP_REDIRECT sends packets directly to the user space. -This works by using the AF_XDP socket type which was introduced specifically for this usecase. - -In this process, the packet is directly sent to the user space without going through the kernel network stack. - -.. Image:: /images/xdp-packet.png - -Use cases for XDP -================= - -XDP is particularly useful for these common networking scenarios: - -1. **DDoS Mitigation**: High-speed packet filtering and dropping malicious traffic -2. **Load Balancing**: Efficient traffic distribution across multiple servers -3. **Packet Capture**: High-performance network monitoring without performance penalties -4. **Firewalls**: Wire-speed packet filtering based on flexible rule sets -5. **Network Analytics**: Real-time traffic analysis and monitoring -6. **Custom Network Functions**: Specialized packet handling for unique requirements - -How to run XDP with PRU_ICSSG -============================= - -The kernel configuration requires the following changes to use XDP with PRU_ICSSG: - -.. 
code-block:: console - - CONFIG_DEBUG_INFO_BTF=y - CONFIG_BPF_PRELOAD=y - CONFIG_BPF_PRELOAD_UMD=y - CONFIG_BPF_EVENTS=y - CONFIG_BPF_LSM=y - CONFIG_DEBUG_INFO_REDUCED=n - CONFIG_FTRACE=y - CONFIG_XDP_SOCKETS=y - -Tools for debugging XDP Applications -==================================== - -Debugging tools for XDP development: - -- bpftool - For loading and managing BPF programs -- xdpdump - For capturing XDP packet data -- perf - For performance monitoring and analysis -- bpftrace - For tracing BPF program execution - -************** -AF_XDP Sockets -************** - -AF_XDP is a socket address family specifically designed to work with the XDP framework. -These sockets provide a high-performance interface for user space applications to receive -and transmit network packets directly from the XDP layer, bypassing the traditional kernel networking stack. - -Key characteristics of AF_XDP sockets include: - -- Direct path from network driver to user space applications -- Shared memory rings for efficient packet transfer -- Minimal overhead compared to traditional socket interfaces -- Optimized for high-throughput, low-latency applications - -How AF_XDP Works -================ - -AF_XDP sockets operate through a shared memory mechanism: - -1. XDP program intercepts packets at driver level -2. XDP_REDIRECT action sends packets to the socket -3. Shared memory rings (RX/TX/FILL/COMPLETION) manage packet data -4. Userspace application directly accesses the packet data -5. Zero or minimal copying depending on the mode used - -The AF_XDP architecture uses several ring buffers: - -- **RX Ring**: Received packets ready for consumption -- **TX Ring**: Packets to be transmitted -- **FILL Ring**: Pre-allocated buffers for incoming packets -- **COMPLETION Ring**: Tracks completed TX operations - -For more details on AF_XDP please refer to the official documentation: `AF_XDP `_. 
- -Current Support Status in PRU_ICSSG -=================================== - -The PRU_ICSSG Ethernet driver currently supports: - -- Native XDP mode -- Generic XDP mode (SKB-based) -- Zero-copy mode - -************************** -XDP Zero-Copy in PRU_ICSSG -************************** - -Introduction to Zero-Copy Mode -============================== - -Zero-copy mode is an optimization in AF_XDP that eliminates packet data copying between the kernel and user space. This results in significantly improved performance for high-throughput network applications. - -How Zero-Copy Works -=================== - -In standard XDP operation (copy mode), packet data is copied from kernel memory to user space memory when processed. Zero-copy mode eliminates this copy operation by: - -1. Using memory-mapped regions shared between the kernel and user space -2. Allowing direct DMA from network hardware into memory accessible by user space applications -3. Managing memory ownership through descriptor rings rather than data movement - -This approach provides several benefits: -- Reduced CPU utilization -- Lower memory bandwidth consumption -- Decreased latency for packet processing -- Improved overall throughput - -Requirements for Zero-Copy -========================== - -For zero-copy to function properly with PRU_ICSSG, ensure: - -1. **Driver Support**: Verify the PRU_ICSSG driver is loaded with zero-copy support enabled -2. **Memory Alignment**: Buffer addresses must be properly aligned to page boundaries -3. **UMEM Configuration**: The UMEM area must be correctly configured: - - Properly aligned memory allocation - - Sufficient number of packet buffers - - Appropriate buffer sizes -4. 
**Hugepages**: Using hugepages for UMEM allocation is recommended for optimal performance - -Performance Comparison -====================== - -Performance testing shows that zero-copy mode can provide substantial throughput improvements compared to copy mode: - -`xdpsock `_ opensource tool was used for testing XDP zero copy. -AF_XDP performance while using 64 byte packets in Kpps: - -.. list-table:: - :header-rows: 1 - - * - Benchmark - - XDP-SKB - - XDP-Native - - XDP-Native(ZeroCopy) - * - rxdrop - - 253 - - 473 - - 656 - * - txonly - - 350 - - 354 - - 855 - -Performance Considerations -========================== - -When implementing XDP applications, consider these performance factors: - -1. **Memory Alignment**: Buffers should be aligned to page boundaries for optimal performance -2. **Batch Processing**: Process multiple packets in batches when possible -3. **Poll Mode**: Use poll() or similar mechanisms to avoid blocking on socket operations -4. **Core Affinity**: Bind application threads to specific CPU cores to reduce cache contention -5. **NUMA Awareness**: Consider NUMA topology when allocating memory for packet buffers + The XDP documentation has been consolidated. Please refer to + :ref:`kernel_xdp` for CPSW and ICSSG XDP setup and testing. 
diff --git a/source/linux/Foundational_Components_Kernel_Drivers.rst b/source/linux/Foundational_Components_Kernel_Drivers.rst index a742b2c58..206691778 100644 --- a/source/linux/Foundational_Components_Kernel_Drivers.rst +++ b/source/linux/Foundational_Components_Kernel_Drivers.rst @@ -32,6 +32,7 @@ Kernel Drivers Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-Ethernet Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW2g Foundational_Components/Kernel/Kernel_Drivers/Network/NETCONF-YANG + Foundational_Components/Kernel/Kernel_Drivers/Network/XDP Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_End_Point Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Backplane Foundational_Components/Kernel/Kernel_Drivers/PCIe/PCIe_Root_Complex