From 09cfae9f13d51700b0fecf591dcd658fc5375428 Mon Sep 17 00:00:00 2001
From: Markus Boehme <markubo@amazon.com>
Date: Tue, 20 Jul 2021 16:26:19 -0700
Subject: ixgbe: Fix packet corruption due to missing DMA sync

From: Markus Boehme <markubo@amazon.com>

commit 09cfae9f13d51700b0fecf591dcd658fc5375428 upstream.

When receiving a packet with multiple fragments, hardware may still
touch the first fragment until the entire packet has been received. The
driver therefore keeps the first fragment mapped for DMA until end of
packet has been asserted, and delays its dma_sync call until then.

The driver tries to fit multiple receive buffers on one page. When using
3K receive buffers (e.g. with jumbo frames and legacy-rx turned off, so
that build_skb is used) on an architecture with 4K pages, the driver
allocates an order 1 compound page and uses one page per receive
buffer. To determine the correct offset for a delayed DMA sync of the
first fragment of a multi-fragment packet, the driver then cannot just
use PAGE_MASK on the DMA address but has to construct a mask based on
the actual size of the backing page.

Using PAGE_MASK in the 3K RX buffer/4K page architecture configuration
will always sync the first page of a compound page. With the SWIOTLB
enabled this can lead to corrupted packets (zeroed out first fragment,
re-used garbage from another packet) and various consequences, such as
slow/stalling data transfers and connection resets. For example, testing
on a link with MTU exceeding 3058 bytes on a host with SWIOTLB enabled
(e.g. "iommu=soft swiotlb=262144,force") TCP transfers quickly fizzle
out without this patch.

Cc: stable@vger.kernel.org
Fixes: 0c5661ecc5dd7 ("ixgbe: fix crash in build_skb Rx code path")
Signed-off-by: Markus Boehme <markubo@amazon.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1825,7 +1825,8 @@ static void ixgbe_dma_sync_frag(struct i
 				  struct sk_buff *skb)
 {
 	if (ring_uses_build_skb(rx_ring)) {
-		unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
+		unsigned long mask = (unsigned long)ixgbe_rx_pg_size(rx_ring) - 1;
+		unsigned long offset = (unsigned long)(skb->data) & mask;
 
 		dma_sync_single_range_for_cpu(rx_ring->dev,
 					      IXGBE_CB(skb)->dma,