From 4c4c55ec5ccc7ef42434a777c156dc1a0dd85000 Mon Sep 17 00:00:00 2001
From: Wolfram Sang <wsa+renesas@sang-engineering.com>
Date: Thu, 16 Mar 2017 11:56:02 +0100
Subject: [PATCH 121/286] mmc: tmio: always unmap DMA before waiting for
interrupt
In the (maybe academic) case that we don't get a DATAEND interrupt after
the DMA completed, we will wait endlessly for the completion. This is
not bad per se, since we have more generic completion tracking with a
timeout. In that rare case, however, the DMA buffer will not get
unmapped and we have a leak. Reorder the code so that unmapping will
always take place.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
(cherry picked from commit 5f07ef8f603ace496ca8c20eef446c5ae7a10474)
Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
---
drivers/mmc/host/tmio_mmc_dma.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--- a/drivers/mmc/host/tmio_mmc_dma.c
+++ b/drivers/mmc/host/tmio_mmc_dma.c
@@ -47,8 +47,6 @@ static void tmio_mmc_dma_callback(void *
 {
 	struct tmio_mmc_host *host = arg;
 
-	wait_for_completion(&host->dma_dataend);
-
 	spin_lock_irq(&host->lock);
 
 	if (!host->data)
@@ -63,6 +61,11 @@ static void tmio_mmc_dma_callback(void *
 			      host->sg_ptr, host->sg_len,
 			      DMA_TO_DEVICE);
 
+	spin_unlock_irq(&host->lock);
+
+	wait_for_completion(&host->dma_dataend);
+
+	spin_lock_irq(&host->lock);
 	tmio_mmc_do_data_irq(host);
 out:
 	spin_unlock_irq(&host->lock);