| From 52edbddd9da591ac5a92809145551c21fc08ee20 Mon Sep 17 00:00:00 2001 |
| From: Sasha Levin <sashal@kernel.org> |
| Date: Sat, 19 Jun 2021 01:01:09 +0800 |
Subject: locking/lockdep: Avoid finding a wrong lock dependency path in
 check_irq_usage()
| |
| From: Boqun Feng <boqun.feng@gmail.com> |
| |
| [ Upstream commit 7b1f8c6179769af6ffa055e1169610b51d71edd5 ] |
| |
In step #3 of check_irq_usage(), we search backwards to find a lock
whose usage conflicts with the usage of @target_entry1 on safe/unsafe.
However, we should only take the irq-unsafe usage of @target_entry1
into consideration, because of cases like the following: a lock is
hardirq-unsafe but softirq-safe, and check_irq_usage() finds it because
its hardirq-unsafe usage could result in a hardirq-safe ->
hardirq-unsafe deadlock. But since we currently don't filter out the
other usage bits, we may also find a lock dependency path
softirq-unsafe -> softirq-safe, which in fact doesn't cause a deadlock.
This can produce misleading lockdep splats.
| |
Fix this by keeping only the LOCKF_ENABLED_IRQ_ALL bits when doing the
backwards search.
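
To see why the filter matters, here is a minimal, self-contained
sketch. The bit values and the toy original_mask() below are
illustrative stand-ins for lockdep's LOCKF_* machinery, not the
kernel's real encodings:

	#include <stdio.h>

	/* Illustrative usage bits; not the kernel's actual values. */
	#define USED_IN_HARDIRQ  0x01UL  /* hardirq-safe   */
	#define USED_IN_SOFTIRQ  0x02UL  /* softirq-safe   */
	#define ENABLED_HARDIRQ  0x04UL  /* hardirq-unsafe */
	#define ENABLED_SOFTIRQ  0x08UL  /* softirq-unsafe */
	#define ENABLED_IRQ_ALL  (ENABLED_HARDIRQ | ENABLED_SOFTIRQ)

	/* Toy model of original_mask(): swap USED_IN and ENABLED bits. */
	static unsigned long original_mask(unsigned long mask)
	{
		return ((mask & ENABLED_IRQ_ALL) >> 2) |
		       ((mask & (USED_IN_HARDIRQ | USED_IN_SOFTIRQ)) << 2);
	}

	int main(void)
	{
		/* M is hardirq-unsafe but also softirq-safe. */
		unsigned long m = ENABLED_HARDIRQ | USED_IN_SOFTIRQ;

		/*
		 * Unfiltered: 0x9 == USED_IN_HARDIRQ | ENABLED_SOFTIRQ,
		 * so the backward search may also match a merely
		 * softirq-unsafe lock N: a false positive.
		 */
		printf("unfiltered: %#lx\n", original_mask(m));

		/*
		 * Filtered: 0x1 == USED_IN_HARDIRQ only, so only a
		 * genuinely hardirq-safe lock (like L in the comment
		 * added below) can match.
		 */
		printf("filtered:   %#lx\n",
		       original_mask(m & ENABLED_IRQ_ALL));
		return 0;
	}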
| |
| Reported-by: Johannes Berg <johannes@sipsolutions.net> |
| Signed-off-by: Boqun Feng <boqun.feng@gmail.com> |
| Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> |
| Link: https://lore.kernel.org/r/20210618170110.3699115-4-boqun.feng@gmail.com |
| Signed-off-by: Sasha Levin <sashal@kernel.org> |
| --- |
| kernel/locking/lockdep.c | 12 +++++++++++- |
| 1 file changed, 11 insertions(+), 1 deletion(-) |
| |
| diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c |
| index 78b51b8ad4f6..788629c06ce9 100644 |
| --- a/kernel/locking/lockdep.c |
| +++ b/kernel/locking/lockdep.c |
| @@ -2764,8 +2764,18 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev, |
| * Step 3: we found a bad match! Now retrieve a lock from the backward |
| * list whose usage mask matches the exclusive usage mask from the |
| * lock found on the forward list. |
| + * |
+ * Note, we should only keep the LOCKF_ENABLED_IRQ_ALL bits, considering
+ * the following case:
+ *
+ * When trying to add A -> B to the graph, we find that there is a
+ * hardirq-safe L, such that L -> ... -> A, and another hardirq-unsafe
+ * M, such that B -> ... -> M. However, M is also **softirq-safe**; if
+ * we use the exact inverted bits of M's usage_mask, we will find
+ * another lock N that is **softirq-unsafe** and N -> ... -> A, but
+ * N -> ... -> M will not cause an inversion deadlock.
| */ |
| - backward_mask = original_mask(target_entry1->class->usage_mask); |
| + backward_mask = original_mask(target_entry1->class->usage_mask & LOCKF_ENABLED_IRQ_ALL); |
| |
| ret = find_usage_backwards(&this, backward_mask, &target_entry); |
| if (bfs_error(ret)) { |
| -- |
| 2.30.2 |
| |