
Commit 97ef419

Peter Zijlstra authored and ts1506 committed
perf: Fix perf_lock_task_context() vs RCU
commit 058ebd0 upstream.

Jiri managed to trigger this warning:

 [] ======================================================
 [] [ INFO: possible circular locking dependency detected ]
 [] 3.10.0+ #228 Tainted: G W
 [] -------------------------------------------------------
 [] p/6613 is trying to acquire lock:
 []  (rcu_node_0){..-...}, at: [<ffffffff810ca797>] rcu_read_unlock_special+0xa7/0x250
 []
 [] but task is already holding lock:
 []  (&ctx->lock){-.-...}, at: [<ffffffff810f2879>] perf_lock_task_context+0xd9/0x2c0
 []
 [] which lock already depends on the new lock.
 []
 [] the existing dependency chain (in reverse order) is:
 []
 [] -> #4 (&ctx->lock){-.-...}:
 [] -> #3 (&rq->lock){-.-.-.}:
 [] -> #2 (&p->pi_lock){-.-.-.}:
 [] -> #1 (&rnp->nocb_gp_wq[1]){......}:
 [] -> #0 (rcu_node_0){..-...}:

Paul was quick to explain that due to preemptible RCU we cannot call
rcu_read_unlock() while holding scheduler (or nested) locks when part
of the read side critical section was preemptible. Therefore solve it
by making the entire RCU read side non-preemptible.

Also pull out the retry from under the non-preempt to play nice with RT.

Reported-by: Jiri Olsa <jolsa@redhat.com>
Helped-out-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
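To make the reported cycle concrete, the hazardous sequence looks roughly like the sketch below. This is illustrative only (the real path runs through perf_lock_task_context() and the scheduler), but the lock names match the lockdep chain above:

	/* Sketch of the pre-patch hazard under CONFIG_PREEMPT_RCU. */
	rcu_read_lock();		/* read side is preemptible */
	/* ... preemption here makes the later unlock "special" ... */
	raw_spin_lock_irqsave(&ctx->lock, flags); /* nests under rq->lock (#4 <- #3) */
	/* ... */
	rcu_read_unlock();		/* a preempted read side enters
					 * rcu_read_unlock_special() and takes
					 * rcu_node_0 (#0) while ctx->lock is
					 * still held -- the circular dependency */

Disabling preemption across the whole read side means the section can never be preempted, so rcu_read_unlock() never has to enter rcu_read_unlock_special() and never touches the rcu_node lock while ctx->lock is held.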
1 parent 170f6d0 commit 97ef419

1 file changed: kernel/events/core.c (14 additions, 1 deletion)
@@ -651,8 +651,18 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 {
 	struct perf_event_context *ctx;
 
-	rcu_read_lock();
 retry:
+	/*
+	 * One of the few rules of preemptible RCU is that one cannot do
+	 * rcu_read_unlock() while holding a scheduler (or nested) lock when
+	 * part of the read side critical section was preemptible -- see
+	 * rcu_read_unlock_special().
+	 *
+	 * Since ctx->lock nests under rq->lock we must ensure the entire read
+	 * side critical section is non-preemptible.
+	 */
+	preempt_disable();
+	rcu_read_lock();
 	ctx = rcu_dereference(task->perf_event_ctxp[ctxn]);
 	if (ctx) {
 		/*
@@ -668,6 +678,8 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		raw_spin_lock_irqsave(&ctx->lock, *flags);
 		if (ctx != rcu_dereference(task->perf_event_ctxp[ctxn])) {
 			raw_spin_unlock_irqrestore(&ctx->lock, *flags);
+			rcu_read_unlock();
+			preempt_enable();
 			goto retry;
 		}
 
@@ -677,6 +689,7 @@ perf_lock_task_context(struct task_struct *task, int ctxn, unsigned long *flags)
 		}
 	}
 	rcu_read_unlock();
+	preempt_enable();
 	return ctx;
 }
 
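The resulting shape generalizes beyond perf. The sketch below uses hypothetical names (struct obj, lookup_and_lock, slot) and is not code from this patch; it shows the two points the commit message makes: the entire RCU read side is non-preemptible, and the retry label sits outside preempt_disable() so every failed pass re-enables preemption before looping (the "play nice with RT" part):

	#include <linux/preempt.h>
	#include <linux/rcupdate.h>
	#include <linux/spinlock.h>

	struct obj {
		raw_spinlock_t lock;
	};

	/* Look up *slot and return it with obj->lock held (hypothetical helper). */
	static struct obj *lookup_and_lock(struct obj __rcu **slot, unsigned long *flags)
	{
		struct obj *o;

	retry:
		/*
		 * Whole read side non-preemptible: the rcu_read_unlock()
		 * below can then never be "special", even though it runs
		 * with o->lock (a raw spinlock) still held.
		 */
		preempt_disable();
		rcu_read_lock();
		o = rcu_dereference(*slot);
		if (o) {
			raw_spin_lock_irqsave(&o->lock, *flags);
			if (o != rcu_dereference(*slot)) {
				/*
				 * Lost a race with a concurrent swap: drop
				 * all locks and re-enable preemption before
				 * retrying, so the loop itself stays
				 * preemptible.
				 */
				raw_spin_unlock_irqrestore(&o->lock, *flags);
				rcu_read_unlock();
				preempt_enable();
				goto retry;
			}
		}
		rcu_read_unlock();
		preempt_enable();
		return o;	/* o->lock is held iff o != NULL */
	}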
