)]}' { "commit": "02373d7c69b4270bbab930f8a81b0721be794347", "tree": "4cfe2d1f1621c42bb2ec89eee199fdd1326681eb", "parents": [ "cf736ea6f902c26e03895dc7f5ccbc55cdc68e6e" ], "author": { "name": "Russell King", "email": "rmk+kernel@arm.linux.org.uk", "time": "Wed Aug 12 15:22:16 2015 +0530" }, "committer": { "name": "Eduardo Valentin", "email": "edubezval@gmail.com", "time": "Fri Aug 14 18:21:41 2015 -0700" }, "message": "thermal: cpu_cooling: fix lockdep problems in cpu_cooling\n\nA recent change to the cpu_cooling code introduced a AB-BA deadlock\nscenario between the cpufreq_policy_notifier_list rwsem and the\ncooling_cpufreq_lock. This is caused by cooling_cpufreq_lock being held\nbefore the registration/removal of the notifier block (an operation\nwhich takes the rwsem), and the notifier code itself which takes the\nlocks in the reverse order:\n\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n[ INFO: possible circular locking dependency detected ]\n3.18.0+ #1453 Not tainted\n-------------------------------------------------------\nrc.local/770 is trying to acquire lock:\n (cooling_cpufreq_lock){+.+.+.}, at: [\u003cc04abfc4\u003e] cpufreq_thermal_notifier+0x34/0xfc\n\nbut task is already holding lock:\n ((cpufreq_policy_notifier_list).rwsem){++++.+}, at: [\u003cc0042f04\u003e] __blocking_notifier_call_chain+0x34/0x68\n\nwhich lock already depends on the new lock.\n\nthe existing dependency chain (in reverse order) is:\n\n-\u003e #1 ((cpufreq_policy_notifier_list).rwsem){++++.+}:\n [\u003cc06bc3b0\u003e] down_write+0x44/0x9c\n [\u003cc0043444\u003e] blocking_notifier_chain_register+0x28/0xd8\n [\u003cc04ad610\u003e] cpufreq_register_notifier+0x68/0x90\n [\u003cc04abe4c\u003e] __cpufreq_cooling_register.part.1+0x120/0x180\n [\u003cc04abf44\u003e] __cpufreq_cooling_register+0x98/0xa4\n [\u003cc04abf8c\u003e] cpufreq_cooling_register+0x18/0x1c\n [\u003cbf0046f8\u003e] imx_thermal_probe+0x1c0/0x470 [imx_thermal]\n [\u003cc037cef8\u003e] platform_drv_probe+0x50/0xac\n [\u003cc037b710\u003e] driver_probe_device+0x114/0x234\n [\u003cc037b8cc\u003e] __driver_attach+0x9c/0xa0\n [\u003cc0379d68\u003e] bus_for_each_dev+0x5c/0x90\n [\u003cc037b204\u003e] driver_attach+0x24/0x28\n [\u003cc037ae7c\u003e] bus_add_driver+0xe0/0x1d8\n [\u003cc037c0cc\u003e] driver_register+0x80/0xfc\n [\u003cc037cd80\u003e] __platform_driver_register+0x50/0x64\n [\u003cbf007018\u003e] 0xbf007018\n [\u003cc0008a5c\u003e] do_one_initcall+0x88/0x1d8\n [\u003cc0095da4\u003e] load_module+0x1768/0x1ef8\n [\u003cc0096614\u003e] SyS_init_module+0xe0/0xf4\n [\u003cc000ec00\u003e] ret_fast_syscall+0x0/0x48\n\n-\u003e #0 (cooling_cpufreq_lock){+.+.+.}:\n [\u003cc00619f8\u003e] lock_acquire+0xb0/0x124\n [\u003cc06ba3b4\u003e] mutex_lock_nested+0x5c/0x3d8\n [\u003cc04abfc4\u003e] cpufreq_thermal_notifier+0x34/0xfc\n [\u003cc0042bf4\u003e] notifier_call_chain+0x4c/0x8c\n [\u003cc0042f20\u003e] __blocking_notifier_call_chain+0x50/0x68\n [\u003cc0042f58\u003e] blocking_notifier_call_chain+0x20/0x28\n [\u003cc04ae62c\u003e] cpufreq_set_policy+0x7c/0x1d0\n [\u003cc04af3cc\u003e] store_scaling_governor+0x74/0x9c\n [\u003cc04ad418\u003e] store+0x90/0xc0\n [\u003cc0175384\u003e] sysfs_kf_write+0x54/0x58\n [\u003cc01746b4\u003e] kernfs_fop_write+0xdc/0x190\n 
 [<c010dcc0>] vfs_write+0xac/0x1b4
 [<c010dfec>] SyS_write+0x44/0x90
 [<c000ec00>] ret_fast_syscall+0x0/0x48

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((cpufreq_policy_notifier_list).rwsem);
                               lock(cooling_cpufreq_lock);
                               lock((cpufreq_policy_notifier_list).rwsem);
  lock(cooling_cpufreq_lock);

 *** DEADLOCK ***

7 locks held by rc.local/770:
 #0: (sb_writers#6){.+.+.+}, at: [<c010dda0>] vfs_write+0x18c/0x1b4
 #1: (&of->mutex){+.+.+.}, at: [<c0174678>] kernfs_fop_write+0xa0/0x190
 #2: (s_active#52){.+.+.+}, at: [<c0174680>] kernfs_fop_write+0xa8/0x190
 #3: (cpu_hotplug.lock){++++++}, at: [<c0026a60>] get_online_cpus+0x34/0x90
 #4: (cpufreq_rwsem){.+.+.+}, at: [<c04ad3e0>] store+0x58/0xc0
 #5: (&policy->rwsem){+.+.+.}, at: [<c04ad3f8>] store+0x70/0xc0
 #6: ((cpufreq_policy_notifier_list).rwsem){++++.+}, at: [<c0042f04>] __blocking_notifier_call_chain+0x34/0x68

stack backtrace:
CPU: 0 PID: 770 Comm: rc.local Not tainted 3.18.0+ #1453
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
Backtrace:
[<c00121c8>] (dump_backtrace) from [<c0012360>] (show_stack+0x18/0x1c)
 r6:c0b85a80 r5:c0b75630 r4:00000000 r3:00000000
[<c0012348>] (show_stack) from [<c06b6c48>] (dump_stack+0x7c/0x98)
[<c06b6bcc>] (dump_stack) from [<c06b42a4>] (print_circular_bug+0x28c/0x2d8)
 r4:c0b85a80 r3:d0071d40
[<c06b4018>] (print_circular_bug) from [<c00613b0>] (__lock_acquire+0x1acc/0x1bb0)
 r10:c0b50660 r8:c09e6d80 r7:d0071d40 r6:c11d0f0c r5:00000007 r4:d0072240
[<c005f8e4>] (__lock_acquire) from [<c00619f8>] (lock_acquire+0xb0/0x124)
 r10:00000000 r9:c04abfc4 r8:00000000 r7:00000000 r6:00000000 r5:c0a06f0c
 r4:00000000
[<c0061948>] (lock_acquire) from [<c06ba3b4>] (mutex_lock_nested+0x5c/0x3d8)
 r10:ec853800 r9:c0a06ed4 r8:d0071d40 r7:c0a06ed4 r6:c11d0f0c r5:00000000
 r4:c04abfc4
[<c06ba358>] (mutex_lock_nested) from [<c04abfc4>] (cpufreq_thermal_notifier+0x34/0xfc)
 r10:ec853800 r9:ec85380c r8:d00d7d3c r7:c0a06ed4 r6:d00d7d3c r5:00000000
 r4:fffffffe
[<c04abf90>] (cpufreq_thermal_notifier) from [<c0042bf4>] (notifier_call_chain+0x4c/0x8c)
 r7:00000000 r6:00000000 r5:00000000 r4:fffffffe
[<c0042ba8>] (notifier_call_chain) from [<c0042f20>] (__blocking_notifier_call_chain+0x50/0x68)
 r8:c0a072a4 r7:00000000 r6:d00d7d3c r5:ffffffff r4:c0a06fc8 r3:ffffffff
[<c0042ed0>] (__blocking_notifier_call_chain) from [<c0042f58>] (blocking_notifier_call_chain+0x20/0x28)
 r7:ec98b540 r6:c13ebc80 r5:ed76e600 r4:d00d7d3c
[<c0042f38>] (blocking_notifier_call_chain) from [<c04ae62c>] (cpufreq_set_policy+0x7c/0x1d0)
[<c04ae5b0>] (cpufreq_set_policy) from [<c04af3cc>] (store_scaling_governor+0x74/0x9c)
 r7:ec98b540 r6:0000000c r5:ec98b540 r4:ed76e600
[<c04af358>] (store_scaling_governor) from [<c04ad418>] (store+0x90/0xc0)
 r6:0000000c r5:ed76e6d4 r4:ed76e600
[<c04ad388>] (store) from [<c0175384>] (sysfs_kf_write+0x54/0x58)
 r8:0000000c r7:d00d7f78 r6:ec98b540 r5:0000000c r4:ec853800 r3:0000000c
[<c0175330>] (sysfs_kf_write) from [<c01746b4>] (kernfs_fop_write+0xdc/0x190)
 r6:ec98b540 r5:00000000
 r4:00000000 r3:c0175330
[<c01745d8>] (kernfs_fop_write) from [<c010dcc0>] (vfs_write+0xac/0x1b4)
 r10:0162aa70 r9:d00d6000 r8:0000000c r7:d00d7f78 r6:0162aa70 r5:0000000c
 r4:eccde500
[<c010dc14>] (vfs_write) from [<c010dfec>] (SyS_write+0x44/0x90)
 r10:0162aa70 r8:0000000c r7:eccde500 r6:eccde500 r5:00000000 r4:00000000
[<c010dfa8>] (SyS_write) from [<c000ec00>] (ret_fast_syscall+0x0/0x48)
 r10:00000000 r8:c000edc4 r7:00000004 r6:000216cc r5:0000000c r4:0162aa70

Solve this by moving to finer grained locking: use one mutex to protect
cpufreq_dev_list as a whole, and a separate lock to ensure correct
ordering of cpufreq notifier registration and removal.

cooling_list_lock is taken within cooling_cpufreq_lock on
(un)registration to preserve the behaviour of the code, i.e. to
atomically add/remove to the list and (un)register the notifier.

Fixes: 2dcd851fe4b4 ("thermal: cpu_cooling: Update always cpufreq policy with
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>

diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
index 6509c61b96484993333a4198138945f43154fdfd..5ae0524bed19a55bdc89157dc8a035fcf349f6fb 100644
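Editor's note: the commit describes the new locking scheme only in prose. The
sketch below is a minimal userspace model of that scheme (plain C with
pthreads), not the kernel patch itself: the two mutexes and cpufreq_dev_list
mirror the names in the message, while the device struct, thermal_notifier(),
cooling_register() and cooling_unregister() are simplified, hypothetical
stand-ins for the real functions in drivers/thermal/cpu_cooling.c.

/*
 * Illustrative model only: cooling_list_lock protects the device list and is
 * the only lock the notifier path takes; cooling_cpufreq_lock orders notifier
 * (un)registration and nests cooling_list_lock inside it.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct cpufreq_cooling_device {
	int id;
	struct cpufreq_cooling_device *next;
};

/* Orders cpufreq notifier registration/removal only. */
static pthread_mutex_t cooling_cpufreq_lock = PTHREAD_MUTEX_INITIALIZER;
/* Protects cpufreq_dev_list; safe to take from the notifier callback. */
static pthread_mutex_t cooling_list_lock = PTHREAD_MUTEX_INITIALIZER;

static struct cpufreq_cooling_device *cpufreq_dev_list;
static int cpufreq_dev_count;

/* Model of the notifier callback: in the kernel it runs under the notifier
 * chain's rwsem, so it must never wait on cooling_cpufreq_lock. */
static void thermal_notifier(void)
{
	pthread_mutex_lock(&cooling_list_lock);
	for (struct cpufreq_cooling_device *d = cpufreq_dev_list; d; d = d->next)
		printf("notifier: policy update seen by cooling device %d\n", d->id);
	pthread_mutex_unlock(&cooling_list_lock);
}

/* Model of registration: list insertion and first-time notifier registration
 * stay atomic under cooling_cpufreq_lock, with cooling_list_lock nested. */
static void cooling_register(struct cpufreq_cooling_device *dev)
{
	pthread_mutex_lock(&cooling_cpufreq_lock);

	pthread_mutex_lock(&cooling_list_lock);
	dev->next = cpufreq_dev_list;
	cpufreq_dev_list = dev;
	pthread_mutex_unlock(&cooling_list_lock);

	if (cpufreq_dev_count++ == 0)
		printf("register cpufreq notifier (first cooling device)\n");

	pthread_mutex_unlock(&cooling_cpufreq_lock);
}

/* Model of unregistration: same nesting on removal. */
static void cooling_unregister(struct cpufreq_cooling_device *dev)
{
	pthread_mutex_lock(&cooling_cpufreq_lock);

	if (--cpufreq_dev_count == 0)
		printf("unregister cpufreq notifier (last cooling device)\n");

	pthread_mutex_lock(&cooling_list_lock);
	for (struct cpufreq_cooling_device **p = &cpufreq_dev_list; *p; p = &(*p)->next) {
		if (*p == dev) {
			*p = dev->next;
			break;
		}
	}
	pthread_mutex_unlock(&cooling_list_lock);

	pthread_mutex_unlock(&cooling_cpufreq_lock);
}

int main(void)
{
	struct cpufreq_cooling_device *dev = calloc(1, sizeof(*dev));

	cooling_register(dev);
	thermal_notifier();	/* in the kernel: invoked via the notifier chain */
	cooling_unregister(dev);
	free(dev);
	return 0;
}

Built with "cc -pthread", the point of the split is visible in the lock
ordering: the notifier path only ever takes cooling_list_lock, so it can run
under the cpufreq_policy_notifier_list rwsem without waiting on
cooling_cpufreq_lock, which removes one edge of the AB-BA cycle shown in the
lockdep report above.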