[ANNOUNCE] v4.16.18-rt10

Dear RT folks!

I'm pleased to announce the v4.16.18-rt10 patch set.

Changes since v4.16.18-rt9:

  - The GICv3 ITS driver now uses a raw_spinlock_t for its_lock and
    allocates its LPI pending tables early from a CPU-hotplug prepare
    callback. This avoids splats early during boot.

  - The migrate_disable() replacements for UP and non-RT builds were
    broken. The logic for these configurations has changed:
    migrate_disable() now falls back to preempt_disable() (a sketch of the
    new fallback follows this list). Reported by Joe Korty.

  - The trace event handlers in the cgroup code acquire a sleeping lock
    (a spinlock taken inside cgroup_path()), which does not work on RT.
    The lock is now taken outside of the trace event handlers (a sketch of
    the new pattern follows this list). Reported by Clark Williams, patch
    by Steven Rostedt.

  - Avoid a preempt_disable() section during fork by removing get_cpu()
    from sched_fork(). This gives debug_objects an additional opportunity
    to refill its pool if needed.

  - Drop the "Prevent broadcast signals" patch. Broadcast signals are
    not prevented by preventing sig_kernel_only() and
    sig_kernel_coredump() (as explained by Eric W. Biederman). This change
    was probably required in the early days but I can't find a reason why it
    should be still required.

  - The "delay ioapic unmask during pending setaffinity" patch been
    replaced with an updated version by Thomas Gleixner.
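
Two quick sketches of the above, simplified from the hunks in the appended
delta patch (not the literal upstream code):

The UP/non-RT migrate_disable() fallback now disables preemption instead of
being a plain compiler barrier:

    /* include/linux/preempt.h, simplified */
    #if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
    extern void migrate_disable(void);	/* full migration control */
    extern void migrate_enable(void);
    #else
    /* UP and/or !RT: callers converted from preempt_disable() rely on it */
    #define migrate_disable()		preempt_disable()
    #define migrate_enable()		preempt_enable()
    #endif

The cgroup trace events now receive the cgroup path as a string which is
rendered outside of the handler; expanded for the mkdir event, the new
TRACE_CGROUP_PATH() helper boils down to:

    /* simplified expansion of TRACE_CGROUP_PATH(mkdir, cgrp) */
    if (trace_cgroup_mkdir_enabled()) {
            spin_lock(&trace_cgroup_path_lock);
            cgroup_path(cgrp, trace_cgroup_path, TRACE_CGROUP_PATH_LEN);
            trace_cgroup_mkdir(cgrp, trace_cgroup_path);
            spin_unlock(&trace_cgroup_path_lock);
    }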

Known issues
     - A warning triggered in "rcu_note_context_switch" originated from
       SyS_timer_gettime(). The issue was always there; it is now merely
       visible. Reported by Grygorii Strashko and Daniel Wagner.

The delta patch against v4.16.18-rt9 is appended below and can be found here:

     https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/incr/patch-4.16.18-rt9-rt10.patch.xz

You can get this release via the git tree at:

    git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.16.18-rt10

The RT patch against v4.16.18 can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/older/patch-4.16.18-rt10.patch.xz

The split quilt queue is available at:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.16/older/patches-4.16.18-rt10.tar.xz

Sebastian

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
diff --git a/patches/Revert-posix-timers-Prevent-broadcast-signals.patch b/patches/Revert-posix-timers-Prevent-broadcast-signals.patch
new file mode 100644
index 0000000..daa4191
--- /dev/null
+++ b/patches/Revert-posix-timers-Prevent-broadcast-signals.patch
@@ -0,0 +1,40 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Wed, 11 Jul 2018 15:37:04 +0200
+Subject: [PATCH] Revert "posix-timers: Prevent broadcast signals"
+
+This reverts commit "posix-timers: Prevent broadcast signals".
+Broadcast signals are not prevented by preventing sig_kernel_only() and
+sig_kernel_coredump() (as explained by Eric W. Biederman).
+This change was probably required in the early days but I can't find a
+reason why it should be still required.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ kernel/time/posix-timers.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index e1c5b9b64140..0d14e044b46f 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -434,7 +434,6 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
+ static struct pid *good_sigevent(sigevent_t * event)
+ {
+ 	struct task_struct *rtn = current->group_leader;
+-	int sig = event->sigev_signo;
+ 
+ 	switch (event->sigev_notify) {
+ 	case SIGEV_SIGNAL | SIGEV_THREAD_ID:
+@@ -444,8 +443,7 @@ static struct pid *good_sigevent(sigevent_t * event)
+ 		/* FALLTHRU */
+ 	case SIGEV_SIGNAL:
+ 	case SIGEV_THREAD:
+-		if (sig <= 0 || sig > SIGRTMAX ||
+-		    sig_kernel_only(sig) || sig_kernel_coredump(sig))
++		if (event->sigev_signo <= 0 || event->sigev_signo > SIGRTMAX)
+ 			return NULL;
+ 		/* FALLTHRU */
+ 	case SIGEV_NONE:
+-- 
+2.18.0
+
diff --git a/patches/cgroup-tracing-Move-taking-of-spin-lock-out-of-trace.patch b/patches/cgroup-tracing-Move-taking-of-spin-lock-out-of-trace.patch
new file mode 100644
index 0000000..3b3f72f
--- /dev/null
+++ b/patches/cgroup-tracing-Move-taking-of-spin-lock-out-of-trace.patch
@@ -0,0 +1,270 @@
+From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
+Date: Mon, 9 Jul 2018 17:48:54 -0400
+Subject: [PATCH] cgroup/tracing: Move taking of spin lock out of trace event
+ handlers
+
+[ Upstream commit e4f8d81c738db6d3ffdabfb8329aa2feaa310699 ]
+
+It is unwise to take spin locks from the handlers of trace events.
+Mainly, because they can introduce lockups, because it introduces locks
+in places that are normally not tested. Worse yet, because trace events
+are tucked away in the include/trace/events/ directory, locks that are
+taken there are forgotten about.
+
+As a general rule, I tell people never to take any locks in a trace
+event handler.
+
+Several cgroup trace event handlers call cgroup_path() which eventually
+takes the kernfs_rename_lock spinlock. This injects the spinlock in the
+code without people realizing it. It also can cause issues for the
+PREEMPT_RT patch, as the spinlock becomes a mutex, and the trace event
+handlers are called with preemption disabled.
+
+By moving the calculation of the cgroup_path() out of the trace event
+handlers and into a macro (surrounded by a
+trace_cgroup_##type##_enabled()), then we could place the cgroup_path
+into a string, and pass that to the trace event. Not only does this
+remove the taking of the spinlock out of the trace event handler, but
+it also means that the cgroup_path() only needs to be called once (it
+is currently called twice, once to get the length to reserve the
+buffer for, and once again to get the path itself). Now it only needs to
+be done once.
+
+Reported-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/trace/events/cgroup.h   |   47 +++++++++++++++++++---------------------
+ kernel/cgroup/cgroup-internal.h |   26 ++++++++++++++++++++++
+ kernel/cgroup/cgroup-v1.c       |    4 +--
+ kernel/cgroup/cgroup.c          |   12 +++++-----
+ 4 files changed, 58 insertions(+), 31 deletions(-)
+
+--- a/include/trace/events/cgroup.h
++++ b/include/trace/events/cgroup.h
+@@ -53,24 +53,22 @@ DEFINE_EVENT(cgroup_root, cgroup_remount
+ 
+ DECLARE_EVENT_CLASS(cgroup,
+ 
+-	TP_PROTO(struct cgroup *cgrp),
++	TP_PROTO(struct cgroup *cgrp, const char *path),
+ 
+-	TP_ARGS(cgrp),
++	TP_ARGS(cgrp, path),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		root			)
+ 		__field(	int,		id			)
+ 		__field(	int,		level			)
+-		__dynamic_array(char,		path,
+-				cgroup_path(cgrp, NULL, 0) + 1)
++		__string(	path,		path			)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->root = cgrp->root->hierarchy_id;
+ 		__entry->id = cgrp->id;
+ 		__entry->level = cgrp->level;
+-		cgroup_path(cgrp, __get_dynamic_array(path),
+-				  __get_dynamic_array_len(path));
++		__assign_str(path, path);
+ 	),
+ 
+ 	TP_printk("root=%d id=%d level=%d path=%s",
+@@ -79,45 +77,45 @@ DECLARE_EVENT_CLASS(cgroup,
+ 
+ DEFINE_EVENT(cgroup, cgroup_mkdir,
+ 
+-	TP_PROTO(struct cgroup *cgroup),
++	TP_PROTO(struct cgroup *cgrp, const char *path),
+ 
+-	TP_ARGS(cgroup)
++	TP_ARGS(cgrp, path)
+ );
+ 
+ DEFINE_EVENT(cgroup, cgroup_rmdir,
+ 
+-	TP_PROTO(struct cgroup *cgroup),
++	TP_PROTO(struct cgroup *cgrp, const char *path),
+ 
+-	TP_ARGS(cgroup)
++	TP_ARGS(cgrp, path)
+ );
+ 
+ DEFINE_EVENT(cgroup, cgroup_release,
+ 
+-	TP_PROTO(struct cgroup *cgroup),
++	TP_PROTO(struct cgroup *cgrp, const char *path),
+ 
+-	TP_ARGS(cgroup)
++	TP_ARGS(cgrp, path)
+ );
+ 
+ DEFINE_EVENT(cgroup, cgroup_rename,
+ 
+-	TP_PROTO(struct cgroup *cgroup),
++	TP_PROTO(struct cgroup *cgrp, const char *path),
+ 
+-	TP_ARGS(cgroup)
++	TP_ARGS(cgrp, path)
+ );
+ 
+ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 
+-	TP_PROTO(struct cgroup *dst_cgrp, struct task_struct *task, bool threadgroup),
++	TP_PROTO(struct cgroup *dst_cgrp, const char *path,
++		 struct task_struct *task, bool threadgroup),
+ 
+-	TP_ARGS(dst_cgrp, task, threadgroup),
++	TP_ARGS(dst_cgrp, path, task, threadgroup),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	int,		dst_root		)
+ 		__field(	int,		dst_id			)
+ 		__field(	int,		dst_level		)
+-		__dynamic_array(char,		dst_path,
+-				cgroup_path(dst_cgrp, NULL, 0) + 1)
+ 		__field(	int,		pid			)
++		__string(	dst_path,	path			)
+ 		__string(	comm,		task->comm		)
+ 	),
+ 
+@@ -125,8 +123,7 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 		__entry->dst_root = dst_cgrp->root->hierarchy_id;
+ 		__entry->dst_id = dst_cgrp->id;
+ 		__entry->dst_level = dst_cgrp->level;
+-		cgroup_path(dst_cgrp, __get_dynamic_array(dst_path),
+-				      __get_dynamic_array_len(dst_path));
++		__assign_str(dst_path, path);
+ 		__entry->pid = task->pid;
+ 		__assign_str(comm, task->comm);
+ 	),
+@@ -138,16 +135,18 @@ DECLARE_EVENT_CLASS(cgroup_migrate,
+ 
+ DEFINE_EVENT(cgroup_migrate, cgroup_attach_task,
+ 
+-	TP_PROTO(struct cgroup *dst_cgrp, struct task_struct *task, bool threadgroup),
++	TP_PROTO(struct cgroup *dst_cgrp, const char *path,
++		 struct task_struct *task, bool threadgroup),
+ 
+-	TP_ARGS(dst_cgrp, task, threadgroup)
++	TP_ARGS(dst_cgrp, path, task, threadgroup)
+ );
+ 
+ DEFINE_EVENT(cgroup_migrate, cgroup_transfer_tasks,
+ 
+-	TP_PROTO(struct cgroup *dst_cgrp, struct task_struct *task, bool threadgroup),
++	TP_PROTO(struct cgroup *dst_cgrp, const char *path,
++		 struct task_struct *task, bool threadgroup),
+ 
+-	TP_ARGS(dst_cgrp, task, threadgroup)
++	TP_ARGS(dst_cgrp, path, task, threadgroup)
+ );
+ 
+ #endif /* _TRACE_CGROUP_H */
+--- a/kernel/cgroup/cgroup-internal.h
++++ b/kernel/cgroup/cgroup-internal.h
+@@ -8,6 +8,32 @@
+ #include <linux/list.h>
+ #include <linux/refcount.h>
+ 
++#define TRACE_CGROUP_PATH_LEN 1024
++extern spinlock_t trace_cgroup_path_lock;
++extern char trace_cgroup_path[TRACE_CGROUP_PATH_LEN];
++
++/*
++ * cgroup_path() takes a spin lock. It is good practice not to take
++ * spin locks within trace point handlers, as they are mostly hidden
++ * from normal view. As cgroup_path() can take the kernfs_rename_lock
++ * spin lock, it is best to not call that function from the trace event
++ * handler.
++ *
++ * Note: trace_cgroup_##type##_enabled() is a static branch that will only
++ *       be set when the trace event is enabled.
++ */
++#define TRACE_CGROUP_PATH(type, cgrp, ...)				\
++	do {								\
++		if (trace_cgroup_##type##_enabled()) {			\
++			spin_lock(&trace_cgroup_path_lock);		\
++			cgroup_path(cgrp, trace_cgroup_path,		\
++				    TRACE_CGROUP_PATH_LEN);		\
++			trace_cgroup_##type(cgrp, trace_cgroup_path,	\
++					    ##__VA_ARGS__);		\
++			spin_unlock(&trace_cgroup_path_lock);		\
++		}							\
++	} while (0)
++
+ /*
+  * A cgroup can be associated with multiple css_sets as different tasks may
+  * belong to different cgroups on different hierarchies.  In the other
+--- a/kernel/cgroup/cgroup-v1.c
++++ b/kernel/cgroup/cgroup-v1.c
+@@ -135,7 +135,7 @@ int cgroup_transfer_tasks(struct cgroup
+ 		if (task) {
+ 			ret = cgroup_migrate(task, false, &mgctx);
+ 			if (!ret)
+-				trace_cgroup_transfer_tasks(to, task, false);
++				TRACE_CGROUP_PATH(transfer_tasks, to, task, false);
+ 			put_task_struct(task);
+ 		}
+ 	} while (task && !ret);
+@@ -877,7 +877,7 @@ static int cgroup1_rename(struct kernfs_
+ 
+ 	ret = kernfs_rename(kn, new_parent, new_name_str);
+ 	if (!ret)
+-		trace_cgroup_rename(cgrp);
++		TRACE_CGROUP_PATH(rename, cgrp);
+ 
+ 	mutex_unlock(&cgroup_mutex);
+ 
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -80,6 +80,9 @@ EXPORT_SYMBOL_GPL(cgroup_mutex);
+ EXPORT_SYMBOL_GPL(css_set_lock);
+ #endif
+ 
++DEFINE_SPINLOCK(trace_cgroup_path_lock);
++char trace_cgroup_path[TRACE_CGROUP_PATH_LEN];
++
+ /*
+  * Protects cgroup_idr and css_idr so that IDs can be released without
+  * grabbing cgroup_mutex.
+@@ -2620,7 +2623,7 @@ int cgroup_attach_task(struct cgroup *ds
+ 	cgroup_migrate_finish(&mgctx);
+ 
+ 	if (!ret)
+-		trace_cgroup_attach_task(dst_cgrp, leader, threadgroup);
++		TRACE_CGROUP_PATH(attach_task, dst_cgrp, leader, threadgroup);
+ 
+ 	return ret;
+ }
+@@ -4603,7 +4606,7 @@ static void css_release_work_fn(struct w
+ 		struct cgroup *tcgrp;
+ 
+ 		/* cgroup release path */
+-		trace_cgroup_release(cgrp);
++		TRACE_CGROUP_PATH(release, cgrp);
+ 
+ 		if (cgroup_on_dfl(cgrp))
+ 			cgroup_stat_flush(cgrp);
+@@ -4939,7 +4942,7 @@ int cgroup_mkdir(struct kernfs_node *par
+ 	if (ret)
+ 		goto out_destroy;
+ 
+-	trace_cgroup_mkdir(cgrp);
++	TRACE_CGROUP_PATH(mkdir, cgrp);
+ 
+ 	/* let's create and online css's */
+ 	kernfs_activate(kn);
+@@ -5129,9 +5132,8 @@ int cgroup_rmdir(struct kernfs_node *kn)
+ 		return 0;
+ 
+ 	ret = cgroup_destroy_locked(cgrp);
+-
+ 	if (!ret)
+-		trace_cgroup_rmdir(cgrp);
++		TRACE_CGROUP_PATH(rmdir, cgrp);
+ 
+ 	cgroup_kn_unlock(kn);
+ 	return ret;
diff --git a/patches/drivers-random-reduce-preempt-disabled-region.patch b/patches/drivers-random-reduce-preempt-disabled-region.patch
deleted file mode 100644
index 292520e..0000000
--- a/patches/drivers-random-reduce-preempt-disabled-region.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From: Ingo Molnar <mingo@elte.hu>
-Date: Fri, 3 Jul 2009 08:29:30 -0500
-Subject: drivers: random: Reduce preempt disabled region
-
-No need to keep preemption disabled across the whole function.
-
-Signed-off-by: Ingo Molnar <mingo@elte.hu>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
----
- drivers/char/random.c |    3 ---
- 1 file changed, 3 deletions(-)
-
---- a/drivers/char/random.c
-+++ b/drivers/char/random.c
-@@ -1122,8 +1122,6 @@ static void add_timer_randomness(struct
- 	} sample;
- 	long delta, delta2, delta3;
- 
--	preempt_disable();
--
- 	sample.jiffies = jiffies;
- 	sample.cycles = random_get_entropy();
- 	sample.num = num;
-@@ -1164,7 +1162,6 @@ static void add_timer_randomness(struct
- 		 */
- 		credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
- 	}
--	preempt_enable();
- }
- 
- void add_input_randomness(unsigned int type, unsigned int code,
diff --git a/patches/irqchip-gic-v3-its-Make-its_lock-a-raw_spin_lock_t.patch b/patches/irqchip-gic-v3-its-Make-its_lock-a-raw_spin_lock_t.patch
new file mode 100644
index 0000000..dd27732
--- /dev/null
+++ b/patches/irqchip-gic-v3-its-Make-its_lock-a-raw_spin_lock_t.patch
@@ -0,0 +1,61 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 13 Jul 2018 15:30:52 +0200
+Subject: [PATCH] irqchip/gic-v3-its: Make its_lock a raw_spin_lock_t
+
+The its_lock lock is held while a new device is added to the list and
+during setup while the CPU is booted. Even on -RT the CPU-bootup is
+performed with disabled interrupts.
+
+Make its_lock a raw_spin_lock_t.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/irqchip/irq-gic-v3-its.c | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 2cbb19cddbf8..952b71bfdbed 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -156,7 +156,7 @@ static struct {
+ } vpe_proxy;
+ 
+ static LIST_HEAD(its_nodes);
+-static DEFINE_SPINLOCK(its_lock);
++static DEFINE_RAW_SPINLOCK(its_lock);
+ static struct rdists *gic_rdists;
+ static struct irq_domain *its_parent;
+ 
+@@ -1943,7 +1943,7 @@ static void its_cpu_init_collection(void)
+ 	struct its_node *its;
+ 	int cpu;
+ 
+-	spin_lock(&its_lock);
++	raw_spin_lock(&its_lock);
+ 	cpu = smp_processor_id();
+ 
+ 	list_for_each_entry(its, &its_nodes, entry) {
+@@ -1985,7 +1985,7 @@ static void its_cpu_init_collection(void)
+ 		its_send_invall(its, &its->collections[cpu]);
+ 	}
+ 
+-	spin_unlock(&its_lock);
++	raw_spin_unlock(&its_lock);
+ }
+ 
+ static struct its_device *its_find_device(struct its_node *its, u32 dev_id)
+@@ -3264,9 +3264,9 @@ static int __init its_probe_one(struct resource *res,
+ 	if (err)
+ 		goto out_free_tables;
+ 
+-	spin_lock(&its_lock);
++	raw_spin_lock(&its_lock);
+ 	list_add(&its->entry, &its_nodes);
+-	spin_unlock(&its_lock);
++	raw_spin_unlock(&its_lock);
+ 
+ 	return 0;
+ 
+-- 
+2.18.0
+
diff --git a/patches/irqchip-gic-v3-its-Move-ITS-pend_page-allocation-int.patch b/patches/irqchip-gic-v3-its-Move-ITS-pend_page-allocation-int.patch
new file mode 100644
index 0000000..07afac8
--- /dev/null
+++ b/patches/irqchip-gic-v3-its-Move-ITS-pend_page-allocation-int.patch
@@ -0,0 +1,133 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 13 Jul 2018 15:45:36 +0200
+Subject: [PATCH] irqchip/gic-v3-its: Move ITS' ->pend_page allocation into
+ an early CPU up hook
+
+The AP-GIC-starting hook allocates memory for the ->pend_page while the
+CPU is started during boot-up. This callback is invoked on the target
+CPU with disabled interrupts.
+This does not work on -RT because memory allocations are not possible
+with disabled interrupts.
+Move the memory allocation to an earlier hotplug step which is invoked with
+enabled interrupts on the boot CPU.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/irqchip/irq-gic-v3-its.c | 60 ++++++++++++++++++++++----------
+ 1 file changed, 41 insertions(+), 19 deletions(-)
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 952b71bfdbed..d9c479dcd4cb 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -167,6 +167,7 @@ static DEFINE_RAW_SPINLOCK(vmovp_lock);
+ static DEFINE_IDA(its_vpeid_ida);
+ 
+ #define gic_data_rdist()		(raw_cpu_ptr(gic_rdists->rdist))
++#define gic_data_rdist_cpu(cpu)		(per_cpu_ptr(gic_rdists->rdist, cpu))
+ #define gic_data_rdist_rd_base()	(gic_data_rdist()->rd_base)
+ #define gic_data_rdist_vlpi_base()	(gic_data_rdist_rd_base() + SZ_128K)
+ 
+@@ -1827,15 +1828,17 @@ static int its_alloc_collections(struct its_node *its)
+ 	return 0;
+ }
+ 
+-static struct page *its_allocate_pending_table(gfp_t gfp_flags)
++static struct page *its_allocate_pending_table(unsigned int cpu)
+ {
+ 	struct page *pend_page;
++	unsigned int order;
+ 	/*
+ 	 * The pending pages have to be at least 64kB aligned,
+ 	 * hence the 'max(LPI_PENDBASE_SZ, SZ_64K)' below.
+ 	 */
+-	pend_page = alloc_pages(gfp_flags | __GFP_ZERO,
+-				get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K)));
++	order = get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K));
++	pend_page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL | __GFP_ZERO,
++				     order);
+ 	if (!pend_page)
+ 		return NULL;
+ 
+@@ -1851,6 +1854,28 @@ static void its_free_pending_table(struct page *pt)
+ 		   get_order(max_t(u32, LPI_PENDBASE_SZ, SZ_64K)));
+ }
+ 
++static int its_alloc_pend_page(unsigned int cpu)
++{
++	struct page *pend_page;
++	phys_addr_t paddr;
++
++	pend_page = gic_data_rdist_cpu(cpu)->pend_page;
++	if (pend_page)
++		return 0;
++
++	pend_page = its_allocate_pending_table(cpu);
++	if (!pend_page) {
++		pr_err("Failed to allocate PENDBASE for CPU%d\n",
++		       smp_processor_id());
++		return -ENOMEM;
++	}
++
++	paddr = page_to_phys(pend_page);
++	pr_info("CPU%d: using LPI pending table @%pa\n", cpu, &paddr);
++	gic_data_rdist_cpu(cpu)->pend_page = pend_page;
++	return 0;
++}
++
+ static void its_cpu_init_lpis(void)
+ {
+ 	void __iomem *rbase = gic_data_rdist_rd_base();
+@@ -1859,21 +1884,8 @@ static void its_cpu_init_lpis(void)
+ 
+ 	/* If we didn't allocate the pending table yet, do it now */
+ 	pend_page = gic_data_rdist()->pend_page;
+-	if (!pend_page) {
+-		phys_addr_t paddr;
+-
+-		pend_page = its_allocate_pending_table(GFP_NOWAIT);
+-		if (!pend_page) {
+-			pr_err("Failed to allocate PENDBASE for CPU%d\n",
+-			       smp_processor_id());
+-			return;
+-		}
+-
+-		paddr = page_to_phys(pend_page);
+-		pr_info("CPU%d: using LPI pending table @%pa\n",
+-			smp_processor_id(), &paddr);
+-		gic_data_rdist()->pend_page = pend_page;
+-	}
++	if (!pend_page)
++		return;
+ 
+ 	/* Disable LPIs */
+ 	val = readl_relaxed(rbase + GICR_CTLR);
+@@ -2708,7 +2720,7 @@ static int its_vpe_init(struct its_vpe *vpe)
+ 		return vpe_id;
+ 
+ 	/* Allocate VPT */
+-	vpt_page = its_allocate_pending_table(GFP_KERNEL);
++	vpt_page = its_allocate_pending_table(raw_smp_processor_id());
+ 	if (!vpt_page) {
+ 		its_vpe_id_free(vpe_id);
+ 		return -ENOMEM;
+@@ -3505,6 +3517,16 @@ int __init its_init(struct fwnode_handle *handle, struct rdists *rdists,
+ 	if (err)
+ 		return err;
+ 
++	err = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "irqchip/arm/gicv3:prepare",
++				its_alloc_pend_page, NULL);
++	if (err < 0) {
++		pr_warn("ITS: Can't register CPU-hoplug callback.\n");
++		return err;
++	}
++	err = its_alloc_pend_page(smp_processor_id());
++	if (err < 0)
++		return err;
++
+ 	list_for_each_entry(its, &its_nodes, entry)
+ 		has_v4 |= its->is_v4;
+ 
+-- 
+2.18.0
+
diff --git a/patches/localversion.patch b/patches/localversion.patch
index 02952cd..e16fb07 100644
--- a/patches/localversion.patch
+++ b/patches/localversion.patch
@@ -10,4 +10,4 @@
 --- /dev/null
 +++ b/localversion-rt
 @@ -0,0 +1 @@
-+-rt9
++-rt10
diff --git a/patches/random-Remove-preempt-disabled-region.patch b/patches/random-Remove-preempt-disabled-region.patch
new file mode 100644
index 0000000..46d8c4c
--- /dev/null
+++ b/patches/random-Remove-preempt-disabled-region.patch
@@ -0,0 +1,47 @@
+From 20690515ca3fd9baa92a3c915c74b14290fef13c Mon Sep 17 00:00:00 2001
+From: Ingo Molnar <mingo@elte.hu>
+Date: Fri, 3 Jul 2009 08:29:30 -0500
+Subject: [PATCH] random: Remove preempt disabled region
+
+No need to keep preemption disabled across the whole function.
+
+mix_pool_bytes() uses a spin_lock() to protect the pool and there are
+other places like write_pool() which invoke mix_pool_bytes() without
+disabling preemption.
+credit_entropy_bits() is invoked from other places like
+add_hwgenerator_randomness() without disabling preemption.
+
+Before commit 95b709b6be49 ("random: drop trickle mode") the function
+used __this_cpu_inc_return() which would require disabled preemption.
+The preempt_disable() section was added in commit 43d5d3018c37 ("[PATCH]
+random driver preempt robustness", history tree).  It was claimed that
+the code relied on "vt_ioctl() being called under BKL".
+
+Cc: "Theodore Ts'o" <tytso@mit.edu>
+Signed-off-by: Ingo Molnar <mingo@elte.hu>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+[bigeasy: enhance the commit message]
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/char/random.c |    3 ---
+ 1 file changed, 3 deletions(-)
+
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -1122,8 +1122,6 @@ static void add_timer_randomness(struct
+ 	} sample;
+ 	long delta, delta2, delta3;
+ 
+-	preempt_disable();
+-
+ 	sample.jiffies = jiffies;
+ 	sample.cycles = random_get_entropy();
+ 	sample.num = num;
+@@ -1164,7 +1162,6 @@ static void add_timer_randomness(struct
+ 		 */
+ 		credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
+ 	}
+-	preempt_enable();
+ }
+ 
+ void add_input_randomness(unsigned int type, unsigned int code,
diff --git a/patches/sched-core-Remove-get_cpu-from-sched_fork.patch b/patches/sched-core-Remove-get_cpu-from-sched_fork.patch
new file mode 100644
index 0000000..18a5375
--- /dev/null
+++ b/patches/sched-core-Remove-get_cpu-from-sched_fork.patch
@@ -0,0 +1,92 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 6 Jul 2018 15:06:15 +0200
+Subject: [PATCH] sched/core: Remove get_cpu() from sched_fork()
+
+[ Upstream commit af0fffd9300b97d8875aa745bc78e2f6fdb3c1f0 ]
+
+get_cpu() disables preemption for the entire sched_fork() function.
+This get_cpu() was introduced in commit:
+
+  dd41f596cda0 ("sched: cfs core code")
+
+... which also invoked sched_balance_self() and this function
+required preemption to be off.
+
+Today, sched_balance_self() seems to be moved to ->task_fork callback
+which is invoked while the ->pi_lock is held.
+
+set_load_weight() could invoke reweight_task() which then via $callchain
+might end up in smp_processor_id() but since `update_load' is false
+this won't happen.
+
+I didn't find any this_cpu*() or similar usage during the initialisation
+of the task_struct.
+
+The `cpu' value (from get_cpu()) is only used later in __set_task_cpu()
+while the ->pi_lock lock is held.
+
+Based on this it is possible to remove get_cpu() and use
+smp_processor_id() for the `cpu' variable without breaking anything.
+
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Link: http://lkml.kernel.org/r/20180706130615.g2ex2kmfu5kcvlq6@linutronix.de
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+---
+ kernel/sched/core.c | 13 ++++---------
+ 1 file changed, 4 insertions(+), 9 deletions(-)
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index ba6bb805693a..c3cf7d992159 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2294,7 +2294,6 @@ static inline void init_schedstats(void) {}
+ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+ 	unsigned long flags;
+-	int cpu = get_cpu();
+ 
+ 	__sched_fork(clone_flags, p);
+ 	/*
+@@ -2330,14 +2329,12 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 		p->sched_reset_on_fork = 0;
+ 	}
+ 
+-	if (dl_prio(p->prio)) {
+-		put_cpu();
++	if (dl_prio(p->prio))
+ 		return -EAGAIN;
+-	} else if (rt_prio(p->prio)) {
++	else if (rt_prio(p->prio))
+ 		p->sched_class = &rt_sched_class;
+-	} else {
++	else
+ 		p->sched_class = &fair_sched_class;
+-	}
+ 
+ 	init_entity_runnable_average(&p->se);
+ 
+@@ -2353,7 +2350,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 	 * We're setting the CPU for the first time, we don't migrate,
+ 	 * so use __set_task_cpu().
+ 	 */
+-	__set_task_cpu(p, cpu);
++	__set_task_cpu(p, smp_processor_id());
+ 	if (p->sched_class->task_fork)
+ 		p->sched_class->task_fork(p);
+ 	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+@@ -2370,8 +2367,6 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
+ 	plist_node_init(&p->pushable_tasks, MAX_PRIO);
+ 	RB_CLEAR_NODE(&p->pushable_dl_tasks);
+ #endif
+-
+-	put_cpu();
+ 	return 0;
+ }
+ 
+-- 
+2.18.0
+
diff --git a/patches/sched-migrate_disable-fallback-to-preempt_disable-in.patch b/patches/sched-migrate_disable-fallback-to-preempt_disable-in.patch
new file mode 100644
index 0000000..1cc542e
--- /dev/null
+++ b/patches/sched-migrate_disable-fallback-to-preempt_disable-in.patch
@@ -0,0 +1,191 @@
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Thu, 5 Jul 2018 14:44:51 +0200
+Subject: [PATCH] sched/migrate_disable: fallback to preempt_disable() instead
+ barrier()
+
+On SMP + !RT migrate_disable() is still around. It is not part of spin_lock()
+anymore so it has almost no users. However the futex code has a workaround for
+the !in_atomic() part of migrate disable which fails because the matching
+migrate_disable() is no longer part of spin_lock().
+
+On !SMP + !RT migrate_disable() is reduced to barrier(). This is not optimal
+because we have a few spots where a "preempt_disable()" statement was replaced with
+"migrate_disable()".
+
+We also used the migration_disable counter to figure out if a sleeping lock is
+acquired so RCU does not complain about schedule() during rcu_read_lock() while
+a sleeping lock is held. This changed, we no longer use it, we have now a
+sleeping_lock counter for the RCU purpose.
+
+This means we can now:
+- for SMP + RT_BASE
+  full migration program, nothing changes here
+
+- for !SMP + RT_BASE
+  the migration counting is no longer required. It used to ensure that the task
+  is not migrated to another CPU and that this CPU remains online. !SMP ensures
+  that already.
+  Move it to CONFIG_SCHED_DEBUG so the counting is done for debugging purpose
+  only.
+
+- for all other cases including !RT
+  fallback to preempt_disable(). The only remaining users of migrate_disable()
+  are those which were converted from preempt_disable() and the futex
+  workaround which is already in the preempt_disable() section due to the
+  spin_lock that is held.
+
+Cc: stable-rt@vger.kernel.org
+Reported-by: joe.korty@concurrent-rt.com
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ include/linux/preempt.h |    6 +++---
+ include/linux/sched.h   |    4 ++--
+ kernel/sched/core.c     |   23 +++++++++++------------
+ kernel/sched/debug.c    |    2 +-
+ 4 files changed, 17 insertions(+), 18 deletions(-)
+
+--- a/include/linux/preempt.h
++++ b/include/linux/preempt.h
+@@ -204,7 +204,7 @@ do { \
+ 
+ #define preemptible()	(preempt_count() == 0 && !irqs_disabled())
+ 
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 
+ extern void migrate_disable(void);
+ extern void migrate_enable(void);
+@@ -221,8 +221,8 @@ static inline int __migrate_disabled(str
+ }
+ 
+ #else
+-#define migrate_disable()		barrier()
+-#define migrate_enable()		barrier()
++#define migrate_disable()		preempt_disable()
++#define migrate_enable()		preempt_enable()
+ static inline int __migrate_disabled(struct task_struct *p)
+ {
+ 	return 0;
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -645,7 +645,7 @@ struct task_struct {
+ 	int				nr_cpus_allowed;
+ 	const cpumask_t			*cpus_ptr;
+ 	cpumask_t			cpus_mask;
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 	int				migrate_disable;
+ 	int				migrate_disable_update;
+ # ifdef CONFIG_SCHED_DEBUG
+@@ -653,8 +653,8 @@ struct task_struct {
+ # endif
+ 
+ #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+-	int				migrate_disable;
+ # ifdef CONFIG_SCHED_DEBUG
++	int				migrate_disable;
+ 	int				migrate_disable_atomic;
+ # endif
+ #endif
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1059,7 +1059,7 @@ void set_cpus_allowed_common(struct task
+ 	p->nr_cpus_allowed = cpumask_weight(new_mask);
+ }
+ 
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ int __migrate_disabled(struct task_struct *p)
+ {
+ 	return p->migrate_disable;
+@@ -1098,7 +1098,7 @@ static void __do_set_cpus_allowed_tail(s
+ 
+ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
+ {
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 	if (__migrate_disabled(p)) {
+ 		lockdep_assert_held(&p->pi_lock);
+ 
+@@ -1171,7 +1171,7 @@ static int __set_cpus_allowed_ptr(struct
+ 	if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
+ 		goto out;
+ 
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 	if (__migrate_disabled(p)) {
+ 		p->migrate_disable_update = 1;
+ 		goto out;
+@@ -7134,7 +7134,7 @@ const u32 sched_prio_to_wmult[40] = {
+  /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
+ };
+ 
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 
+ static inline void
+ update_nr_migratory(struct task_struct *p, long delta)
+@@ -7282,45 +7282,44 @@ EXPORT_SYMBOL(migrate_enable);
+ #elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ void migrate_disable(void)
+ {
++#ifdef CONFIG_SCHED_DEBUG
+ 	struct task_struct *p = current;
+ 
+ 	if (in_atomic() || irqs_disabled()) {
+-#ifdef CONFIG_SCHED_DEBUG
+ 		p->migrate_disable_atomic++;
+-#endif
+ 		return;
+ 	}
+-#ifdef CONFIG_SCHED_DEBUG
++
+ 	if (unlikely(p->migrate_disable_atomic)) {
+ 		tracing_off();
+ 		WARN_ON_ONCE(1);
+ 	}
+-#endif
+ 
+ 	p->migrate_disable++;
++#endif
++	barrier();
+ }
+ EXPORT_SYMBOL(migrate_disable);
+ 
+ void migrate_enable(void)
+ {
++#ifdef CONFIG_SCHED_DEBUG
+ 	struct task_struct *p = current;
+ 
+ 	if (in_atomic() || irqs_disabled()) {
+-#ifdef CONFIG_SCHED_DEBUG
+ 		p->migrate_disable_atomic--;
+-#endif
+ 		return;
+ 	}
+ 
+-#ifdef CONFIG_SCHED_DEBUG
+ 	if (unlikely(p->migrate_disable_atomic)) {
+ 		tracing_off();
+ 		WARN_ON_ONCE(1);
+ 	}
+-#endif
+ 
+ 	WARN_ON_ONCE(p->migrate_disable <= 0);
+ 	p->migrate_disable--;
++#endif
++	barrier();
+ }
+ EXPORT_SYMBOL(migrate_enable);
+ #endif
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -1030,7 +1030,7 @@ void proc_sched_show_task(struct task_st
+ 		P(dl.runtime);
+ 		P(dl.deadline);
+ 	}
+-#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
+ 	P(migrate_disable);
+ #endif
+ 	P(nr_cpus_allowed);
diff --git a/patches/series b/patches/series
index 2eccc15..53a8546 100644
--- a/patches/series
+++ b/patches/series
@@ -128,6 +128,8 @@
 IB-ipoib-replace-local_irq_disable-with-proper-locki.patch
 SCSI-libsas-remove-irq-save-in-sas_ata_qc_issue.patch
 posix-cpu-timers-remove-lockdep_assert_irqs_disabled.patch
+cgroup-tracing-Move-taking-of-spin-lock-out-of-trace.patch
+sched-core-Remove-get_cpu-from-sched_fork.patch
 
 ############################################################
 # POSTED by others
@@ -147,23 +149,26 @@
 ############################################################
 Revert-mm-vmstat.c-fix-vmstat_update-preemption-BUG.patch
 arm-convert-boot-lock-to-raw.patch
-x86-io-apic-migra-no-unmask.patch
+x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch
 arm-kprobe-replace-patch_lock-to-raw-lock.patch
 arm-unwind-use_raw_lock.patch
 
 ############################################################
 # Ready for posting
 ############################################################
+irqchip-gic-v3-its-Make-its_lock-a-raw_spin_lock_t.patch
+irqchip-gic-v3-its-Move-ITS-pend_page-allocation-int.patch
 
 ############################################################
 # Needs to address review feedback
 ############################################################
 posix-timers-no-broadcast.patch
+Revert-posix-timers-Prevent-broadcast-signals.patch
 
 ############################################################
 # Almost ready, needs final polishing
 ############################################################
-drivers-random-reduce-preempt-disabled-region.patch
+random-Remove-preempt-disabled-region.patch
 mm-page_alloc-rt-friendly-per-cpu-pages.patch
 mm-page_alloc-reduce-lock-sections-further.patch
 
@@ -389,6 +394,7 @@
 RCU-skip-the-schedule-in-RCU-section-warning-on-UP-t.patch
 rtmutex-annotate-sleeping-lock-context.patch
 locking-don-t-check-for-__LINUX_SPINLOCK_TYPES_H-on-.patch
+sched-migrate_disable-fallback-to-preempt_disable-in.patch
 
 # RCU
 peter_zijlstra-frob-rcu.patch
diff --git a/patches/x86-io-apic-migra-no-unmask.patch b/patches/x86-io-apic-migra-no-unmask.patch
deleted file mode 100644
index 6c6a51d..0000000
--- a/patches/x86-io-apic-migra-no-unmask.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From: Ingo Molnar <mingo@elte.hu>
-Date: Fri, 3 Jul 2009 08:29:27 -0500
-Subject: x86/ioapic: Do not unmask io_apic when interrupt is in progress
-
-With threaded interrupts we might see an interrupt in progress on
-migration. Do not unmask it when this is the case.
-
-Signed-off-by: Ingo Molnar <mingo@elte.hu>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
----
-xXx
- arch/x86/kernel/apic/io_apic.c |    3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
---- a/arch/x86/kernel/apic/io_apic.c
-+++ b/arch/x86/kernel/apic/io_apic.c
-@@ -1732,7 +1732,8 @@ static bool io_apic_level_ack_pending(st
- static inline bool ioapic_irqd_mask(struct irq_data *data)
- {
- 	/* If we are moving the irq we need to mask it */
--	if (unlikely(irqd_is_setaffinity_pending(data))) {
-+	if (unlikely(irqd_is_setaffinity_pending(data) &&
-+		     !irqd_irq_inprogress(data))) {
- 		mask_ioapic_irq(data);
- 		return true;
- 	}
diff --git a/patches/x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch b/patches/x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch
new file mode 100644
index 0000000..83ed41d
--- /dev/null
+++ b/patches/x86-ioapic-Don-t-let-setaffinity-unmask-threaded-EOI.patch
@@ -0,0 +1,110 @@
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Tue, 17 Jul 2018 18:25:31 +0200
+Subject: [PATCH] x86/ioapic: Don't let setaffinity unmask threaded EOI
+ interrupt too early
+
+There is an issue with threaded interrupts which are marked ONESHOT
+and using the fasteoi handler.
+
+    if (IS_ONESHOT())
+        mask_irq();
+
+    ....
+    ....
+
+    cond_unmask_eoi_irq()
+        chip->irq_eoi();
+
+So if setaffinity is pending then the interrupt will be moved and then
+unmasked, which is wrong as it should be kept masked up to the point where
+the threaded handler finished. It's not a real problem, the interrupt will
+just be able to fire before the threaded handler has finished, though the irq
+masked state will be wrong for a bit.
+
+The patch below should cure the issue. It also renames the horribly
+misnomed functions so it becomes clear what they are supposed to do.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+[bigeasy: add the body of the patch, use the same functions in both
+          ifdef paths (spotted by Andy Shevchenko)]
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/x86/kernel/apic/io_apic.c | 23 +++++++++++++----------
+ 1 file changed, 13 insertions(+), 10 deletions(-)
+
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 71e912d73c3d..75cff68944e6 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1729,19 +1729,20 @@ static bool io_apic_level_ack_pending(struct mp_chip_data *data)
+ 	return false;
+ }
+ 
+-static inline bool ioapic_irqd_mask(struct irq_data *data)
++static inline bool ioapic_prepare_move(struct irq_data *data)
+ {
+ 	/* If we are moving the irq we need to mask it */
+ 	if (unlikely(irqd_is_setaffinity_pending(data))) {
+-		mask_ioapic_irq(data);
++		if (!irqd_irq_masked(data))
++			mask_ioapic_irq(data);
+ 		return true;
+ 	}
+ 	return false;
+ }
+ 
+-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
++static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
+ {
+-	if (unlikely(masked)) {
++	if (unlikely(moveit)) {
+ 		/* Only migrate the irq if the ack has been received.
+ 		 *
+ 		 * On rare occasions the broadcast level triggered ack gets
+@@ -1770,15 +1771,17 @@ static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
+ 		 */
+ 		if (!io_apic_level_ack_pending(data->chip_data))
+ 			irq_move_masked_irq(data);
+-		unmask_ioapic_irq(data);
++		/* If the irq is masked in the core, leave it */
++		if (!irqd_irq_masked(data))
++			unmask_ioapic_irq(data);
+ 	}
+ }
+ #else
+-static inline bool ioapic_irqd_mask(struct irq_data *data)
++static inline bool ioapic_prepare_move(struct irq_data *data)
+ {
+ 	return false;
+ }
+-static inline void ioapic_irqd_unmask(struct irq_data *data, bool masked)
++static inline void ioapic_finish_move(struct irq_data *data, bool moveit)
+ {
+ }
+ #endif
+@@ -1787,11 +1790,11 @@ static void ioapic_ack_level(struct irq_data *irq_data)
+ {
+ 	struct irq_cfg *cfg = irqd_cfg(irq_data);
+ 	unsigned long v;
+-	bool masked;
++	bool moveit;
+ 	int i;
+ 
+ 	irq_complete_move(cfg);
+-	masked = ioapic_irqd_mask(irq_data);
++	moveit = ioapic_prepare_move(irq_data);
+ 
+ 	/*
+ 	 * It appears there is an erratum which affects at least version 0x11
+@@ -1846,7 +1849,7 @@ static void ioapic_ack_level(struct irq_data *irq_data)
+ 		eoi_ioapic_pin(cfg->vector, irq_data->chip_data);
+ 	}
+ 
+-	ioapic_irqd_unmask(irq_data, masked);
++	ioapic_finish_move(irq_data, moveit);
+ }
+ 
+ static void ioapic_ir_ack_level(struct irq_data *irq_data)
+-- 
+2.18.0
+