Make /dev/urandom scalable

We had a case where a 4-socket system spent >80% of its total CPU time
contending on the global urandom non-blocking pool spinlock. While the
application could probably have used its own PRNG, it may have valid
reasons to use the best possible randomness for its different session keys.

The application still ran acceptably at 2S, but fell right over
the locking cliff at 4S.

Implementation
==============

The non-blocking pool is widely used these days, from every execve() (to
set up AT_RANDOM for ld.so randomization), to getrandom(2), to the frequent
/dev/urandom users in user space. Clearly, having such a popular resource
under a single global lock is a bad thing.

This patch changes the random driver to use distributed, per-NUMA-node
non-blocking pools. The basic structure is unchanged: entropy is
first fed into the input pool and later distributed from there
round-robin into the blocking and non-blocking pools. The patch extends
this to use a dedicated non-blocking pool for each node, and to distribute
evenly from the input pool into these per-node pools, in
addition to the blocking pool.
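
A rough sketch of the per-node pool setup (the names nonblocking_node_pool
and rand_initialize_node_pools are illustrative only, and error handling
and pool buffer setup are omitted; this is not the literal patch code):

    #ifdef CONFIG_NUMA
    /* One non-blocking pool per NUMA node; node 0 reuses the original pool. */
    static struct entropy_store **nonblocking_node_pool __read_mostly;

    static void __init rand_initialize_node_pools(void)
    {
            int i;

            nonblocking_node_pool = kcalloc(nr_node_ids,
                                            sizeof(*nonblocking_node_pool),
                                            GFP_KERNEL);
            nonblocking_node_pool[0] = &nonblocking_pool;
            for (i = 1; i < nr_node_ids; i++)
                    /* allocate each pool on the node that will use it */
                    nonblocking_node_pool[i] =
                            kzalloc_node(sizeof(struct entropy_store),
                                         GFP_KERNEL, i);
    }
    #endif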

Every urandom/getrandom user then fetches data from its node-local
pool. At boot time, while users may still be waiting for the non-blocking
pool to initialize, the node 0 non-blocking pool is used, to avoid the
need for separate wake-up queues.
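
The read-side lookup could then look roughly like this (a sketch; the
helper name and the use of the initialized flag are assumptions, not
necessarily what the patch does):

    static struct entropy_store *nonblocking_pool_for_cpu(void)
    {
    #ifdef CONFIG_NUMA
            struct entropy_store *pool = nonblocking_node_pool[numa_node_id()];

            /*
             * Until the per-node pools have been seeded, fall back to
             * the node 0 pool so there is only one wait queue to wake.
             */
            if (pool && pool->initialized)
                    return pool;
    #endif
            return &nonblocking_pool;
    }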

For single-node systems (the vast majority of non-server systems)
nothing changes: there is still only a single non-blocking pool.

On multi-node systems the original non-blocking pool is used until it
has accumulated 128 bits worth of entropy. After that it is used
to initialize the other pools. This gives each pool a
different initial state, so they do not run in lock-step,
which avoids "replay" attacks.
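
Conceptually that hand-off could look like the following sketch
(seed_other_node_pools is a hypothetical helper, built on the existing
extract_entropy() and mix_pool_bytes() primitives in random.c):

    /*
     * Once the node 0 pool is initialized (128 bits of entropy), seed
     * every other node pool from it so they start in different states.
     * Sketch only, not the literal patch code.
     */
    static void seed_other_node_pools(void)
    {
            char seed[32];
            int i;

            for_each_node(i) {
                    if (i == 0)
                            continue;
                    extract_entropy(&nonblocking_pool, seed, sizeof(seed), 0, 0);
                    mix_pool_bytes(nonblocking_node_pool[i], seed, sizeof(seed));
                    nonblocking_node_pool[i]->initialized = 1;
            }
    }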

Since we still have a global input pool, there are no problems
with load-balancing entropy data between nodes. A node that never
handles any interrupts still gets the same amount of entropy as
the other nodes.

Entropy is preferentially fed to the pools that need it most, using
the existing 75% threshold.

For saving/restoring /dev/urandom, there is currently no mechanism
to access the non-local node pools (short of setting task affinity).
This implies that the standard init/exit random save/restore
scripts would only save node 0. On restore, all pools are updated,
so the entropy of the non-zero nodes is lost across a reboot. That seems
acceptable to me for now (fixing it would need a new, separate
save/restore interface).

Scalability
===========

I tested the patch with a simple will-it-scale test banging
on getrandom() in parallel on more and more CPUs. Of course
that is not a realistic scenario, as real programs should
do some work between fetching random numbers, but it is a worst
case for random-number scalability.
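
For reference, a minimal userspace analogue of that worst case might look
like the program below (this is not the actual will-it-scale test case;
the thread and iteration counts are arbitrary):

    /* gcc -O2 -pthread bang.c -o bang */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <pthread.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define THREADS    8
    #define ITERATIONS 1000000

    /* Each thread hammers getrandom(2) with no other work in between. */
    static void *bang(void *arg)
    {
            char buf[16];
            long i;

            for (i = 0; i < ITERATIONS; i++)
                    syscall(SYS_getrandom, buf, sizeof(buf), 0);
            return NULL;
    }

    int main(void)
    {
            pthread_t tid[THREADS];
            struct timespec start, end;
            int i;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (i = 0; i < THREADS; i++)
                    pthread_create(&tid[i], NULL, bang, NULL);
            for (i = 0; i < THREADS; i++)
                    pthread_join(tid[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &end);

            printf("%.2f s for %d threads x %d getrandom() calls\n",
                   (end.tv_sec - start.tv_sec) +
                   (end.tv_nsec - start.tv_nsec) / 1e9,
                   THREADS, ITERATIONS);
            return 0;
    }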

On a 4S Xeon v3 system _without_ the patchkit the benchmark
maxes out when using all the threads of one node. After
that it quickly settles at about half the throughput of
a single node when running on 2-4 nodes.

(all throughput factors, bigger is better)
Without patchkit:

1 node:  1x
2 nodes: 0.75x
3 nodes: 0.55x
4 nodes: 0.42x

With the patchkit applied:

1 node:  1x
2 nodes: 2x
3 nodes: 2.4x
4 nodes: 3x

So it is not quite linear scalability, but 3x the maximum throughput
is already a lot better.

A node can still have a large number of CPUs: on my test system, 36
logical CPUs (18 cores * 2 hyper-threads). In principle it might make
sense to split further, but per logical CPU would clearly be
overkill, and finer splitting would also put more pressure on the
input pool. For now, per node seems like an acceptable compromise.

/dev/random still uses a single global lock. For now that seems
acceptable, as it normally cannot be used for truly high-volume
accesses anyway.

The input pool also still uses a global lock. The existing per-CPU
fast pool and "give up when busy" mechanism seem to scale well enough
even on larger systems.

v2: Fix name of pool 0. Fix race with interrupts. Make
iteration loops slightly more efficient. Add ifdefs to avoid
any extra code on non-NUMA systems. Delay use of the other pools
until the original pool is initialized, and initialize the other
pools from pool 0. Add comments on memory allocation.

v3: Minor changes from review. Consistent pool names.

Signed-off-by: Andi Kleen <ak@linux.intel.com>