Merge tag 'for-5.9-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fixes from David Sterba:
 "A few more fixes:

   - regression fix for a crash after failed snapshot creation

   - one more lockdep fix: use nofs allocation when allocating missing
     device

   - fix reloc tree leak on degraded mount

   - make some extent buffer alignment checks less strict to mount
     filesystems created by btrfs-convert"

* tag 'for-5.9-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: fix NULL pointer dereference after failure to create snapshot
  btrfs: free data reloc tree on failed mount
  btrfs: require only sector size alignment for parent eb bytenr
  btrfs: fix lockdep splat in add_missing_dev
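
The "nofs allocation" item above refers to the kernel's NOFS allocation-scope
pattern. As a minimal, illustrative sketch only (assumption: this is not the
code from the btrfs patch in this pull; the helper name below is hypothetical,
while memalloc_nofs_save()/memalloc_nofs_restore() and GFP_KERNEL are real
kernel APIs), the pattern looks like this:

    /*
     * Sketch of an NOFS allocation scope: inside the scope, GFP_KERNEL
     * allocations are treated as GFP_NOFS, so memory reclaim cannot
     * recurse back into the filesystem and trigger a lockdep splat.
     */
    #include <linux/sched/mm.h>
    #include <linux/slab.h>

    static void *alloc_in_nofs_scope(size_t size)
    {
            unsigned int nofs_flag;
            void *p;

            nofs_flag = memalloc_nofs_save();   /* enter NOFS scope */
            p = kzalloc(size, GFP_KERNEL);      /* implicitly GFP_NOFS here */
            memalloc_nofs_restore(nofs_flag);   /* leave NOFS scope */

            return p;
    }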
diff --git a/.clang-format b/.clang-format
index a0a9608..badfc1b 100644
--- a/.clang-format
+++ b/.clang-format
@@ -111,6 +111,7 @@
   - 'css_for_each_descendant_pre'
   - 'device_for_each_child_node'
   - 'dma_fence_chain_for_each'
+  - 'do_for_each_ftrace_op'
   - 'drm_atomic_crtc_for_each_plane'
   - 'drm_atomic_crtc_state_for_each_plane'
   - 'drm_atomic_crtc_state_for_each_plane_state'
@@ -136,6 +137,7 @@
   - 'for_each_active_dev_scope'
   - 'for_each_active_drhd_unit'
   - 'for_each_active_iommu'
+  - 'for_each_aggr_pgid'
   - 'for_each_available_child_of_node'
   - 'for_each_bio'
   - 'for_each_board_func_rsrc'
@@ -234,6 +236,7 @@
   - 'for_each_node_state'
   - 'for_each_node_with_cpus'
   - 'for_each_node_with_property'
+  - 'for_each_nonreserved_multicast_dest_pgid'
   - 'for_each_of_allnodes'
   - 'for_each_of_allnodes_from'
   - 'for_each_of_cpu_node'
@@ -256,6 +259,7 @@
   - 'for_each_pci_dev'
   - 'for_each_pci_msi_entry'
   - 'for_each_pcm_streams'
+  - 'for_each_physmem_range'
   - 'for_each_populated_zone'
   - 'for_each_possible_cpu'
   - 'for_each_present_cpu'
@@ -265,6 +269,8 @@
   - 'for_each_process_thread'
   - 'for_each_property_of_node'
   - 'for_each_registered_fb'
+  - 'for_each_requested_gpio'
+  - 'for_each_requested_gpio_in_range'
   - 'for_each_reserved_mem_region'
   - 'for_each_rtd_codec_dais'
   - 'for_each_rtd_codec_dais_rollback'
@@ -278,12 +284,17 @@
   - 'for_each_sg'
   - 'for_each_sg_dma_page'
   - 'for_each_sg_page'
+  - 'for_each_sgtable_dma_page'
+  - 'for_each_sgtable_dma_sg'
+  - 'for_each_sgtable_page'
+  - 'for_each_sgtable_sg'
   - 'for_each_sibling_event'
   - 'for_each_subelement'
   - 'for_each_subelement_extid'
   - 'for_each_subelement_id'
   - '__for_each_thread'
   - 'for_each_thread'
+  - 'for_each_unicast_dest_pgid'
   - 'for_each_wakeup_source'
   - 'for_each_zone'
   - 'for_each_zone_zonelist'
@@ -464,6 +475,7 @@
   - 'v4l2_m2m_for_each_src_buf'
   - 'v4l2_m2m_for_each_src_buf_safe'
   - 'virtio_device_for_each_vq'
+  - 'while_for_each_ftrace_op'
   - 'xa_for_each'
   - 'xa_for_each_marked'
   - 'xa_for_each_range'
diff --git a/.gitignore b/.gitignore
index d5f4804..162bd2b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -44,6 +44,7 @@
 *.tab.[ch]
 *.tar
 *.xz
+*.zst
 Module.symvers
 modules.builtin
 modules.order
diff --git a/.mailmap b/.mailmap
index db4f229..332c783 100644
--- a/.mailmap
+++ b/.mailmap
@@ -2,35 +2,44 @@
 # This list is used by git-shortlog to fix a few botched name translations
 # in the git archive, either because the author's full name was messed up
 # and/or not always written the same way, making contributions from the
-# same person appearing not to be so or badly displayed.
+# same person appearing not to be so or badly displayed. Also allows for
+# old email addresses to map to new email addresses.
 #
+# For format details, see "MAPPING AUTHORS" in "man git-shortlog".
+#
+# Please keep this list dictionary sorted.
+#
+# This comment is parsed by git-shortlog:
 # repo-abbrev: /pub/scm/linux/kernel/git/
 #
-
 Aaron Durbin <adurbin@google.com>
 Adam Oldham <oldhamca@gmail.com>
 Adam Radford <aradford@gmail.com>
-Adrian Bunk <bunk@stusta.de>
 Adriana Reus <adi.reus@gmail.com> <adriana.reus@intel.com>
+Adrian Bunk <bunk@stusta.de>
 Alan Cox <alan@lxorguk.ukuu.org.uk>
 Alan Cox <root@hraefn.swansea.linux.org.uk>
-Aleksey Gorelov <aleksey_gorelov@phoenix.com>
 Aleksandar Markovic <aleksandar.markovic@mips.com> <aleksandar.markovic@imgtec.com>
-Alex Shi <alex.shi@linux.alibaba.com> <alex.shi@intel.com>
-Alex Shi <alex.shi@linux.alibaba.com> <alex.shi@linaro.org>
+Aleksey Gorelov <aleksey_gorelov@phoenix.com>
+Alexander Lobakin <alobakin@pm.me> <alobakin@dlink.ru>
+Alexander Lobakin <alobakin@pm.me> <alobakin@marvell.com>
+Alexander Lobakin <alobakin@pm.me> <bloodyreaper@yandex.ru>
 Alexandre Belloni <alexandre.belloni@bootlin.com> <alexandre.belloni@free-electrons.com>
-Alexei Starovoitov <ast@kernel.org> <ast@plumgrid.com>
 Alexei Starovoitov <ast@kernel.org> <alexei.starovoitov@gmail.com>
 Alexei Starovoitov <ast@kernel.org> <ast@fb.com>
+Alexei Starovoitov <ast@kernel.org> <ast@plumgrid.com>
+Alex Shi <alex.shi@linux.alibaba.com> <alex.shi@intel.com>
+Alex Shi <alex.shi@linux.alibaba.com> <alex.shi@linaro.org>
 Al Viro <viro@ftp.linux.org.uk>
 Al Viro <viro@zenIV.linux.org.uk>
+Andi Kleen <ak@linux.intel.com> <ak@suse.de>
 Andi Shyti <andi@etezian.org> <andi.shyti@samsung.com>
 Andreas Herrmann <aherrman@de.ibm.com>
-Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
 Andrew Morton <akpm@linux-foundation.org>
-Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
 Andrew Murray <amurray@thegoodpenguin.co.uk> <amurray@embedded-bits.co.uk>
+Andrew Murray <amurray@thegoodpenguin.co.uk> <andrew.murray@arm.com>
 Andrew Vasquez <andrew.vasquez@qlogic.com>
+Andrey Ryabinin <ryabinin.a.a@gmail.com> <a.ryabinin@samsung.com>
 Andy Adamson <andros@citi.umich.edu>
 Antoine Tenart <antoine.tenart@free-electrons.com>
 Antonio Ospite <ao2@ao2.it> <ao2@amarulasolutions.com>
@@ -40,40 +49,42 @@
 Arnd Bergmann <arnd@arndb.de>
 Axel Dyks <xl@xlsigned.net>
 Axel Lin <axel.lin@gmail.com>
-Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@sandisk.com>
+Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
 Ben Gardner <bgardner@wabtec.com>
 Ben M Cahill <ben.m.cahill@intel.com>
 Björn Steinbrink <B.Steinbrink@gmx.de>
-Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@bootlin.com>
-Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@free-electrons.com>
 Boris Brezillon <bbrezillon@kernel.org> <b.brezillon.dev@gmail.com>
 Boris Brezillon <bbrezillon@kernel.org> <b.brezillon@overkiz.com>
+Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@bootlin.com>
+Boris Brezillon <bbrezillon@kernel.org> <boris.brezillon@free-electrons.com>
 Brian Avery <b.avery@hp.com>
 Brian King <brking@us.ibm.com>
+Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
+Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
 Chao Yu <chao@kernel.org> <chao2.yu@samsung.com>
 Chao Yu <chao@kernel.org> <yuchao0@huawei.com>
-Christoph Hellwig <hch@lst.de>
 Christophe Ricard <christophe.ricard@gmail.com>
+Christoph Hellwig <hch@lst.de>
 Corey Minyard <minyard@acm.org>
 Damian Hobson-Garcia <dhobsong@igel.co.jp>
-Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com>
-Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com>
+Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
 Daniel Borkmann <daniel@iogearbox.net> <danborkmann@iogearbox.net>
 Daniel Borkmann <daniel@iogearbox.net> <daniel.borkmann@tik.ee.ethz.ch>
-Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
+Daniel Borkmann <daniel@iogearbox.net> <dborkmann@redhat.com>
+Daniel Borkmann <daniel@iogearbox.net> <dborkman@redhat.com>
 Daniel Borkmann <daniel@iogearbox.net> <dxchgb@gmail.com>
 David Brownell <david-b@pacbell.net>
 David Woodhouse <dwmw2@shinybook.infradead.org>
-Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@mips.com>
-Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@imgtec.com>
 Dengcheng Zhu <dzhu@wavecomp.com> <dczhu@mips.com>
 Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@gmail.com>
+Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@imgtec.com>
+Dengcheng Zhu <dzhu@wavecomp.com> <dengcheng.zhu@mips.com>
 <dev.kurt@vandijck-laurijssen.be> <kurt.van.dijck@eia.be>
 Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
-Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
-Dmitry Safonov <0x7f454c46@gmail.com> <d.safonov@partner.samsung.com>
 Dmitry Safonov <0x7f454c46@gmail.com> <dima@arista.com>
+Dmitry Safonov <0x7f454c46@gmail.com> <d.safonov@partner.samsung.com>
+Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
 Domen Puncer <domen@coderock.org>
 Douglas Gilbert <dougg@torque.net>
 Ed L. Cashin <ecashin@coraid.com>
@@ -84,19 +95,22 @@
 Felix Moeller <felix@derklecks.de>
 Filipe Lautert <filipe@icewall.org>
 Franck Bui-Huu <vagabon.xyz@gmail.com>
-Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com>
 Frank Rowand <frowand.list@gmail.com> <frank.rowand@am.sony.com>
 Frank Rowand <frowand.list@gmail.com> <frank.rowand@sonymobile.com>
+Frank Rowand <frowand.list@gmail.com> <frowand@mvista.com>
 Frank Zago <fzago@systemfabricworks.com>
 Gao Xiang <xiang@kernel.org> <gaoxiang25@huawei.com>
 Gao Xiang <xiang@kernel.org> <hsiangkao@aol.com>
-Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@de.ibm.com>
+Gerald Schaefer <gerald.schaefer@linux.ibm.com> <gerald.schaefer@de.ibm.com>
 Gerald Schaefer <gerald.schaefer@linux.ibm.com> <geraldsc@linux.vnet.ibm.com>
 Greg Kroah-Hartman <greg@echidna.(none)>
 Greg Kroah-Hartman <gregkh@suse.de>
 Greg Kroah-Hartman <greg@kroah.com>
+Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
 Gregory CLEMENT <gregory.clement@bootlin.com> <gregory.clement@free-electrons.com>
+Gustavo Padovan <gustavo@las.ic.unicamp.br>
+Gustavo Padovan <padovan@profusion.mobi>
 Hanjun Guo <guohanjun@huawei.com> <hanjun.guo@linaro.org>
 Heiko Carstens <hca@linux.ibm.com> <h.carstens@de.ibm.com>
 Heiko Carstens <hca@linux.ibm.com> <heiko.carstens@de.ibm.com>
@@ -106,34 +120,40 @@
 Herbert Xu <herbert@gondor.apana.org.au>
 Jacob Shin <Jacob.Shin@amd.com>
 Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk@google.com>
-Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk@motorola.com>
 Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk.kim@samsung.com>
+Jaegeuk Kim <jaegeuk@kernel.org> <jaegeuk@motorola.com>
 Jakub Kicinski <kuba@kernel.org> <jakub.kicinski@netronome.com>
 James Bottomley <jejb@mulgrave.(none)>
 James Bottomley <jejb@titanic.il.steeleye.com>
 James E Wilson <wilson@specifix.com>
-James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
 James Hogan <jhogan@kernel.org> <james@albanarts.com>
+James Hogan <jhogan@kernel.org> <james.hogan@imgtec.com>
 James Ketrenos <jketreno@io.(none)>
 Jan Glauber <jan.glauber@gmail.com> <jang@de.ibm.com>
 Jan Glauber <jan.glauber@gmail.com> <jang@linux.vnet.ibm.com>
 Jan Glauber <jan.glauber@gmail.com> <jglauber@cavium.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com>
+Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com>
 Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
-Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
 <javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
+Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
 Jayachandran C <c.jayachandran@gmail.com> <jayachandranc@netlogicmicro.com>
 Jayachandran C <c.jayachandran@gmail.com> <jchandra@broadcom.com>
 Jayachandran C <c.jayachandran@gmail.com> <jchandra@digeo.com>
 Jayachandran C <c.jayachandran@gmail.com> <jnair@caviumnetworks.com>
-Jean Tourrilhes <jt@hpl.hp.com>
 <jean-philippe@linaro.org> <jean-philippe.brucker@arm.com>
+Jean Tourrilhes <jt@hpl.hp.com>
 Jeff Garzik <jgarzik@pretzel.yyz.us>
-Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com>
 Jeff Layton <jlayton@kernel.org> <jlayton@poochiereds.net>
 Jeff Layton <jlayton@kernel.org> <jlayton@primarydata.com>
+Jeff Layton <jlayton@kernel.org> <jlayton@redhat.com>
 Jens Axboe <axboe@suse.de>
 Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
+Jiri Slaby <jirislaby@kernel.org> <jirislaby@gmail.com>
+Jiri Slaby <jirislaby@kernel.org> <jslaby@novell.com>
+Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.com>
+Jiri Slaby <jirislaby@kernel.org> <jslaby@suse.cz>
+Jiri Slaby <jirislaby@kernel.org> <xslaby@fi.muni.cz>
 Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
 Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
 John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
@@ -150,29 +170,31 @@
 Kamil Konieczny <k.konieczny@samsung.com> <k.konieczny@partner.samsung.com>
 Kay Sievers <kay.sievers@vrfy.org>
 Kenneth W Chen <kenneth.w.chen@intel.com>
+Konstantin Khlebnikov <koct9i@gmail.com> <khlebnikov@yandex-team.ru>
 Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
 Koushik <raghavendra.koushik@neterion.com>
-Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com>
+Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com>
 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
-Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
-Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
 Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
 Leonid I Ananiev <leonid.i.ananiev@intel.com>
+Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
+Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
+Leon Romanovsky <leon@kernel.org> <leonro@nvidia.com>
 Linas Vepstas <linas@austin.ibm.com>
-Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
 Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
-Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
+Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
 Li Yang <leoyang.li@nxp.com> <leoli@freescale.com>
+Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
 Lukasz Luba <lukasz.luba@arm.com> <l.luba@partner.samsung.com>
 Maciej W. Rozycki <macro@mips.com> <macro@imgtec.com>
-Marc Zyngier <maz@kernel.org> <marc.zyngier@arm.com>
 Marcin Nowakowski <marcin.nowakowski@mips.com> <marcin.nowakowski@imgtec.com>
+Marc Zyngier <maz@kernel.org> <marc.zyngier@arm.com>
 Mark Brown <broonie@sirena.org.uk>
 Mark Yao <markyao0591@gmail.com> <mark.yao@rock-chips.com>
-Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@ginzinger.com>
 Martin Kepplinger <martink@posteo.de> <martin.kepplinger@puri.sm>
+Martin Kepplinger <martink@posteo.de> <martin.kepplinger@theobroma-systems.com>
 Mathieu Othacehe <m.othacehe@gmail.com>
 Matthew Wilcox <willy@infradead.org> <matthew.r.wilcox@intel.com>
 Matthew Wilcox <willy@infradead.org> <matthew@wil.cx>
@@ -182,17 +204,17 @@
 Matthew Wilcox <willy@infradead.org> <willy@linux.intel.com>
 Matthew Wilcox <willy@infradead.org> <willy@parisc-linux.org>
 Matthieu CASTET <castet.matthieu@free.fr>
-Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@brturbo.com.br>
-Mauro Carvalho Chehab <mchehab@kernel.org> <maurochehab@gmail.com>
-Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@infradead.org>
-Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@redhat.com>
-Mauro Carvalho Chehab <mchehab@kernel.org> <m.chehab@samsung.com>
-Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@osg.samsung.com>
-Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@s-opensource.com>
+Matt Ranostay <matt.ranostay@konsulko.com> <matt@ranostay.consulting>
 Matt Ranostay <mranostay@gmail.com> Matthew Ranostay <mranostay@embeddedalley.com>
 Matt Ranostay <mranostay@gmail.com> <matt.ranostay@intel.com>
-Matt Ranostay <matt.ranostay@konsulko.com> <matt@ranostay.consulting>
 Matt Redfearn <matt.redfearn@mips.com> <matt.redfearn@imgtec.com>
+Mauro Carvalho Chehab <mchehab@kernel.org> <maurochehab@gmail.com>
+Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@brturbo.com.br>
+Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@infradead.org>
+Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@osg.samsung.com>
+Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@redhat.com>
+Mauro Carvalho Chehab <mchehab@kernel.org> <m.chehab@samsung.com>
+Mauro Carvalho Chehab <mchehab@kernel.org> <mchehab@s-opensource.com>
 Maxime Ripard <mripard@kernel.org> <maxime.ripard@bootlin.com>
 Maxime Ripard <mripard@kernel.org> <maxime.ripard@free-electrons.com>
 Mayuresh Janorkar <mayur@ti.com>
@@ -224,13 +246,13 @@
 Patrick Mochel <mochel@digitalimplant.org>
 Paul Burton <paulburton@kernel.org> <paul.burton@imgtec.com>
 Paul Burton <paulburton@kernel.org> <paul.burton@mips.com>
+Paul E. McKenney <paulmck@kernel.org> <paul.mckenney@linaro.org>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.ibm.com>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@linux.vnet.ibm.com>
-Paul E. McKenney <paulmck@kernel.org> <paul.mckenney@linaro.org>
 Paul E. McKenney <paulmck@kernel.org> <paulmck@us.ibm.com>
 Peter A Jonsson <pj@ludd.ltu.se>
-Peter Oruba <peter@oruba.de>
 Peter Oruba <peter.oruba@amd.com>
+Peter Oruba <peter@oruba.de>
 Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
 Praveen BP <praveenbp@ti.com>
 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
@@ -243,23 +265,23 @@
 Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
 Randy Dunlap <rdunlap@infradead.org> <rdunlap@xenotime.net>
 Rémi Denis-Courmont <rdenis@simphalempin.com>
-Ricardo Ribalda <ribalda@kernel.org> <ricardo.ribalda@gmail.com>
 Ricardo Ribalda <ribalda@kernel.org> <ricardo@ribalda.com>
 Ricardo Ribalda <ribalda@kernel.org> Ricardo Ribalda Delgado <ribalda@kernel.org>
+Ricardo Ribalda <ribalda@kernel.org> <ricardo.ribalda@gmail.com>
 Ross Zwisler <zwisler@kernel.org> <ross.zwisler@linux.intel.com>
 Rudolf Marek <R.Marek@sh.cvut.cz>
 Rui Saraiva <rmps@joel.ist.utl.pt>
 Sachin P Sant <ssant@in.ibm.com>
-Sarangdhar Joshi <spjoshi@codeaurora.org>
+Sakari Ailus <sakari.ailus@linux.intel.com> <sakari.ailus@iki.fi>
 Sam Ravnborg <sam@mars.ravnborg.org>
-Santosh Shilimkar <ssantosh@kernel.org>
 Santosh Shilimkar <santosh.shilimkar@oracle.org>
+Santosh Shilimkar <ssantosh@kernel.org>
+Sarangdhar Joshi <spjoshi@codeaurora.org>
 Sascha Hauer <s.hauer@pengutronix.de>
 S.Çağlar Onur <caglar@pardus.org.tr>
-Sakari Ailus <sakari.ailus@linux.intel.com> <sakari.ailus@iki.fi>
 Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
-Sebastian Reichel <sre@kernel.org> <sre@debian.org>
 Sebastian Reichel <sre@kernel.org> <sebastian.reichel@collabora.co.uk>
+Sebastian Reichel <sre@kernel.org> <sre@debian.org>
 Sedat Dilek <sedat.dilek@gmail.com> <sedat.dilek@credativ.de>
 Shiraz Hashim <shiraz.linux.kernel@gmail.com> <shiraz.hashim@st.com>
 Shuah Khan <shuah@kernel.org> <shuahkhan@gmail.com>
@@ -270,18 +292,21 @@
 Simon Kelley <simon@thekelleys.org.uk>
 Stéphane Witzmann <stephane.witzmann@ubpmes.univ-bpclermont.fr>
 Stephen Hemminger <shemminger@osdl.org>
+Steve Wise <larrystevenwise@gmail.com> <swise@chelsio.com>
+Steve Wise <larrystevenwise@gmail.com> <swise@opengridcomputing.com>
 Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
 Subhash Jadavani <subhashj@codeaurora.org>
 Sudeep Holla <sudeep.holla@arm.com> Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
 Sumit Semwal <sumit.semwal@ti.com>
+Takashi YOSHII <takashi.yoshii.zj@renesas.com>
 Tejun Heo <htejun@gmail.com>
 Thomas Graf <tgraf@suug.ch>
 Thomas Pedersen <twp@codeaurora.org>
 Tiezhu Yang <yangtiezhu@loongson.cn> <kernelpatch@126.com>
 Todor Tomov <todor.too@gmail.com> <todor.tomov@linaro.org>
 Tony Luck <tony.luck@intel.com>
-TripleX Chung <xxx.phy@gmail.com> <zhongyu@18mail.cn>
 TripleX Chung <xxx.phy@gmail.com> <triplex@zh-kernel.org>
+TripleX Chung <xxx.phy@gmail.com> <zhongyu@18mail.cn>
 Tsuneo Yoshioka <Tsuneo.Yoshioka@f-secure.com>
 Uwe Kleine-König <ukleinek@informatik.uni-freiburg.de>
 Uwe Kleine-König <ukl@pengutronix.de>
@@ -290,22 +315,16 @@
 Vinod Koul <vkoul@kernel.org> <vinod.koul@intel.com>
 Vinod Koul <vkoul@kernel.org> <vinod.koul@linux.intel.com>
 Vinod Koul <vkoul@kernel.org> <vkoul@infradead.org>
+Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com>
-Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com>
 Vivien Didelot <vivien.didelot@gmail.com> <vivien.didelot@savoirfairelinux.com>
 Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
-Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
 Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
-Takashi YOSHII <takashi.yoshii.zj@renesas.com>
+Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
+WeiXiong Liao <gmpy.liaowx@gmail.com> <liaoweixiong@allwinnertech.com>
 Will Deacon <will@kernel.org> <will.deacon@arm.com>
-Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de>
 Wolfram Sang <wsa@kernel.org> <w.sang@pengutronix.de>
+Wolfram Sang <wsa@kernel.org> <wsa@the-dreams.de>
 Yakir Yang <kuankuan.y@gmail.com> <ykk@rock-chips.com>
 Yusuke Goda <goda.yusuke@renesas.com>
-Gustavo Padovan <gustavo@las.ic.unicamp.br>
-Gustavo Padovan <padovan@profusion.mobi>
-Changbin Du <changbin.du@intel.com> <changbin.du@intel.com>
-Changbin Du <changbin.du@intel.com> <changbin.du@gmail.com>
-Steve Wise <larrystevenwise@gmail.com> <swise@chelsio.com>
-Steve Wise <larrystevenwise@gmail.com> <swise@opengridcomputing.com>
diff --git a/CREDITS b/CREDITS
index 0787b5872..32ee70a 100644
--- a/CREDITS
+++ b/CREDITS
@@ -34,7 +34,7 @@
 
 N: Mark Adler
 E: madler@alumni.caltech.edu
-W: http://alumnus.caltech.edu/~madler/
+W: https://alumnus.caltech.edu/~madler/
 D: zlib decompression
 
 N: Monalisa Agrawal
@@ -62,7 +62,7 @@
 
 N: Werner Almesberger
 E: werner@almesberger.net
-W: http://www.almesberger.net/
+W: https://www.almesberger.net/
 D: dosfs, LILO, some fd features, ATM, various other hacks here and there
 S: Buenos Aires
 S: Argentina
@@ -96,7 +96,7 @@
 
 N: Erik Andersen
 E: andersen@codepoet.org
-W: http://www.codepoet.org/
+W: https://www.codepoet.org/
 P: 1024D/30D39057 1BC4 2742 E885 E4DE 9301  0C82 5F9B 643E 30D3 9057
 D: Maintainer of ide-cd and Uniform CD-ROM driver, 
 D: ATAPI CD-Changer support, Major 2.1.x CD-ROM update.
@@ -114,7 +114,7 @@
 
 N: H. Peter Anvin
 E: hpa@zytor.com
-W: http://www.zytor.com/~hpa/
+W: https://www.zytor.com/~hpa/
 P: 2047/2A960705 BA 03 D3 2C 14 A8 A8 BD  1E DF FE 69 EE 35 BD 74
 D: Author of the SYSLINUX boot loader, maintainer of the linux.* news
 D: hierarchy and the Linux Device List; various kernel hacks
@@ -124,7 +124,7 @@
 
 N: Andrea Arcangeli
 E: andrea@suse.de
-W: http://www.kernel.org/pub/linux/kernel/people/andrea/
+W: https://www.kernel.org/pub/linux/kernel/people/andrea/
 P: 1024D/68B9CB43 13D9 8355 295F 4823 7C49  C012 DFA1 686E 68B9 CB43
 P: 1024R/CB4660B9 CC A0 71 81 F4 A0 63 AC  C0 4B 81 1D 8C 15 C8 E5
 D: Parport hacker
@@ -339,7 +339,7 @@
 
 N: Johannes Berg
 E: johannes@sipsolutions.net
-W: http://johannes.sipsolutions.net/
+W: https://johannes.sipsolutions.net/
 P: 4096R/7BF9099A C0EB C440 F6DA 091C 884D  8532 E0F3 73F3 7BF9 099A
 D: powerpc & 802.11 hacker
 
@@ -376,7 +376,7 @@
 
 N: Anton Blanchard
 E: anton@samba.org
-W: http://samba.org/~anton/
+W: https://samba.org/~anton/
 P: 1024/8462A731 4C 55 86 34 44 59 A7 99  2B 97 88 4A 88 9A 0D 97
 D: sun4 port, Sparc hacker
 
@@ -509,7 +509,7 @@
 
 N: Paul Bristow
 E: paul@paulbristow.net
-W: http://paulbristow.net/linux/idefloppy.html
+W: https://paulbristow.net/linux/idefloppy.html
 D: Maintainer of IDE/ATAPI floppy driver
 
 N: Stefano Brivio
@@ -518,7 +518,7 @@
 
 N: Dominik Brodowski
 E: linux@brodo.de
-W: http://www.brodo.de/
+W: https://www.brodo.de/
 P: 1024D/725B37C6  190F 3E77 9C89 3B6D BECD  46EE 67C3 0308 725B 37C6
 D: parts of CPUFreq code, ACPI bugfixes, PCMCIA rewrite, cpufrequtils
 S: Tuebingen, Germany
@@ -865,7 +865,7 @@
 
 N: Todd J. Derr
 E: tjd@fore.com
-W: http://www.wordsmith.org/~tjd
+W: https://www.wordsmith.org/~tjd
 D: Random console hacks and other miscellaneous stuff
 S: 3000 FORE Drive
 S: Warrendale, Pennsylvania 15086
@@ -894,8 +894,8 @@
 
 N: Matt Domsch
 E: Matt_Domsch@dell.com
-W: http://www.dell.com/linux
-W: http://domsch.com/linux
+W: https://www.dell.com/linux
+W: https://domsch.com/linux
 D: Linux/IA-64
 D: Dell PowerEdge server, SCSI layer, misc drivers, and other patches
 S: Dell Inc.
@@ -992,7 +992,7 @@
 
 N: Randy Dunlap
 E: rdunlap@infradead.org
-W: http://www.infradead.org/~rdunlap/
+W: https://www.infradead.org/~rdunlap/
 D: Linux-USB subsystem, USB core/UHCI/printer/storage drivers
 D: x86 SMP, ACPI, bootflag hacking
 D: documentation, builds
@@ -1157,7 +1157,7 @@
 
 N: Jeremy Fitzhardinge
 E: jeremy@goop.org
-W: http://www.goop.org/~jeremy
+W: https://www.goop.org/~jeremy
 D: author of userfs filesystem
 D: Improved mmap and munmap handling
 D: General mm minor tidyups
@@ -1460,7 +1460,7 @@
 
 N: Oliver Hartkopp
 E: oliver.hartkopp@volkswagen.de
-W: http://www.volkswagen.de
+W: https://www.volkswagen.de
 D: Controller Area Network (network layer core)
 S: Brieffach 1776
 S: 38436 Wolfsburg
@@ -1599,13 +1599,13 @@
 
 N: Kenji Hollis
 E: kenji@bitgate.com
-W: http://www.bitgate.com/
+W: https://www.bitgate.com/
 D: Berkshire PC Watchdog Driver
 D: Small/Industrial Driver Project
 
 N: Nick Holloway
 E: Nick.Holloway@pyrites.org.uk
-W: http://www.pyrites.org.uk/
+W: https://www.pyrites.org.uk/
 P: 1024/36115A04 F4E1 3384 FCFD C055 15D6  BA4C AB03 FBF8 3611 5A04
 D: Occasional Linux hacker...
 S: (ask for current address)
@@ -1655,7 +1655,7 @@
 
 N: Harald Hoyer
 E: harald@redhat.com
-W: http://www.harald-hoyer.de
+W: https://www.harald-hoyer.de
 D: ip_masq_quake
 D: md boot support
 S: Am Strand 5
@@ -1856,7 +1856,7 @@
 D: Author of the COSA/SRP sync serial board driver.
 D: Port of the syncppp.c from the 2.0 to the 2.1 kernel.
 P: 1024/D3498839 0D 99 A7 FB 20 66 05 D7  8B 35 FC DE 05 B1 8A 5E
-W: http://www.fi.muni.cz/~kas/
+W: https://www.fi.muni.cz/~kas/
 S: c/o Faculty of Informatics, Masaryk University
 S: Botanicka' 68a
 S: 602 00 Brno
@@ -2017,7 +2017,7 @@
 
 N: Gene Kozin
 E: 74604.152@compuserve.com
-W: http://www.sangoma.com
+W: https://www.sangoma.com
 D: WAN Router & Sangoma WAN drivers
 S: Sangoma Technologies Inc.
 S: 7170 Warden Avenue, Unit 2
@@ -2112,7 +2112,7 @@
 
 N: Jaroslav Kysela
 E: perex@perex.cz
-W: http://www.perex.cz
+W: https://www.perex.cz
 D: Original Author and Maintainer for HP 10/100 Mbit Network Adapters
 D: ISA PnP
 S: Sindlovy Dvory 117
@@ -2316,7 +2316,7 @@
 
 N: Daniel J. Maas
 E: dmaas@dcine.com
-W: http://www.maasdigital.com
+W: https://www.maasdigital.com
 D: dv1394
 
 N: Hamish Macdonald
@@ -2647,7 +2647,7 @@
 
 N: Paul Moore
 E: paul@paul-moore.com
-W: http://www.paul-moore.com
+W: https://www.paul-moore.com
 D: NetLabel, SELinux, audit
 
 N: James Morris
@@ -2786,7 +2786,7 @@
 E: niemi@tux.org
 W: http://www.tux.org/~niemi/
 D: Assistant maintainer of Mtools, fdutils, and floppy driver
-D: Administrator of Tux.Org Linux Server, http://www.tux.org
+D: Administrator of Tux.Org Linux Server, https://www.tux.org
 S: 2364 Old Trail Drive
 S: Reston, Virginia 20191
 S: USA
@@ -2850,7 +2850,7 @@
 
 N: Mikulas Patocka
 E: mikulas@artax.karlin.mff.cuni.cz
-W: http://artax.karlin.mff.cuni.cz/~mikulas/
+W: https://artax.karlin.mff.cuni.cz/~mikulas/
 P: 1024/BB11D2D5 A0 F1 28 4A C4 14 1E CF  92 58 7A 8F 69 BC A4 D3
 D: Read/write HPFS filesystem
 S: Weissova 8
@@ -2872,7 +2872,7 @@
 
 N: Barak A. Pearlmutter
 E: bap@cs.unm.edu
-W: http://www.cs.unm.edu/~bap/
+W: https://www.cs.unm.edu/~bap/
 P: 512/602D785D 9B A1 83 CD EE CB AD 93  20 C6 4C B7 F5 E9 60 D4
 D: Author of mark-and-sweep GC integrated by Alan Cox
 S: Computer Science Department
@@ -3035,7 +3035,7 @@
 
 N: Daniel Quinlan
 E: quinlan@pathname.com
-W: http://www.pathname.com/~quinlan/
+W: https://www.pathname.com/~quinlan/
 D: FSSTND coordinator; FHS editor
 D: random Linux documentation, patches, and hacks
 S: 4390 Albany Drive #41A
@@ -3130,7 +3130,7 @@
 
 N: Rik van Riel
 E: riel@redhat.com
-W: http://www.surriel.com/
+W: https://www.surriel.com/
 D: Linux-MM site, Documentation/admin-guide/sysctl/*, swap/mm readaround
 D: kswapd fixes, random kernel hacker, rmap VM,
 D: nl.linux.org administrator, minor scheduler additions
@@ -3246,7 +3246,7 @@
 
 N: Paul `Rusty' Russell
 E: rusty@rustcorp.com.au
-W: http://ozlabs.org/~rusty
+W: https://ozlabs.org/~rusty
 D: Ruggedly handsome.
 D: netfilter, ipchains with Michael Neuling.
 S: 52 Moore St
@@ -3369,7 +3369,7 @@
 
 N: Robert Schwebel
 E: robert@schwebel.de
-W: http://www.schwebel.de
+W: https://www.schwebel.de
 D: Embedded hacker and book author,
 D: AMD Elan support for Linux
 S: Pengutronix
@@ -3545,7 +3545,7 @@
 N: Henrik Storner
 E: storner@image.dk
 W: http://www.image.dk/~storner/
-W: http://www.sslug.dk/
+W: https://www.sslug.dk/
 D: Configure script: Invented tristate for module-configuration
 D: vfat/msdos integration, kerneld docs, Linux promotion
 D: Miscellaneous bug-fixes
@@ -3579,7 +3579,7 @@
 
 N: Eugene Surovegin
 E: ebs@ebshome.net
-W: http://kernel.ebshome.net/
+W: https://kernel.ebshome.net/
 P: 1024D/AE5467F1 FF22 39F1 6728 89F6 6E6C  2365 7602 F33D AE54 67F1
 D: Embedded PowerPC 4xx: EMAC, I2C, PIC and random hacks/fixes
 S: Sunnyvale, California 94085
@@ -3609,7 +3609,7 @@
 
 N: Urs Thuermann
 E: urs.thuermann@volkswagen.de
-W: http://www.volkswagen.de
+W: https://www.volkswagen.de
 D: Controller Area Network (network layer core)
 S: Brieffach 1776
 S: 38436 Wolfsburg
@@ -3656,7 +3656,7 @@
 
 N: Andrew Tridgell
 E: tridge@samba.org
-W: http://samba.org/tridge/
+W: https://samba.org/tridge/
 D: dosemu, networking, samba
 S: 3 Ballow Crescent
 S: MacGregor A.C.T 2615
@@ -3894,7 +3894,7 @@
 N: David Weinehall
 E: tao@acc.umu.se
 P: 1024D/DC47CA16 7ACE 0FB0 7A74 F994 9B36  E1D1 D14E 8526 DC47 CA16
-W: http://www.acc.umu.se/~tao/
+W: https://www.acc.umu.se/~tao/
 D: v2.0 kernel maintainer
 D: Fixes for the NE/2-driver
 D: Miscellaneous MCA-support
@@ -3919,7 +3919,7 @@
 N: Harald Welte
 E: laforge@netfilter.org
 P: 1024D/30F48BFF DBDE 6912 8831 9A53 879B  9190 5DA5 C655 30F4 8BFF
-W: http://gnumonks.org/users/laforge
+W: https://gnumonks.org/users/laforge
 D: netfilter: new nat helper infrastructure
 D: netfilter: ULOG, ECN, DSCP target
 D: netfilter: TTL match
diff --git a/Documentation/ABI/stable/sysfs-driver-dma-idxd b/Documentation/ABI/stable/sysfs-driver-dma-idxd
index b5bebf6..1af9c41 100644
--- a/Documentation/ABI/stable/sysfs-driver-dma-idxd
+++ b/Documentation/ABI/stable/sysfs-driver-dma-idxd
@@ -1,47 +1,47 @@
-What:		sys/bus/dsa/devices/dsa<m>/version
+What:		/sys/bus/dsa/devices/dsa<m>/version
 Date:		Apr 15, 2020
 KernelVersion:	5.8.0
 Contact:	dmaengine@vger.kernel.org
 Description:	The hardware version number.
 
-What:           sys/bus/dsa/devices/dsa<m>/cdev_major
+What:           /sys/bus/dsa/devices/dsa<m>/cdev_major
 Date:           Oct 25, 2019
-KernelVersion: 	5.6.0
+KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:	The major number that the character device driver assigned to
 		this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/errors
+What:           /sys/bus/dsa/devices/dsa<m>/errors
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The error information for this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_batch_size
+What:           /sys/bus/dsa/devices/dsa<m>/max_batch_size
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The largest number of work descriptors in a batch.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_work_queues_size
+What:           /sys/bus/dsa/devices/dsa<m>/max_work_queues_size
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The maximum work queue size supported by this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_engines
+What:           /sys/bus/dsa/devices/dsa<m>/max_engines
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The maximum number of engines supported by this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_groups
+What:           /sys/bus/dsa/devices/dsa<m>/max_groups
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The maximum number of groups can be created under this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_tokens
+What:           /sys/bus/dsa/devices/dsa<m>/max_tokens
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
@@ -50,7 +50,7 @@
 		implementation, and these resources are allocated by engines to
 		support operations.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_transfer_size
+What:           /sys/bus/dsa/devices/dsa<m>/max_transfer_size
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
@@ -58,57 +58,57 @@
 		perform the operation. The maximum transfer size is dependent on
 		the workqueue the descriptor was submitted to.
 
-What:           sys/bus/dsa/devices/dsa<m>/max_work_queues
+What:           /sys/bus/dsa/devices/dsa<m>/max_work_queues
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The maximum work queue number that this device supports.
 
-What:           sys/bus/dsa/devices/dsa<m>/numa_node
+What:           /sys/bus/dsa/devices/dsa<m>/numa_node
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The numa node number for this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/op_cap
+What:           /sys/bus/dsa/devices/dsa<m>/op_cap
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The operation capability bit mask specify the operation types
 		supported by the this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/state
+What:           /sys/bus/dsa/devices/dsa<m>/state
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The state information of this device. It can be either enabled
 		or disabled.
 
-What:           sys/bus/dsa/devices/dsa<m>/group<m>.<n>
+What:           /sys/bus/dsa/devices/dsa<m>/group<m>.<n>
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The assigned group under this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/engine<m>.<n>
+What:           /sys/bus/dsa/devices/dsa<m>/engine<m>.<n>
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The assigned engine under this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/wq<m>.<n>
+What:           /sys/bus/dsa/devices/dsa<m>/wq<m>.<n>
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The assigned work queue under this device.
 
-What:           sys/bus/dsa/devices/dsa<m>/configurable
+What:           /sys/bus/dsa/devices/dsa<m>/configurable
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    To indicate if this device is configurable or not.
 
-What:           sys/bus/dsa/devices/dsa<m>/token_limit
+What:           /sys/bus/dsa/devices/dsa<m>/token_limit
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
@@ -116,19 +116,19 @@
 		one time by operations that access low bandwidth memory in the
 		device.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/group_id
+What:           /sys/bus/dsa/devices/wq<m>.<n>/group_id
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The group id that this work queue belongs to.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/size
+What:           /sys/bus/dsa/devices/wq<m>.<n>/size
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The work queue size for this work queue.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/type
+What:           /sys/bus/dsa/devices/wq<m>.<n>/type
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
@@ -136,20 +136,20 @@
 		queue usages in the kernel space or "user" type for work queue
 		usages by applications in user space.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/cdev_minor
+What:           /sys/bus/dsa/devices/wq<m>.<n>/cdev_minor
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The minor number assigned to this work queue by the character
 		device driver.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/mode
+What:           /sys/bus/dsa/devices/wq<m>.<n>/mode
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The work queue mode type for this work queue.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/priority
+What:           /sys/bus/dsa/devices/wq<m>.<n>/priority
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
@@ -157,20 +157,20 @@
 		other work queue in the same group to control quality of service
 		for dispatching work from multiple workqueues in the same group.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/state
+What:           /sys/bus/dsa/devices/wq<m>.<n>/state
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The current state of the work queue.
 
-What:           sys/bus/dsa/devices/wq<m>.<n>/threshold
+What:           /sys/bus/dsa/devices/wq<m>.<n>/threshold
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
 Description:    The number of entries in this work queue that may be filled
 		via a limited portal.
 
-What:           sys/bus/dsa/devices/engine<m>.<n>/group_id
+What:           /sys/bus/dsa/devices/engine<m>.<n>/group_id
 Date:           Oct 25, 2019
 KernelVersion:  5.6.0
 Contact:        dmaengine@vger.kernel.org
diff --git a/Documentation/ABI/stable/sysfs-driver-mlxreg-io b/Documentation/ABI/stable/sysfs-driver-mlxreg-io
index b0d90cc..fd9a804 100644
--- a/Documentation/ABI/stable/sysfs-driver-mlxreg-io
+++ b/Documentation/ABI/stable/sysfs-driver-mlxreg-io
@@ -206,3 +206,20 @@
 		regulator devices.
 
 		The file is read only.
+
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld1_pn
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld2_pn
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld3_pn
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld4_pn
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld1_version_min
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld2_version_min
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld3_version_min
+What:		/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/cpld4_version_min
+Date:		July 2020
+KernelVersion:	5.9
+Contact:	Vadim Pasternak <vadimpmellanox.com>
+Description:	These files show with which CPLD part numbers and minor
+		versions have been burned CPLD devices equipped on a
+		system.
+
+		The files are read only.
diff --git a/drivers/staging/speakup/sysfs-driver-speakup b/Documentation/ABI/stable/sysfs-driver-speakup
similarity index 100%
rename from drivers/staging/speakup/sysfs-driver-speakup
rename to Documentation/ABI/stable/sysfs-driver-speakup
diff --git a/Documentation/ABI/testing/debugfs-turris-mox-rwtm b/Documentation/ABI/testing/debugfs-turris-mox-rwtm
new file mode 100644
index 0000000..2b3255e
--- /dev/null
+++ b/Documentation/ABI/testing/debugfs-turris-mox-rwtm
@@ -0,0 +1,9 @@
+What:		/sys/kernel/debug/turris-mox-rwtm/do_sign
+Date:		Jun 2020
+KernelVersion:	5.8
+Contact:	Marek Behún <marek.behun@nic.cz>
+Description:	(W) Message to sign with the ECDSA private key stored in
+		    device's OTP. The message must be exactly 64 bytes (since
+		    this is intended for SHA-512 hashes).
+		(R) The resulting signature, 136 bytes. This contains the R and
+		    S values of the ECDSA signature, both in big-endian format.
diff --git a/Documentation/ABI/testing/dev-kmsg b/Documentation/ABI/testing/dev-kmsg
index f307506e..3c0bb76 100644
--- a/Documentation/ABI/testing/dev-kmsg
+++ b/Documentation/ABI/testing/dev-kmsg
@@ -56,6 +56,17 @@
 		  seek after the last record available at the time
 		  the last SYSLOG_ACTION_CLEAR was issued.
 
+		Other seek operations or offsets are not supported because of
+		the special behavior this device has. The device allows to read
+		or write only whole variable length messages (records) that are
+		stored in a ring buffer.
+
+		Because of the non-standard behavior also the error values are
+		non-standard. -ESPIPE is returned for non-zero offset. -EINVAL
+		is returned for other operations, e.g. SEEK_CUR. This behavior
+		and values are historical and could not be modified without the
+		risk of breaking userspace.
+
 		The output format consists of a prefix carrying the syslog
 		prefix including priority and facility, the 64 bit message
 		sequence number and the monotonic timestamp in microseconds,
diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index ed8c14f..2322eb7 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -273,6 +273,24 @@
 		device ("host-aware" or "host-managed" zone model). For regular
 		block devices, the value is always 0.
 
+What:		/sys/block/<disk>/queue/max_active_zones
+Date:		July 2020
+Contact:	Niklas Cassel <niklas.cassel@wdc.com>
+Description:
+		For zoned block devices (zoned attribute indicating
+		"host-managed" or "host-aware"), the sum of zones belonging to
+		any of the zone states: EXPLICIT OPEN, IMPLICIT OPEN or CLOSED,
+		is limited by this value. If this value is 0, there is no limit.
+
+What:		/sys/block/<disk>/queue/max_open_zones
+Date:		July 2020
+Contact:	Niklas Cassel <niklas.cassel@wdc.com>
+Description:
+		For zoned block devices (zoned attribute indicating
+		"host-managed" or "host-aware"), the sum of zones belonging to
+		any of the zone states: EXPLICIT OPEN or IMPLICIT OPEN,
+		is limited by this value. If this value is 0, there is no limit.
+
 What:		/sys/block/<disk>/queue/chunk_sectors
 Date:		September 2016
 Contact:	Hannes Reinecke <hare@suse.com>
diff --git a/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7 b/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
index e8698af..e82fc37 100644
--- a/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
+++ b/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
@@ -43,6 +43,13 @@
 		This sysfs interface exposes the number of cores per chip
 		present in the system.
 
+What:		/sys/devices/hv_24x7/cpumask
+Date:		July 2020
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	read only
+		This sysfs file exposes the cpumask which is designated to make
+		HCALLs to retrieve hv-24x7 pmu event counter data.
+
 What:		/sys/bus/event_source/devices/hv_24x7/event_descs/<event-name>
 Date:		February 2014
 Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index d3e53a6..5c62bfb 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -1569,7 +1569,8 @@
 KernelVersion:	4.3
 Contact:	linux-iio@vger.kernel.org
 Description:
-		Raw (unscaled no offset etc.) percentage reading of a substance.
+		Raw (unscaled no offset etc.) reading of a substance. Units
+		after application of scale and offset are percents.
 
 What:		/sys/bus/iio/devices/iio:deviceX/in_resistance_raw
 What:		/sys/bus/iio/devices/iio:deviceX/in_resistanceX_raw
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-icm42600 b/Documentation/ABI/testing/sysfs-bus-iio-icm42600
new file mode 100644
index 0000000..0bf1fd4
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-iio-icm42600
@@ -0,0 +1,20 @@
+What:		/sys/bus/iio/devices/iio:deviceX/in_accel_x_calibbias
+What:		/sys/bus/iio/devices/iio:deviceX/in_accel_y_calibbias
+What:		/sys/bus/iio/devices/iio:deviceX/in_accel_z_calibbias
+What:		/sys/bus/iio/devices/iio:deviceX/in_anglvel_x_calibbias
+What:		/sys/bus/iio/devices/iio:deviceX/in_anglvel_y_calibbias
+What:		/sys/bus/iio/devices/iio:deviceX/in_anglvel_z_calibbias
+KernelVersion:  5.8
+Contact:        linux-iio@vger.kernel.org
+Description:
+		Hardware applied calibration offset (assumed to fix production
+		inaccuracies). Values represent a real physical offset expressed
+		in SI units (m/s^2 for accelerometer and rad/s for gyroscope).
+
+What:		/sys/bus/iio/devices/iio:deviceX/in_accel_calibbias_available
+What:		/sys/bus/iio/devices/iio:deviceX/in_anglvel_calibbias_available
+KernelVersion:  5.8
+Contact:        linux-iio@vger.kernel.org
+Description:
+		Range of available values for hardware offset. Values in SI
+		units (m/s^2 for accelerometer and rad/s for gyroscope).
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-scd30 b/Documentation/ABI/testing/sysfs-bus-iio-scd30
new file mode 100644
index 0000000..b9712f3
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-iio-scd30
@@ -0,0 +1,34 @@
+What:		/sys/bus/iio/devices/iio:deviceX/calibration_auto_enable
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	linux-iio@vger.kernel.org
+Description:
+		Contaminants build-up in the measurement chamber or optical
+		elements deterioration leads to sensor drift.
+
+		One can compensate for sensor drift by using automatic self
+		calibration procedure (asc).
+
+		Writing 1 or 0 to this attribute will respectively activate or
+		deactivate asc.
+
+		Upon reading current asc status is returned.
+
+What:		/sys/bus/iio/devices/iio:deviceX/calibration_forced_value
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	linux-iio@vger.kernel.org
+Description:
+		Contaminants build-up in the measurement chamber or optical
+		elements deterioration leads to sensor drift.
+
+		One can compensate for sensor drift by using forced
+		recalibration (frc). This is useful in case there's known
+		co2 reference available nearby the sensor.
+
+		Picking value from the range [400 1 2000] and writing it to the
+		sensor will set frc.
+
+		Upon reading current frc value is returned. Note that after
+		power cycling default value (i.e 400) is returned even though
+		internally sensor had recalibrated itself.
diff --git a/Documentation/ABI/testing/sysfs-bus-nfit b/Documentation/ABI/testing/sysfs-bus-nfit
index a1cb44d..e4f76e7 100644
--- a/Documentation/ABI/testing/sysfs-bus-nfit
+++ b/Documentation/ABI/testing/sysfs-bus-nfit
@@ -202,6 +202,25 @@
 		functions. See the section named 'NVDIMM Root Device _DSMs' in
 		the ACPI specification.
 
+What:		/sys/bus/nd/devices/ndbusX/nfit/firmware_activate_noidle
+Date:		Apr, 2020
+KernelVersion:	v5.8
+Contact:	linux-nvdimm@lists.01.org
+Description:
+		(RW) The Intel platform implementation of firmware activate
+		support exposes an option let the platform force idle devices in
+		the system over the activation event, or trust that the OS will
+		do it. The safe default is to let the platform force idle
+		devices since the kernel is already in a suspend state, and on
+		the chance that a driver does not properly quiesce bus-mastering
+		after a suspend callback the platform will handle it.  However,
+		the activation might abort if, for example, platform firmware
+		determines that the activation time exceeds the max PCI-E
+		completion timeout. Since the platform does not know whether the
+		OS is running the activation from a suspend context it aborts,
+		but if the system owner trusts driver suspend callback to be
+		sufficient then 'firmware_activation_noidle' can be
+		enabled to bypass the activation abort.
 
 What:		/sys/bus/nd/devices/regionX/nfit/range_index
 Date:		Jun, 2015
diff --git a/Documentation/ABI/testing/sysfs-bus-nvdimm b/Documentation/ABI/testing/sysfs-bus-nvdimm
new file mode 100644
index 0000000..d643802
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-nvdimm
@@ -0,0 +1,2 @@
+The libnvdimm sub-system implements a common sysfs interface for
+platform nvdimm resources. See Documentation/driver-api/nvdimm/.
diff --git a/Documentation/ABI/testing/sysfs-bus-optee-devices b/Documentation/ABI/testing/sysfs-bus-optee-devices
new file mode 100644
index 0000000..0f58701
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-optee-devices
@@ -0,0 +1,8 @@
+What:		/sys/bus/tee/devices/optee-ta-<uuid>/
+Date:           May 2020
+KernelVersion   5.8
+Contact:        op-tee@lists.trustedfirmware.org
+Description:
+		OP-TEE bus provides reference to registered drivers under this directory. The <uuid>
+		matches Trusted Application (TA) driver and corresponding TA in secure OS. Drivers
+		are free to create needed API under optee-ta-<uuid> directory.
diff --git a/Documentation/ABI/testing/sysfs-bus-papr-pmem b/Documentation/ABI/testing/sysfs-bus-papr-pmem
index 5b10d03..c1a6727 100644
--- a/Documentation/ABI/testing/sysfs-bus-papr-pmem
+++ b/Documentation/ABI/testing/sysfs-bus-papr-pmem
@@ -25,3 +25,30 @@
 				  NVDIMM have been scrubbed.
 		* "locked"	: Indicating that NVDIMM contents cant
 				  be modified until next power cycle.
+
+What:		/sys/bus/nd/devices/nmemX/papr/perf_stats
+Date:		May, 2020
+KernelVersion:	v5.9
+Contact:	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, linux-nvdimm@lists.01.org,
+Description:
+		(RO) Report various performance stats related to papr-scm NVDIMM
+		device.  Each stat is reported on a new line with each line
+		composed of a stat-identifier followed by it value. Below are
+		currently known dimm performance stats which are reported:
+
+		* "CtlResCt" : Controller Reset Count
+		* "CtlResTm" : Controller Reset Elapsed Time
+		* "PonSecs " : Power-on Seconds
+		* "MemLife " : Life Remaining
+		* "CritRscU" : Critical Resource Utilization
+		* "HostLCnt" : Host Load Count
+		* "HostSCnt" : Host Store Count
+		* "HostSDur" : Host Store Duration
+		* "HostLDur" : Host Load Duration
+		* "MedRCnt " : Media Read Count
+		* "MedWCnt " : Media Write Count
+		* "MedRDur " : Media Read Duration
+		* "MedWDur " : Media Write Duration
+		* "CchRHCnt" : Cache Read Hit Count
+		* "CchWHCnt" : Cache Write Hit Count
+		* "FastWCnt" : Fast Write Count
\ No newline at end of file
diff --git a/Documentation/ABI/testing/sysfs-bus-platform b/Documentation/ABI/testing/sysfs-bus-platform
index 5172a61..194ca70 100644
--- a/Documentation/ABI/testing/sysfs-bus-platform
+++ b/Documentation/ABI/testing/sysfs-bus-platform
@@ -18,3 +18,13 @@
 		devices to opt-out of driver binding using a driver_override
 		name such as "none".  Only a single driver may be specified in
 		the override, there is no support for parsing delimiters.
+
+What:		/sys/bus/platform/devices/.../numa_node
+Date:		June 2020
+Contact:	Barry Song <song.bao.hua@hisilicon.com>
+Description:
+		This file contains the NUMA node to which the platform device
+		is attached. It won't be visible if the node is unknown. The
+		value comes from an ACPI _PXM method or a similar firmware
+		source. Initial users for this file would be devices like
+		arm smmu which are populated by arm64 acpi_iort.
diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index 82e80de..dd565c3 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -178,11 +178,18 @@
 Contact:	thunderbolt-software@lists.01.org
 Description:	When new NVM image is written to the non-active NVM
 		area (through non_activeX NVMem device), the
-		authentication procedure is started by writing 1 to
-		this file. If everything goes well, the device is
+		authentication procedure is started by writing to
+		this file.
+		If everything goes well, the device is
 		restarted with the new NVM firmware. If the image
 		verification fails an error code is returned instead.
 
+		This file will accept writing values "1" or "2"
+		- Writing "1" will flush the image to the storage
+		area and authenticate the image in one action.
+		- Writing "2" will run some basic validation on the image
+		and flush it to the storage area.
+
 		When read holds status of the last authentication
 		operation if an error occurred during the process. This
 		is directly the status value from the DMA configuration
@@ -236,3 +243,49 @@
 Contact:	thunderbolt-software@lists.01.org
 Description:	This contains XDomain service specific settings as
 		bitmask. Format: %x
+
+What:		/sys/bus/thunderbolt/devices/<device>:<port>.<index>/device
+Date:		Oct 2020
+KernelVersion:	v5.9
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	Retimer device identifier read from the hardware.
+
+What:		/sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_authenticate
+Date:		Oct 2020
+KernelVersion:	v5.9
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	When new NVM image is written to the non-active NVM
+		area (through non_activeX NVMem device), the
+		authentication procedure is started by writing 1 to
+		this file. If everything goes well, the device is
+		restarted with the new NVM firmware. If the image
+		verification fails an error code is returned instead.
+
+		When read holds status of the last authentication
+		operation if an error occurred during the process.
+		Format: %x.
+
+What:		/sys/bus/thunderbolt/devices/<device>:<port>.<index>/nvm_version
+Date:		Oct 2020
+KernelVersion:	v5.9
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	Holds retimer NVM version number. Format: %x.%x, major.minor.
+
+What:		/sys/bus/thunderbolt/devices/<device>:<port>.<index>/vendor
+Date:		Oct 2020
+KernelVersion:	v5.9
+Contact:	Mika Westerberg <mika.westerberg@linux.intel.com>
+Description:	Retimer vendor identifier read from the hardware.
+
+What:		/sys/bus/thunderbolt/devices/.../nvm_authenticate_on_disconnect
+Date:		Oct 2020
+KernelVersion:	v5.9
+Contact:	Mario Limonciello <mario.limonciello@dell.com>
+Description:	For supported devices, automatically authenticate the new Thunderbolt
+		image when the device is disconnected from the host system.
+
+		This file will accept writing values "1" or "2"
+		- Writing "1" will flush the image to the storage
+		area and prepare the device for authentication on disconnect.
+		- Writing "2" will run some basic validation on the image
+		and flush it to the storage area.
diff --git a/Documentation/ABI/testing/sysfs-class-devfreq b/Documentation/ABI/testing/sysfs-class-devfreq
index 9758eb8..deefffb 100644
--- a/Documentation/ABI/testing/sysfs-class-devfreq
+++ b/Documentation/ABI/testing/sysfs-class-devfreq
@@ -108,3 +108,15 @@
 		frequency requested by governors and min_freq.
 		The max_freq overrides min_freq because max_freq may be
 		used to throttle devices to avoid overheating.
+
+What:		/sys/class/devfreq/.../timer
+Date:		July 2020
+Contact:	Chanwoo Choi <cw00.choi@samsung.com>
+Description:
+		This ABI shows and stores the kind of work timer by users.
+		This work timer is used by devfreq workqueue in order to
+		monitor the device status such as utilization. The user
+		can change the work timer on runtime according to their demand
+		as following:
+			echo deferrable > /sys/class/devfreq/.../timer
+			echo delayed > /sys/class/devfreq/.../timer
diff --git a/Documentation/ABI/testing/sysfs-class-devlink b/Documentation/ABI/testing/sysfs-class-devlink
new file mode 100644
index 0000000..64791b6
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-devlink
@@ -0,0 +1,126 @@
+What:		/sys/class/devlink/.../
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		Provide a place in sysfs for the device link objects in the
+		kernel at any given time.  The name of a device link directory,
+		denoted as ... above, is of the form <supplier>--<consumer>
+		where <supplier> is the supplier device name and <consumer> is
+		the consumer device name.
+
+What:		/sys/class/devlink/.../auto_remove_on
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file indicates if the device link will ever be
+		automatically removed by the driver core when the consumer and
+		supplier devices themselves are still present.
+
+		This will be one of the following strings:
+
+		'consumer unbind'
+		'supplier unbind'
+		'never'
+
+		'consumer unbind' means the device link will be removed when
+		the consumer's driver is unbound from the consumer device.
+
+		'supplier unbind' means the device link will be removed when
+		the supplier's driver is unbound from the supplier device.
+
+		'never' means the device link will never be automatically
+		removed as long as the supplier and consumer devices
+		themselves are still present.
+
+What:		/sys/class/devlink/.../consumer
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file is a symlink to the consumer device's sysfs directory.
+
+What:		/sys/class/devlink/.../runtime_pm
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file indicates if the device link has any impact on the
+		runtime power management behavior of the consumer and supplier
+		devices. For example: Making sure the supplier doesn't enter
+		runtime suspend while the consumer is active.
+
+		This will be one of the following strings:
+
+		'0' - Does not affect runtime power management
+		'1' - Affects runtime power management
+
+What:		/sys/class/devlink/.../status
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file indicates the status of the device link. The status
+		of a device link is affected by whether the supplier and
+		consumer devices have been bound to their corresponding
+		drivers. The status of a device link also affects the binding
+		and unbinding of the supplier and consumer devices with their
+		drivers and also affects whether the software state of the
+		supplier device is synced with the hardware state of the
+		supplier device after boot up.
+		See also: sysfs-devices-state_synced.
+
+		This will be one of the following strings:
+
+		'not tracked'
+		'dormant'
+		'available'
+		'consumer probing'
+		'active'
+		'supplier unbinding'
+		'unknown'
+
+		'not tracked' means this device link does not track the status
+		and has no impact on the binding, unbinding and syncing of the
+		hardware and software device state.
+
+		'dormant' means the supplier and the consumer devices have not
+		been bound to their drivers.
+
+		'available' means the supplier has bound to its driver and is
+		available to supply resources to the consumer device.
+
+		'consumer probing' means the consumer device is currently
+		trying to bind to its driver.
+
+		'active' means the supplier and consumer devices have both
+		bound successfully to their drivers.
+
+		'supplier unbinding' means the supplier device is currently in
+		the process of unbinding from its driver.
+
+		'unknown' means the state of the device link is not any of the
+		above. If this is ever the value, there's a bug in the kernel.
+
+What:		/sys/class/devlink/.../supplier
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file is a symlink to the supplier device's sysfs directory.
+
+What:		/sys/class/devlink/.../sync_state_only
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		This file indicates if the device link is limited to only
+		affecting the syncing of the hardware and software state of the
+		supplier device.
+
+		This will be one of the following strings:
+
+		'0'
+		'1'
+
+		'0' means the device link can affect other device behaviors
+		like binding/unbinding, suspend/resume, runtime power
+		management, etc.
+
+		'1' means the device link will only affect the syncing of
+		hardware and software state of the supplier device after boot
+		up and does not affect other behaviors of the devices.
diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia b/Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia
new file mode 100644
index 0000000..795a5de
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-led-driver-turris-omnia
@@ -0,0 +1,14 @@
+What:		/sys/class/leds/<led>/device/brightness
+Date:		July 2020
+KernelVersion:	5.9
+Contact:	Marek Behún <marek.behun@nic.cz>
+Description:	(RW) On the front panel of the Turris Omnia router there is also
+		a button which can be used to control the intensity of all the
+		LEDs at once, so that if they are too bright, the user can dim them.
+
+		The microcontroller cycles between 8 levels of this global
+		brightness (from 100% to 0%), but this setting can have any
+		integer value between 0 and 100. It is therefore convenient to be
+		able to change this setting from software.
+
+		Format: %i
diff --git a/Documentation/ABI/testing/sysfs-class-led-multicolor b/Documentation/ABI/testing/sysfs-class-led-multicolor
new file mode 100644
index 0000000..eeeddcb
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-led-multicolor
@@ -0,0 +1,35 @@
+What:		/sys/class/leds/<led>/brightness
+Date:		March 2020
+KernelVersion:	5.9
+Contact:	Dan Murphy <dmurphy@ti.com>
+Description:	read/write
+		Writing to this file will update all LEDs within the group to a
+		calculated percentage of what each color LED intensity is set
+		to. The percentage is calculated for each grouped LED via the
+		equation below:
+
+		led_brightness = brightness * multi_intensity/max_brightness
+
+		For additional details please refer to
+		Documentation/leds/leds-class-multicolor.rst.
+
+		The value of the LED is from 0 to
+		/sys/class/leds/<led>/max_brightness.
+
+What:		/sys/class/leds/<led>/multi_index
+Date:		March 2020
+KernelVersion:	5.9
+Contact:	Dan Murphy <dmurphy@ti.com>
+Description:	read
+		The multi_index array, when read, will output the LED colors
+		as an array of strings as they are indexed in the
+		multi_intensity file.
+
+What:		/sys/class/leds/<led>/multi_intensity
+Date:		March 2020
+KernelVersion:	5.9
+Contact:	Dan Murphy <dmurphy@ti.com>
+Description:	read/write
+		This file contains an array of integers. The order of the
+		components is described by the multi_index array. The maximum
+		intensity should not exceed /sys/class/leds/<led>/max_brightness.
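+
+		For illustration, a minimal sketch of the brightness equation
+		above (the numbers are made-up example values)::
+
+			/* brightness = 128, max_brightness = 255 */
+			unsigned int brightness = 128, max_brightness = 255;
+			unsigned int multi_intensity[3] = { 255, 100, 0 };	/* R, G, B */
+			unsigned int led_brightness[3];
+			int i;
+
+			for (i = 0; i < 3; i++)
+				led_brightness[i] = brightness *
+					multi_intensity[i] / max_brightness;
+			/* led_brightness is now { 128, 50, 0 } */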
diff --git a/Documentation/ABI/testing/sysfs-class-mei b/Documentation/ABI/testing/sysfs-class-mei
index e9dc110..5c52372 100644
--- a/Documentation/ABI/testing/sysfs-class-mei
+++ b/Documentation/ABI/testing/sysfs-class-mei
@@ -90,3 +90,16 @@
 		The ME FW writes Glitch Detection HW (TRC)
 		status information into trc status register
 		for BIOS and OS to monitor fw health.
+
+What:		/sys/class/mei/meiN/kind
+Date:		Jul 2020
+KernelVersion:	5.8
+Contact:	Tomas Winkler <tomas.winkler@intel.com>
+Description:	Display the kind of the device.
+
+		Generic devices are marked as "mei",
+		while special purpose devices have
+		their own names.
+		Available options:
+		- mei:  generic mei device.
+		- itouch:  itouch (ipts) mei device.
diff --git a/Documentation/ABI/testing/sysfs-class-ocxl b/Documentation/ABI/testing/sysfs-class-ocxl
index b5b1fa1..ae1276e 100644
--- a/Documentation/ABI/testing/sysfs-class-ocxl
+++ b/Documentation/ABI/testing/sysfs-class-ocxl
@@ -33,3 +33,14 @@
 Contact:	linuxppc-dev@lists.ozlabs.org
 Description:	read/write
 		Give access to the global mmio area for the AFU
+
+What:		/sys/class/ocxl/<afu name>/reload_on_reset
+Date:		February 2020
+Contact:	linuxppc-dev@lists.ozlabs.org
+Description:	read/write
+		Control whether the FPGA is reloaded on a link reset. Enabled
+		through a vendor-specific logic block on the FPGA.
+			0	Do not reload FPGA image from flash
+			1	Reload FPGA image from flash
+			unavailable
+				The device does not support this capability
diff --git a/Documentation/ABI/testing/sysfs-class-power b/Documentation/ABI/testing/sysfs-class-power
index 216d61a..40213c7 100644
--- a/Documentation/ABI/testing/sysfs-class-power
+++ b/Documentation/ABI/testing/sysfs-class-power
@@ -205,7 +205,8 @@
 		Valid values: "Unknown", "Good", "Overheat", "Dead",
 			      "Over voltage", "Unspecified failure", "Cold",
 			      "Watchdog timer expire", "Safety timer expire",
-			      "Over current", "Calibration required"
+			      "Over current", "Calibration required", "Warm",
+			      "Cool", "Hot"
 
 What:		/sys/class/power_supply/<supply_name>/precharge_current
 Date:		June 2017
diff --git a/Documentation/ABI/testing/sysfs-class-power-wilco b/Documentation/ABI/testing/sysfs-class-power-wilco
index da1d6ff..84fde1d 100644
--- a/Documentation/ABI/testing/sysfs-class-power-wilco
+++ b/Documentation/ABI/testing/sysfs-class-power-wilco
@@ -14,6 +14,10 @@
 			Charging begins when level drops below
 			charge_control_start_threshold, and ceases when
 			level is above charge_control_end_threshold.
+		Long Life: Customized charge rate for longer battery life.
+			On Wilco devices this mode is pre-configured in the factory
+			through the EC's private PID. Switching to a different mode will
+			be denied by the Wilco EC when Long Life mode is enabled.
 
 What:		/sys/class/power_supply/wilco-charger/charge_control_start_threshold
 Date:		April 2019
diff --git a/Documentation/ABI/testing/sysfs-devices-consumer b/Documentation/ABI/testing/sysfs-devices-consumer
new file mode 100644
index 0000000..1f06d74
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-consumer
@@ -0,0 +1,8 @@
+What:		/sys/devices/.../consumer:<consumer>
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		The /sys/devices/.../consumer:<consumer> entries are symlinks
+		to device links where this device is the supplier. <consumer>
+		denotes the name of the consumer in that device link. There can
+		be zero or more of these symlinks for a given device.
diff --git a/Documentation/ABI/testing/sysfs-devices-mapping b/Documentation/ABI/testing/sysfs-devices-mapping
new file mode 100644
index 0000000..490ccfd
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-mapping
@@ -0,0 +1,33 @@
+What:           /sys/devices/uncore_iio_x/dieX
+Date:           February 2020
+Contact:        Roman Sudarikov <roman.sudarikov@linux.intel.com>
+Description:
+                Each IIO stack (PCIe root port) has its own IIO PMON block, so
+                each dieX file (where X is die number) holds "Segment:Root Bus"
+                for PCIe root port, which can be monitored by that IIO PMON
+                block.
+                For example, on a 4-die Xeon platform with up to 6 IIO stacks per
+                die and, therefore, 6 IIO PMON blocks per die, the mapping of
+                IIO PMON block 0 is exposed as follows:
+
+                $ ls /sys/devices/uncore_iio_0/die*
+                -r--r--r-- /sys/devices/uncore_iio_0/die0
+                -r--r--r-- /sys/devices/uncore_iio_0/die1
+                -r--r--r-- /sys/devices/uncore_iio_0/die2
+                -r--r--r-- /sys/devices/uncore_iio_0/die3
+
+                $ tail /sys/devices/uncore_iio_0/die*
+                ==> /sys/devices/uncore_iio_0/die0 <==
+                0000:00
+                ==> /sys/devices/uncore_iio_0/die1 <==
+                0000:40
+                ==> /sys/devices/uncore_iio_0/die2 <==
+                0000:80
+                ==> /sys/devices/uncore_iio_0/die3 <==
+                0000:c0
+
+                Which means:
+                IIO PMU 0 on die 0 belongs to PCI RP on bus 0x00, domain 0x0000
+                IIO PMU 0 on die 1 belongs to PCI RP on bus 0x40, domain 0x0000
+                IIO PMU 0 on die 2 belongs to PCI RP on bus 0x80, domain 0x0000
+                IIO PMU 0 on die 3 belongs to PCI RP on bus 0xc0, domain 0x0000
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu b/Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
index ae9af98..a8daceb 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
+++ b/Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
@@ -126,3 +126,39 @@
 			1	no action
 			0	firmware record the notify code defined
 				in b[15:0].
+
+What:		/sys/devices/platform/stratix10-rsu.0/dcmf0
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) Decision firmware copy 0 version information.
+
+What:		/sys/devices/platform/stratix10-rsu.0/dcmf1
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) Decision firmware copy 1 version information.
+
+What:		/sys/devices/platform/stratix10-rsu.0/dcmf2
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) Decision firmware copy 2 version information.
+
+What:		/sys/devices/platform/stratix10-rsu.0/dcmf3
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) Decision firmware copy 3 version information.
+
+What:		/sys/devices/platform/stratix10-rsu.0/max_retry
+Date:		June 2020
+KernelVersion:	5.8
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) The max retry parameter, stored in the firmware
+		decision IO section as a byte located at offset 0x18c.
diff --git a/Documentation/ABI/testing/sysfs-devices-soc b/Documentation/ABI/testing/sysfs-devices-soc
index ba3a3fa..ea999e2 100644
--- a/Documentation/ABI/testing/sysfs-devices-soc
+++ b/Documentation/ABI/testing/sysfs-devices-soc
@@ -26,6 +26,30 @@
 		Read-only attribute common to all SoCs. Contains SoC family name
 		(e.g. DB8500).
 
+		On many ARM-based SoCs with SMCCC v1.2+ compliant firmware
+		this will contain the JEDEC JEP106 manufacturer’s identification
+		code. The format is "jep106:XXYY" where XX is the identity code
+		and YY is the continuation code.
+
+		This manufacturer’s identification code is defined by one
+		or more eight (8) bit fields, each consisting of seven (7)
+		data bits plus one (1) odd parity bit. It is a single field,
+		limiting the possible number of vendors to 126. To expand
+		the maximum number of identification codes, a continuation
+		scheme has been defined.
+
+		The specified mechanism is that an identity code of 0x7F
+		represents the "continuation code" and implies the presence
+		of an additional identity code field, and this mechanism
+		may be extended to multiple continuation codes followed
+		by the manufacturer's identity code.
+
+		For example, ARM has identity code 0x7F 0x7F 0x7F 0x7F 0x3B,
+		which is code 0x3B on the fifth 'page'. This is shortened
+		as JEP106 identity code of 0x3B and a continuation code of
+		0x4 to represent the four continuation codes preceding the
+		identity code.
+
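+		For illustration, a minimal sketch that reduces a raw JEP106
+		byte sequence to the shortened form used above (the bytes are
+		the Arm example from the previous paragraph)::
+
+			/* continuation bytes (0x7F) followed by the identity code */
+			unsigned char bytes[] = { 0x7f, 0x7f, 0x7f, 0x7f, 0x3b };
+			unsigned int cont = 0;
+
+			while (cont < sizeof(bytes) - 1 && bytes[cont] == 0x7f)
+				cont++;
+			/* cont == 4 continuation codes, bytes[cont] == 0x3b identity */
+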
 What:		/sys/devices/socX/serial_number
 Date:		January 2019
 contact:	Bjorn Andersson <bjorn.andersson@linaro.org>
@@ -40,6 +64,12 @@
 		Read-only attribute supported by most SoCs. In the case of
 		ST-Ericsson's chips this contains the SoC serial number.
 
+		On many ARM-based SoCs with SMCCC v1.2+ compliant firmware
+		this will contain the SOC ID appended to the family attribute
+		to ensure there is no conflict in this namespace across various
+		vendors. The format is "jep106:XXYY:ZZZZ" where XX is the
+		identity code, YY is the continuation code and ZZZZ is the SOC ID.
+
 What:		/sys/devices/socX/revision
 Date:		January 2012
 contact:	Lee Jones <lee.jones@linaro.org>
diff --git a/Documentation/ABI/testing/sysfs-devices-state_synced b/Documentation/ABI/testing/sysfs-devices-state_synced
new file mode 100644
index 0000000..0c922d7
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-state_synced
@@ -0,0 +1,24 @@
+What:		/sys/devices/.../state_synced
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		The /sys/devices/.../state_synced attribute is only present for
+		devices whose bus type or driver provides the .sync_state()
+		callback. The number read from it (0 or 1) reflects the value
+		of the device's 'state_synced' field. A value of 0 means the
+		.sync_state() callback hasn't been called yet. A value of 1
+		means the .sync_state() callback has been called.
+
+		Generally, if a device has sync_state() support and has some of
+		the resources it provides enabled at the time the kernel starts
+		(e.g., enabled by hardware reset or bootloader or anything that
+		runs before the kernel starts), then it'll keep those resources
+		enabled and in a state that's compatible with the state they
+		were in at the start of the kernel. The device will stop doing
+		this only when the sync_state() callback has been called --
+		which happens only when all its consumer devices are registered
+		and have probed successfully. Resources that were left disabled
+		at the time the kernel starts are not affected or limited in
+		any way by sync_state() callbacks.
+
+
diff --git a/Documentation/ABI/testing/sysfs-devices-supplier b/Documentation/ABI/testing/sysfs-devices-supplier
new file mode 100644
index 0000000..a919e0d
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-supplier
@@ -0,0 +1,8 @@
+What:		/sys/devices/.../supplier:<supplier>
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		The /sys/devices/.../supplier:<supplier> entries are symlinks
+		to device links where this device is the consumer. <supplier>
+		denotes the name of the supplier in that device link. There can
+		be zero or more of these symlinks for a given device.
diff --git a/Documentation/ABI/testing/sysfs-devices-waiting_for_supplier b/Documentation/ABI/testing/sysfs-devices-waiting_for_supplier
new file mode 100644
index 0000000..59d073d
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-devices-waiting_for_supplier
@@ -0,0 +1,17 @@
+What:		/sys/devices/.../waiting_for_supplier
+Date:		May 2020
+Contact:	Saravana Kannan <saravanak@google.com>
+Description:
+		The /sys/devices/.../waiting_for_supplier attribute is only
+		present when the fw_devlink kernel command line option is enabled
+		and is set to something stricter than "permissive".  It is
+		removed once a device probes successfully (because the
+		information is no longer relevant). The number read from it (0
+		or 1) reflects whether the device is waiting for one or more
+		suppliers to be added and then linked to using device links
+		before the device can probe.
+
+		A value of 0 means the device is not waiting for any suppliers
+		to be added before it can probe.  A value of 1 means the device
+		is waiting for one or more suppliers to be added before it can
+		probe.
diff --git a/Documentation/ABI/testing/sysfs-driver-input-exc3000 b/Documentation/ABI/testing/sysfs-driver-input-exc3000
new file mode 100644
index 0000000..3d316d5
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-driver-input-exc3000
@@ -0,0 +1,15 @@
+What:		/sys/bus/i2c/devices/xxx/fw_version
+Date:		Aug 2020
+Contact:	linux-input@vger.kernel.org
+Description:    Reports the firmware version provided by the touchscreen, for example "00_T6" on an EXC80H60
+
+		Access: Read
+		Valid values: Represented as string
+
+What:		/sys/bus/i2c/devices/xxx/model
+Date:		Aug 2020
+Contact:	linux-input@vger.kernel.org
+Description:    Reports the model identification provided by the touchscreen, for example "Orion_1320" on an EXC80H60
+
+		Access: Read
+		Valid values: Represented as string
diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
index 016724e..d1a3521 100644
--- a/Documentation/ABI/testing/sysfs-driver-ufs
+++ b/Documentation/ABI/testing/sysfs-driver-ufs
@@ -883,3 +883,139 @@
 Description:	This entry shows the target state of an UFS UIC link
 		for the chosen system power management level.
 		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows if preserve user-space was configured
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the shared allocated units of WB buffer
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the configured WB type.
+		0x1 for shared buffer mode. 0x0 for dedicated buffer mode.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the total user-space decrease in shared
+		buffer mode.
+		The value of this parameter is 3 for TLC NAND when SLC mode
+		is used as the WriteBooster Buffer, and 2 for MLC NAND.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the maximum total WriteBooster Buffer size
+		which is supported by the entire device.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the maximum number of luns that can support
+		WriteBooster.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	The supportability of user space reduction mode
+		and preserve user space mode.
+		00h: WriteBooster Buffer can be configured only in
+		user space reduction type.
+		01h: WriteBooster Buffer can be configured only in
+		preserve user space type.
+		02h: Device can be configured in either user space
+		reduction type or preserve user space type.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	The supportability of WriteBooster Buffer type.
+		00h: LU based WriteBooster Buffer configuration
+		01h: Single shared WriteBooster Buffer
+		configuration
+		02h: Supporting both LU based WriteBooster
+		Buffer and Single shared WriteBooster Buffer
+		configuration
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the status of WriteBooster.
+		0: WriteBooster is not enabled.
+		1: WriteBooster is enabled
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows if flush is enabled.
+		0: Flush operation is not performed.
+		1: Flush operation is performed.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	Flush WriteBooster Buffer during hibernate state.
+		0: Device is not allowed to flush the
+		WriteBooster Buffer during link hibernate
+		state.
+		1: Device is allowed to flush the
+		WriteBooster Buffer during link hibernate
+		state
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the amount of unused WriteBooster buffer
+		available.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the amount of unused current buffer.
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the flush operation status.
+		00h: idle
+		01h: Flush operation in progress
+		02h: Flush operation stopped prematurely.
+		03h: Flush operation completed successfully
+		04h: Flush operation general failure
+		The file is read only.
+
+What:		/sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows an indication of the WriteBooster Buffer
+		lifetime based on the number of program/erase cycles performed
+		01h: 0% - 10% WriteBooster Buffer life time used
+		...
+		0Ah: 90% - 100% WriteBooster Buffer life time used
+		The file is read only.
+
+What:		/sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units
+Date:		June 2020
+Contact:	Asutosh Das <asutoshd@codeaurora.org>
+Description:	This entry shows the configured size of WriteBooster buffer.
+		0400h corresponds to 4GB.
+		The file is read only.
diff --git a/Documentation/ABI/testing/sysfs-driver-w1_therm b/Documentation/ABI/testing/sysfs-driver-w1_therm
index 076659d..9b488c0 100644
--- a/Documentation/ABI/testing/sysfs-driver-w1_therm
+++ b/Documentation/ABI/testing/sysfs-driver-w1_therm
@@ -8,7 +8,7 @@
 		to device min/max capabilities. Values are integer as they are
 		stored in a 8bit register in the device. Lowest value is
 		automatically put to TL. Once set, alarms can be searched at
-		master level, refer to Documentation/w1/w1_generic.rst for
+		master level, refer to Documentation/w1/w1-generic.rst for
 		detailed information
 Users:		any user space application which wants to communicate with
 		w1_term device
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 4bb93a0..7f730c4 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -229,7 +229,9 @@
 Contact:	"Jaegeuk Kim" <jaegeuk@kernel.org>
 Description:	Do background GC aggressively when set. When gc_urgent = 1,
 		background thread starts to do GC by given gc_urgent_sleep_time
-		interval. It is set to 0 by default.
+		interval. When gc_urgent = 2, F2FS will lower the bar for
+		checking idle state in order to process outstanding discard
+		commands and perform GC a little more aggressively. It is set
+		to 0 by default.
 
 What:		/sys/fs/f2fs/<disk>/gc_urgent_sleep_time
 Date:		August 2017
diff --git a/Documentation/PCI/endpoint/function/binding/pci-test.rst b/Documentation/PCI/endpoint/function/binding/pci-test.rst
new file mode 100644
index 0000000..57ee866
--- /dev/null
+++ b/Documentation/PCI/endpoint/function/binding/pci-test.rst
@@ -0,0 +1,26 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+PCI Test Endpoint Function
+==========================
+
+name: Should be "pci_epf_test" to bind to the pci_epf_test driver.
+
+Configurable Fields:
+
+================   ===========================================================
+vendorid	   should be 0x104c
+deviceid	   should be 0xb500 for DRA74x and 0xb501 for DRA72x
+revid		   don't care
+progif_code	   don't care
+subclass_code	   don't care
+baseclass_code	   should be 0xff
+cache_line_size	   don't care
+subsys_vendor_id   don't care
+subsys_id	   don't care
+interrupt_pin	   Should be 1 - INTA, 2 - INTB, 3 - INTC, 4 - INTD
+msi_interrupts	   Should be 1 to 32 depending on the number of MSI interrupts
+		   to test
+msix_interrupts	   Should be 1 to 2048 depending on the number of MSI-X
+		   interrupts to test
+================   ===========================================================
diff --git a/Documentation/PCI/endpoint/function/binding/pci-test.txt b/Documentation/PCI/endpoint/function/binding/pci-test.txt
deleted file mode 100644
index cd76ba4..0000000
--- a/Documentation/PCI/endpoint/function/binding/pci-test.txt
+++ /dev/null
@@ -1,19 +0,0 @@
-PCI TEST ENDPOINT FUNCTION
-
-name: Should be "pci_epf_test" to bind to the pci_epf_test driver.
-
-Configurable Fields:
-vendorid	 : should be 0x104c
-deviceid	 : should be 0xb500 for DRA74x and 0xb501 for DRA72x
-revid		 : don't care
-progif_code	 : don't care
-subclass_code	 : don't care
-baseclass_code	 : should be 0xff
-cache_line_size	 : don't care
-subsys_vendor_id : don't care
-subsys_id	 : don't care
-interrupt_pin	 : Should be 1 - INTA, 2 - INTB, 3 - INTC, 4 -INTD
-msi_interrupts	 : Should be 1 to 32 depending on the number of MSI interrupts
-		   to test
-msix_interrupts	 : Should be 1 to 2048 depending on the number of MSI-X
-		   interrupts to test
diff --git a/Documentation/PCI/endpoint/index.rst b/Documentation/PCI/endpoint/index.rst
index d114ea7..4ca7439 100644
--- a/Documentation/PCI/endpoint/index.rst
+++ b/Documentation/PCI/endpoint/index.rst
@@ -11,3 +11,5 @@
    pci-endpoint-cfs
    pci-test-function
    pci-test-howto
+
+   function/binding/pci-test
diff --git a/Documentation/PCI/endpoint/pci-endpoint-cfs.rst b/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
index b6d39cde..1bbd81e 100644
--- a/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
+++ b/Documentation/PCI/endpoint/pci-endpoint-cfs.rst
@@ -24,7 +24,7 @@
 
 The pci_ep configfs has two directories at its root: controllers and
 functions. Every EPC device present in the system will have an entry in
-the *controllers* directory and and every EPF driver present in the system
+the *controllers* directory and every EPF driver present in the system
 will have an entry in the *functions* directory.
 ::
 
diff --git a/Documentation/PCI/endpoint/pci-endpoint.rst b/Documentation/PCI/endpoint/pci-endpoint.rst
index 7536be44..4f5622a 100644
--- a/Documentation/PCI/endpoint/pci-endpoint.rst
+++ b/Documentation/PCI/endpoint/pci-endpoint.rst
@@ -214,7 +214,7 @@
 * pci_epf_create()
 
    Create a new PCI EPF device by passing the name of the PCI EPF device.
-   This name will be used to bind the the EPF device to a EPF driver.
+   This name will be used to bind the EPF device to a EPF driver.
 
 * pci_epf_destroy()
 
diff --git a/Documentation/PCI/pci-error-recovery.rst b/Documentation/PCI/pci-error-recovery.rst
index 13beee2..84ceebb 100644
--- a/Documentation/PCI/pci-error-recovery.rst
+++ b/Documentation/PCI/pci-error-recovery.rst
@@ -79,7 +79,7 @@
 
 	struct pci_error_handlers
 	{
-		int (*error_detected)(struct pci_dev *dev, enum pci_channel_state);
+		int (*error_detected)(struct pci_dev *dev, pci_channel_state_t);
 		int (*mmio_enabled)(struct pci_dev *dev);
 		int (*slot_reset)(struct pci_dev *dev);
 		void (*resume)(struct pci_dev *dev);
@@ -87,11 +87,11 @@
 
 The possible channel states are::
 
-	enum pci_channel_state {
+	typedef enum {
 		pci_channel_io_normal,  /* I/O channel is in normal state */
 		pci_channel_io_frozen,  /* I/O to channel is blocked */
 		pci_channel_io_perm_failure, /* PCI card is dead */
-	};
+	} pci_channel_state_t;
 
 Possible return values are::
 
@@ -248,7 +248,7 @@
 ------------------
 
 In response to a return value of PCI_ERS_RESULT_NEED_RESET, the
-the platform will perform a slot reset on the requesting PCI device(s).
+platform will perform a slot reset on the requesting PCI device(s).
 The actual steps taken by a platform to perform a slot reset
 will be platform-dependent. Upon completion of slot reset, the
 platform will call the device slot_reset() callback.
@@ -348,7 +348,7 @@
 -------------------------
 A "permanent failure" has occurred, and the platform cannot recover
 the device.  The platform will call error_detected() with a
-pci_channel_state value of pci_channel_io_perm_failure.
+pci_channel_state_t value of pci_channel_io_perm_failure.
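+
+For illustration, a minimal error_detected() sketch that distinguishes this
+state (the mydev names and the recovery policy are placeholders, not taken
+from this document)::
+
+	static pci_ers_result_t mydev_error_detected(struct pci_dev *pdev,
+						     pci_channel_state_t state)
+	{
+		if (state == pci_channel_io_perm_failure)
+			return PCI_ERS_RESULT_DISCONNECT;
+
+		/* Stop issuing new I/O until slot_reset()/resume() runs. */
+		mydev_pause_io(pci_get_drvdata(pdev));
+		return PCI_ERS_RESULT_NEED_RESET;
+	}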
 
 The device driver should, at this point, assume the worst. It should
 cancel all pending I/O, refuse all new I/O, returning -EIO to
diff --git a/Documentation/PCI/pci.rst b/Documentation/PCI/pci.rst
index 8c016d8..814b40f 100644
--- a/Documentation/PCI/pci.rst
+++ b/Documentation/PCI/pci.rst
@@ -17,7 +17,7 @@
 A more complete resource is the third edition of "Linux Device Drivers"
 by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman.
 LDD3 is available for free (under Creative Commons License) from:
-http://lwn.net/Kernel/LDD3/.
+https://lwn.net/Kernel/LDD3/.
 
 However, keep in mind that all documents are subject to "bit rot".
 Refer to the source code if things are not working as described here.
@@ -209,12 +209,12 @@
    OS BUG: we don't check resource allocations before enabling those
    resources. The sequence would make more sense if we called
    pci_request_resources() before calling pci_enable_device().
-   Currently, the device drivers can't detect the bug when when two
+   Currently, the device drivers can't detect the bug when two
    devices have been allocated the same range. This is not a common
    problem and unlikely to get fixed soon.
 
    This has been discussed before but not changed as of 2.6.19:
-   http://lkml.org/lkml/2006/3/2/194
+   https://lore.kernel.org/r/20060302180025.GC28895@flint.arm.linux.org.uk/
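+
+For illustration, a minimal probe() sketch that follows the initialization
+sequence described in this document (the mydev names and the 32-bit DMA mask
+are placeholders)::
+
+	static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+	{
+		int err;
+
+		err = pci_enable_device(pdev);
+		if (err)
+			return err;
+
+		err = pci_request_regions(pdev, "mydev");
+		if (err)
+			goto err_disable;
+
+		pci_set_master(pdev);
+
+		err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+		if (err)
+			goto err_release;
+
+		return 0;
+
+	err_release:
+		pci_release_regions(pdev);
+	err_disable:
+		pci_disable_device(pdev);
+		return err;
+	}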
 
 
 pci_set_master() will enable DMA by setting the bus master bit
@@ -265,7 +265,7 @@
 ---------------------
 .. note::
    If anything below doesn't make sense, please refer to
-   Documentation/DMA-API.txt. This section is just a reminder that
+   :doc:`/core-api/dma-api`. This section is just a reminder that
    drivers need to indicate DMA capabilities of the device and is not
    an authoritative source for DMA interfaces.
 
@@ -291,7 +291,7 @@
 Setup shared control data
 -------------------------
 Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared)
-memory.  See Documentation/DMA-API.txt for a full description of
+memory.  See :doc:`/core-api/dma-api` for a full description of
 the DMA APIs. This section is just a reminder that it needs to be done
 before enabling DMA on the device.
 
@@ -421,7 +421,7 @@
 
 Then clean up "consistent" buffers which contain the control data.
 
-See Documentation/DMA-API.txt for details on unmapping interfaces.
+See :doc:`/core-api/dma-api` for details on unmapping interfaces.
 
 
 Unregister from other subsystems
@@ -514,9 +514,8 @@
 The device IDs are arbitrary hex numbers (vendor controlled) and normally used
 only in a single location, the pci_device_id table.
 
-Please DO submit new vendor/device IDs to http://pci-ids.ucw.cz/.
-There are mirrors of the pci.ids file at http://pciids.sourceforge.net/
-and https://github.com/pciutils/pciids.
+Please DO submit new vendor/device IDs to https://pci-ids.ucw.cz/.
+There's a mirror of the pci.ids file at https://github.com/pciutils/pciids.
 
 
 Obsolete functions
diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst
index 75b8ca0..8f41ad0 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.rst
+++ b/Documentation/RCU/Design/Requirements/Requirements.rst
@@ -463,7 +463,7 @@
 This guarantee was only partially premeditated. DYNIX/ptx used an
 explicit memory barrier for publication, but had nothing resembling
 ``rcu_dereference()`` for subscription, nor did it have anything
-resembling the ``smp_read_barrier_depends()`` that was later subsumed
+resembling the dependency-ordering barrier that was later subsumed
 into ``rcu_dereference()`` and later still into ``READ_ONCE()``. The
 need for these operations made itself known quite suddenly at a
 late-1990s meeting with the DEC Alpha architects, back in the days when
@@ -2583,7 +2583,12 @@
 would need to be instructions following ``rcu_read_unlock()``. Although
 ``synchronize_rcu()`` would guarantee that execution reached the
 ``rcu_read_unlock()``, it would not be able to guarantee that execution
-had completely left the trampoline.
+had completely left the trampoline. Worse yet, in some situations
+the trampoline's protection must extend a few instructions *prior* to
+execution reaching the trampoline.  For example, these few instructions
+might calculate the address of the trampoline, so that entering the
+trampoline would be pre-ordained a surprisingly long time before execution
+actually reached the trampoline itself.
 
 The solution, in the form of `Tasks
 RCU <https://lwn.net/Articles/607117/>`__, is to have implicit read-side
diff --git a/Documentation/RCU/checklist.rst b/Documentation/RCU/checklist.rst
new file mode 100644
index 0000000..2efed99
--- /dev/null
+++ b/Documentation/RCU/checklist.rst
@@ -0,0 +1,465 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================
+Review Checklist for RCU Patches
+================================
+
+
+This document contains a checklist for producing and reviewing patches
+that make use of RCU.  Violating any of the rules listed below will
+result in the same sorts of problems that leaving out a locking primitive
+would cause.  This list is based on experiences reviewing such patches
+over a rather long period of time, but improvements are always welcome!
+
+0.	Is RCU being applied to a read-mostly situation?  If the data
+	structure is updated more than about 10% of the time, then you
+	should strongly consider some other approach, unless detailed
+	performance measurements show that RCU is nonetheless the right
+	tool for the job.  Yes, RCU does reduce read-side overhead by
+	increasing write-side overhead, which is exactly why normal uses
+	of RCU will do much more reading than updating.
+
+	Another exception is where performance is not an issue, and RCU
+	provides a simpler implementation.  An example of this situation
+	is the dynamic NMI code in the Linux 2.6 kernel, at least on
+	architectures where NMIs are rare.
+
+	Yet another exception is where the low real-time latency of RCU's
+	read-side primitives is critically important.
+
+	One final exception is where RCU readers are used to prevent
+	the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
+	for lockless updates.  This does result in the mildly
+	counter-intuitive situation where rcu_read_lock() and
+	rcu_read_unlock() are used to protect updates, however, this
+	rcu_read_unlock() are used to protect updates; however, this
+	collectors do.
+
+1.	Does the update code have proper mutual exclusion?
+
+	RCU does allow -readers- to run (almost) naked, but -writers- must
+	still use some sort of mutual exclusion, such as:
+
+	a.	locking,
+	b.	atomic operations, or
+	c.	restricting updates to a single task.
+
+	If you choose #b, be prepared to describe how you have handled
+	memory barriers on weakly ordered machines (pretty much all of
+	them -- even x86 allows later loads to be reordered to precede
+	earlier stores), and be prepared to explain why this added
+	complexity is worthwhile.  If you choose #c, be prepared to
+	explain how this single task does not become a major bottleneck on
+	big multiprocessor machines (for example, if the task is updating
+	information relating to itself that other tasks can read, there
+	by definition can be no bottleneck).  Note that the definition
+	of "large" has changed significantly:  Eight CPUs was "large"
+	in the year 2000, but a hundred CPUs was unremarkable in 2017.
+
+2.	Do the RCU read-side critical sections make proper use of
+	rcu_read_lock() and friends?  These primitives are needed
+	to prevent grace periods from ending prematurely, which
+	could result in data being unceremoniously freed out from
+	under your read-side code, which can greatly increase the
+	actuarial risk of your kernel.
+
+	As a rough rule of thumb, any dereference of an RCU-protected
+	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
+	rcu_read_lock_sched(), or by the appropriate update-side lock.
+	Disabling of preemption can serve as rcu_read_lock_sched(), but
+	is less readable and prevents lockdep from detecting locking issues.
+
+	Letting RCU-protected pointers "leak" out of an RCU read-side
+	critical section is every bit as bad as letting them leak out
+	from under a lock.  Unless, of course, you have arranged some
+	other means of protection, such as a lock or a reference count
+	-before- letting them out of the RCU read-side critical section.
+
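+	For example, a minimal reader sketch (struct foo, gbl_foo and
+	do_something_with() are placeholder names)::
+
+		struct foo {
+			int a;
+		};
+		struct foo __rcu *gbl_foo;
+
+		void reader(void)
+		{
+			struct foo *p;
+
+			rcu_read_lock();
+			p = rcu_dereference(gbl_foo);
+			if (p)
+				do_something_with(p->a);
+			rcu_read_unlock();
+		}
+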
+3.	Does the update code tolerate concurrent accesses?
+
+	The whole point of RCU is to permit readers to run without
+	any locks or atomic operations.  This means that readers will
+	be running while updates are in progress.  There are a number
+	of ways to handle this concurrency, depending on the situation:
+
+	a.	Use the RCU variants of the list and hlist update
+		primitives to add, remove, and replace elements on
+		an RCU-protected list.	Alternatively, use the other
+		RCU-protected data structures that have been added to
+		the Linux kernel.
+
+		This is almost always the best approach.
+
+	b.	Proceed as in (a) above, but also maintain per-element
+		locks (that are acquired by both readers and writers)
+		that guard per-element state.  Of course, fields that
+		the readers refrain from accessing can be guarded by
+		some other lock acquired only by updaters, if desired.
+
+		This works quite well, also.
+
+	c.	Make updates appear atomic to readers.	For example,
+		pointer updates to properly aligned fields will
+		appear atomic, as will individual atomic primitives.
+		Sequences of operations performed under a lock will -not-
+		appear to be atomic to RCU readers, nor will sequences
+		of multiple atomic primitives.
+
+		This can work, but is starting to get a bit tricky.
+
+	d.	Carefully order the updates and the reads so that
+		readers see valid data at all phases of the update.
+		This is often more difficult than it sounds, especially
+		given modern CPUs' tendency to reorder memory references.
+		One must usually liberally sprinkle memory barriers
+		(smp_wmb(), smp_rmb(), smp_mb()) through the code,
+		making it difficult to understand and to test.
+
+		It is usually better to group the changing data into
+		a separate structure, so that the change may be made
+		to appear atomic by updating a pointer to reference
+		a new structure containing updated values.
+
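+
+	For example, a minimal update sketch in the style of (a) above,
+	with placeholder names (struct elem, elem_list, elem_mutex)::
+
+		struct elem {
+			struct list_head list;
+			int data;
+			struct rcu_head rcu;
+		};
+		static LIST_HEAD(elem_list);	/* updates serialized by elem_mutex */
+		static DEFINE_MUTEX(elem_mutex);
+
+		void elem_replace(struct elem *old, struct elem *new)
+		{
+			mutex_lock(&elem_mutex);
+			list_replace_rcu(&old->list, &new->list);
+			mutex_unlock(&elem_mutex);
+			kfree_rcu(old, rcu);	/* freed after a grace period */
+		}
+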
+4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
+	are weakly ordered -- even x86 CPUs allow later loads to be
+	reordered to precede earlier stores.  RCU code must take all of
+	the following measures to prevent memory-corruption problems:
+
+	a.	Readers must maintain proper ordering of their memory
+		accesses.  The rcu_dereference() primitive ensures that
+		the CPU picks up the pointer before it picks up the data
+		that the pointer points to.  This really is necessary
+		on Alpha CPUs.	If you don't believe me, see:
+
+			http://www.openvms.compaq.com/wizard/wiz_2637.html
+
+		The rcu_dereference() primitive is also an excellent
+		documentation aid, letting the person reading the
+		code know exactly which pointers are protected by RCU.
+		Please note that compilers can also reorder code, and
+		they are becoming increasingly aggressive about doing
+		just that.  The rcu_dereference() primitive therefore also
+		prevents destructive compiler optimizations.  However,
+		with a bit of devious creativity, it is possible to
+		mishandle the return value from rcu_dereference().
+		Please see rcu_dereference.txt in this directory for
+		more information.
+
+		The rcu_dereference() primitive is used by the
+		various "_rcu()" list-traversal primitives, such
+		as the list_for_each_entry_rcu().  Note that it is
+		perfectly legal (if redundant) for update-side code to
+		use rcu_dereference() and the "_rcu()" list-traversal
+		primitives.  This is particularly useful in code that
+		is common to readers and updaters.  However, lockdep
+		will complain if you access rcu_dereference() outside
+		of an RCU read-side critical section.  See lockdep.txt
+		to learn what to do about this.
+
+		Of course, neither rcu_dereference() nor the "_rcu()"
+		list-traversal primitives can substitute for a good
+		concurrency design coordinating among multiple updaters.
+
+	b.	If the list macros are being used, the list_add_tail_rcu()
+		and list_add_rcu() primitives must be used in order
+		to prevent weakly ordered machines from misordering
+		structure initialization and pointer planting.
+		Similarly, if the hlist macros are being used, the
+		hlist_add_head_rcu() primitive is required.
+
+	c.	If the list macros are being used, the list_del_rcu()
+		primitive must be used to keep list_del()'s pointer
+		poisoning from inflicting toxic effects on concurrent
+		readers.  Similarly, if the hlist macros are being used,
+		the hlist_del_rcu() primitive is required.
+
+		The list_replace_rcu() and hlist_replace_rcu() primitives
+		may be used to replace an old structure with a new one
+		in their respective types of RCU-protected lists.
+
+	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
+		type of RCU-protected linked lists.
+
+	e.	Updates must ensure that initialization of a given
+		structure happens before pointers to that structure are
+		publicized.  Use the rcu_assign_pointer() primitive
+		when publicizing a pointer to a structure that can
+		be traversed by an RCU read-side critical section.
+
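+	For example, publication per (e) above might look like this,
+	reusing the placeholder names from the sketch under item 2::
+
+		struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);
+
+		if (p) {
+			p->a = 42;			/* initialize first... */
+			rcu_assign_pointer(gbl_foo, p);	/* ...then publish */
+		}
+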
+5.	If call_rcu() or call_srcu() is used, the callback function will
+	be called from softirq context.  In particular, it cannot block.
+
+6.	Since synchronize_rcu() can block, it cannot be called
+	from any sort of irq context.  The same rule applies
+	for synchronize_srcu(), synchronize_rcu_expedited(), and
+	synchronize_srcu_expedited().
+
+	The expedited forms of these primitives have the same semantics
+	as the non-expedited forms, but expediting is both expensive and
+	(with the exception of synchronize_srcu_expedited()) unfriendly
+	to real-time workloads.  Use of the expedited primitives should
+	be restricted to rare configuration-change operations that would
+	not normally be undertaken while a real-time workload is running.
+	However, real-time workloads can use rcupdate.rcu_normal kernel
+	boot parameter to completely disable expedited grace periods,
+	though this might have performance implications.
+
+	In particular, if you find yourself invoking one of the expedited
+	primitives repeatedly in a loop, please do everyone a favor:
+	Restructure your code so that it batches the updates, allowing
+	a single non-expedited primitive to cover the entire batch.
+	This will very likely be faster than the loop containing the
+	expedited primitive, and will be much much easier on the rest
+	of the system, especially to real-time workloads running on
+	the rest of the system.
+
+7.	As of v4.20, a given kernel implements only one RCU flavor,
+	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
+	If the updater uses call_rcu() or synchronize_rcu(),
+	then the corresponding readers may use rcu_read_lock() and
+	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+	or any pair of primitives that disables and re-enables preemption,
+	for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
+	If the updater uses synchronize_srcu() or call_srcu(),
+	then the corresponding readers must use srcu_read_lock() and
+	srcu_read_unlock(), and with the same srcu_struct.  The rules for
+	the expedited primitives are the same as for their non-expedited
+	counterparts.  Mixing things up will result in confusion and
+	broken kernels, and has even resulted in an exploitable security
+	issue.
+
+	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
+	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
+	in cases where local bottom halves are already known to be
+	disabled, for example, in irq or softirq context.  Commenting
+	such cases is a must, of course!  And the jury is still out on
+	whether the increased speed is worth it.
+
+8.	Although synchronize_rcu() is slower than is call_rcu(), it
+	usually results in simpler code.  So, unless update performance is
+	critically important, the updaters cannot block, or the latency of
+	synchronize_rcu() is visible from userspace, synchronize_rcu()
+	should be used in preference to call_rcu().  Furthermore,
+	kfree_rcu() usually results in even simpler code than does
+	synchronize_rcu() without synchronize_rcu()'s multi-millisecond
+	latency.  So please take advantage of kfree_rcu()'s "fire and
+	forget" memory-freeing capabilities where it applies.
+
+	An especially important property of the synchronize_rcu()
+	primitive is that it automatically self-limits: if grace periods
+	are delayed for whatever reason, then the synchronize_rcu()
+	primitive will correspondingly delay updates.  In contrast,
+	code using call_rcu() should explicitly limit update rate in
+	cases where grace periods are delayed, as failing to do so can
+	result in excessive realtime latencies or even OOM conditions.
+
+	Ways of gaining this self-limiting property when using call_rcu()
+	include:
+
+	a.	Keeping a count of the number of data-structure elements
+		used by the RCU-protected data structure, including
+		those waiting for a grace period to elapse.  Enforce a
+		limit on this number, stalling updates as needed to allow
+		previously deferred frees to complete.	Alternatively,
+		limit only the number awaiting deferred free rather than
+		the total number of elements.
+
+		One way to stall the updates is to acquire the update-side
+		mutex.	(Don't try this with a spinlock -- other CPUs
+		spinning on the lock could prevent the grace period
+		from ever ending.)  Another way to stall the updates
+		is for the updates to use a wrapper function around
+		the memory allocator, so that this wrapper function
+		simulates OOM when there is too much memory awaiting an
+		RCU grace period.  There are of course many other
+		variations on this theme.
+
+	b.	Limiting update rate.  For example, if updates occur only
+		once per hour, then no explicit rate limiting is
+		required, unless your system is already badly broken.
+		Older versions of the dcache subsystem take this approach,
+		guarding updates with a global lock, limiting their rate.
+
+	c.	Trusted update -- if updates can only be done manually by
+		superuser or some other trusted user, then it might not
+		be necessary to automatically limit them.  The theory
+		here is that superuser already has lots of ways to crash
+		the machine.
+
+	d.	Periodically invoke synchronize_rcu(), permitting a limited
+		number of updates per grace period.
+
+	The same cautions apply to call_srcu() and kfree_rcu().
+
+	Note that although these primitives do take action to avoid memory
+	exhaustion when any given CPU has too many callbacks, a determined
+	user could still exhaust memory.  This is especially the case
+	if a system with a large number of CPUs has been configured to
+	offload all of its RCU callbacks onto a single CPU, or if the
+	system has relatively little free memory.
+
+9.	All RCU list-traversal primitives, which include
+	rcu_dereference(), list_for_each_entry_rcu(), and
+	list_for_each_safe_rcu(), must be either within an RCU read-side
+	critical section or must be protected by appropriate update-side
+	locks.	RCU read-side critical sections are delimited by
+	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
+	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
+	case the matching rcu_dereference() primitive must be used in
+	order to keep lockdep happy, in this case, rcu_dereference_bh().
+
+	The reason that it is permissible to use RCU list-traversal
+	primitives when the update-side lock is held is that doing so
+	can be quite helpful in reducing code bloat when common code is
+	shared between readers and updaters.  Additional primitives
+	are provided for this case, as discussed in lockdep.txt.
+
+10.	Conversely, if you are in an RCU read-side critical section,
+	and you don't hold the appropriate update-side lock, you -must-
+	use the "_rcu()" variants of the list macros.  Failing to do so
+	will break Alpha, cause aggressive compilers to generate bad code,
+	and confuse people trying to read your code.
+
+11.	Any lock acquired by an RCU callback must be acquired elsewhere
+	with softirq disabled, e.g., via spin_lock_irqsave(),
+	spin_lock_bh(), etc.  Failing to disable softirq on a given
+	acquisition of that lock will result in deadlock as soon as
+	the RCU softirq handler happens to run your RCU callback while
+	interrupting that acquisition's critical section.
+
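+	For example, reusing the placeholder struct elem from the sketch
+	under item 3 (elem_count_lock and elem_free_count are likewise
+	placeholders)::
+
+		/* Runs in softirq context, so plain spin_lock() suffices here. */
+		static void elem_reclaim(struct rcu_head *rhp)
+		{
+			struct elem *p = container_of(rhp, struct elem, rcu);
+
+			spin_lock(&elem_count_lock);
+			elem_free_count++;
+			spin_unlock(&elem_count_lock);
+			kfree(p);
+		}
+
+		/* Process context must use the _bh variant to avoid deadlock. */
+		spin_lock_bh(&elem_count_lock);
+		elem_free_count = 0;
+		spin_unlock_bh(&elem_count_lock);
+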
+12.	RCU callbacks can be and are executed in parallel.  In many cases,
+	the callback code is simply a wrapper around kfree(), so that this
+	is not an issue (or, more accurately, to the extent that it is
+	an issue, the memory-allocator locking handles it).  However,
+	if the callbacks do manipulate a shared data structure, they
+	must use whatever locking or other synchronization is required
+	to safely access and/or modify that data structure.
+
+	Do not assume that RCU callbacks will be executed on the same
+	CPU that executed the corresponding call_rcu() or call_srcu().
+	For example, if a given CPU goes offline while having an RCU
+	callback pending, then that RCU callback will execute on some
+	surviving CPU.	(If this was not the case, a self-spawning RCU
+	callback would prevent the victim CPU from ever going offline.)
+	Furthermore, CPUs designated by rcu_nocbs= might well -always-
+	have their RCU callbacks executed on some other CPUs; in fact,
+	for some real-time workloads, this is the whole point of using
+	the rcu_nocbs= kernel boot parameter.
+
+13.	Unlike other forms of RCU, it -is- permissible to block in an
+	SRCU read-side critical section (demarked by srcu_read_lock()
+	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
+	Please note that if you don't need to sleep in read-side critical
+	sections, you should be using RCU rather than SRCU, because RCU
+	is almost always faster and easier to use than is SRCU.
+
+	Also unlike other forms of RCU, explicit initialization and
+	cleanup is required either at build time via DEFINE_SRCU()
+	or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
+	and cleanup_srcu_struct().  These last two are passed a
+	"struct srcu_struct" that defines the scope of a given
+	SRCU domain.  Once initialized, the srcu_struct is passed
+	to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
+	synchronize_srcu_expedited(), and call_srcu().	A given
+	synchronize_srcu() waits only for SRCU read-side critical
+	sections governed by srcu_read_lock() and srcu_read_unlock()
+	calls that have been passed the same srcu_struct.  This property
+	is what makes sleeping read-side critical sections tolerable --
+	a given subsystem delays only its own updates, not those of other
+	subsystems using SRCU.	Therefore, SRCU is less prone to OOM the
+	system than RCU would be if RCU's read-side critical sections
+	were permitted to sleep.
+
+	The ability to sleep in read-side critical sections does not
+	come for free.	First, corresponding srcu_read_lock() and
+	srcu_read_unlock() calls must be passed the same srcu_struct.
+	Second, grace-period-detection overhead is amortized only
+	over those updates sharing a given srcu_struct, rather than
+	being globally amortized as they are for other forms of RCU.
+	Therefore, SRCU should be used in preference to rw_semaphore
+	only in extremely read-intensive situations, or in situations
+	requiring SRCU's read-side deadlock immunity or low read-side
+	realtime latency.  You should also consider percpu_rw_semaphore
+	when you need lightweight readers.
+
+	SRCU's expedited primitive (synchronize_srcu_expedited())
+	never sends IPIs to other CPUs, so it is easier on
+	real-time workloads than is synchronize_rcu_expedited().
+
+	Note that rcu_assign_pointer() relates to SRCU just as it does to
+	other forms of RCU, but instead of rcu_dereference() you should
+	use srcu_dereference() in order to avoid lockdep splats.
+
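+	For example, a minimal SRCU sketch with a placeholder srcu_struct::
+
+		DEFINE_STATIC_SRCU(my_srcu);
+
+		/* Reader: may block inside the critical section. */
+		int idx = srcu_read_lock(&my_srcu);
+		/* ...possibly sleeping accesses to my_srcu-protected data... */
+		srcu_read_unlock(&my_srcu, idx);
+
+		/* Updater: waits only for readers of my_srcu. */
+		synchronize_srcu(&my_srcu);
+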
+14.	The whole point of call_rcu(), synchronize_rcu(), and friends
+	is to wait until all pre-existing readers have finished before
+	carrying out some otherwise-destructive operation.  It is
+	therefore critically important to -first- remove any path
+	that readers can follow that could be affected by the
+	destructive operation, and -only- -then- invoke call_rcu(),
+	synchronize_rcu(), or friends.
+
+	Because these primitives only wait for pre-existing readers, it
+	is the caller's responsibility to guarantee that any subsequent
+	readers will execute safely.
+
+15.	The various RCU read-side primitives do -not- necessarily contain
+	memory barriers.  You should therefore plan for the CPU
+	and the compiler to freely reorder code into and out of RCU
+	read-side critical sections.  It is the responsibility of the
+	RCU update-side primitives to deal with this.
+
+	For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
+	immediately after an srcu_read_unlock() to get a full barrier.
+
+16.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
+	__rcu sparse checks to validate your RCU code.	These can help
+	find problems as follows:
+
+	CONFIG_PROVE_LOCKING:
+		check that accesses to RCU-protected data
+		structures are carried out under the proper RCU
+		read-side critical section, while holding the right
+		combination of locks, or whatever other conditions
+		are appropriate.
+
+	CONFIG_DEBUG_OBJECTS_RCU_HEAD:
+		check that you don't pass the
+		same object to call_rcu() (or friends) before an RCU
+		grace period has elapsed since the last time that you
+		passed that same object to call_rcu() (or friends).
+
+	__rcu sparse checks:
+		tag the pointer to the RCU-protected data
+		structure with __rcu, and sparse will warn you if you
+		access that pointer without the services of one of the
+		variants of rcu_dereference().
+
+	These debugging aids can help you find problems that are
+	otherwise extremely difficult to spot.
+
+17.	If you register a callback using call_rcu() or call_srcu(), and
+	pass in a function defined within a loadable module, then it is
+	necessary to wait for all pending callbacks to be invoked after
+	the last invocation and before unloading that module.  Note that
+	it is absolutely -not- sufficient to wait for a grace period!
+	The current (say) synchronize_rcu() implementation is -not-
+	guaranteed to wait for callbacks registered on other CPUs, or
+	even on the current CPU if that CPU recently went offline and
+	came back online.
+
+	You instead need to use one of the barrier functions:
+
+	-	call_rcu() -> rcu_barrier()
+	-	call_srcu() -> srcu_barrier()
+
+	However, these barrier functions are absolutely -not- guaranteed
+	to wait for a grace period.  In fact, if there are no call_rcu()
+	callbacks waiting anywhere in the system, rcu_barrier() is within
+	its rights to return immediately.
+
+	So if you need to wait for both an RCU grace period and for
+	all pre-existing call_rcu() callbacks, you will need to execute
+	both rcu_barrier() and synchronize_rcu(), if necessary, using
+	something like workqueues to execute them concurrently.
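+
+	A module-exit sketch of this sequence (stop_queueing_callbacks()
+	is a hypothetical stand-in for whatever prevents your module from
+	invoking call_rcu() again)::
+
+		static void __exit my_module_exit(void)
+		{
+			stop_queueing_callbacks();	/* hypothetical */
+			rcu_barrier();		/* wait for already-queued callbacks */
+			synchronize_rcu();	/* also wait for a full grace period */
+		}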
+
+	See rcubarrier.txt for more information.
diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt
deleted file mode 100644
index e98ff26..0000000
--- a/Documentation/RCU/checklist.txt
+++ /dev/null
@@ -1,458 +0,0 @@
-Review Checklist for RCU Patches
-
-
-This document contains a checklist for producing and reviewing patches
-that make use of RCU.  Violating any of the rules listed below will
-result in the same sorts of problems that leaving out a locking primitive
-would cause.  This list is based on experiences reviewing such patches
-over a rather long period of time, but improvements are always welcome!
-
-0.	Is RCU being applied to a read-mostly situation?  If the data
-	structure is updated more than about 10% of the time, then you
-	should strongly consider some other approach, unless detailed
-	performance measurements show that RCU is nonetheless the right
-	tool for the job.  Yes, RCU does reduce read-side overhead by
-	increasing write-side overhead, which is exactly why normal uses
-	of RCU will do much more reading than updating.
-
-	Another exception is where performance is not an issue, and RCU
-	provides a simpler implementation.  An example of this situation
-	is the dynamic NMI code in the Linux 2.6 kernel, at least on
-	architectures where NMIs are rare.
-
-	Yet another exception is where the low real-time latency of RCU's
-	read-side primitives is critically important.
-
-	One final exception is where RCU readers are used to prevent
-	the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
-	for lockless updates.  This does result in the mildly
-	counter-intuitive situation where rcu_read_lock() and
-	rcu_read_unlock() are used to protect updates, however, this
-	approach provides the same potential simplifications that garbage
-	collectors do.
-
-1.	Does the update code have proper mutual exclusion?
-
-	RCU does allow -readers- to run (almost) naked, but -writers- must
-	still use some sort of mutual exclusion, such as:
-
-	a.	locking,
-	b.	atomic operations, or
-	c.	restricting updates to a single task.
-
-	If you choose #b, be prepared to describe how you have handled
-	memory barriers on weakly ordered machines (pretty much all of
-	them -- even x86 allows later loads to be reordered to precede
-	earlier stores), and be prepared to explain why this added
-	complexity is worthwhile.  If you choose #c, be prepared to
-	explain how this single task does not become a major bottleneck on
-	big multiprocessor machines (for example, if the task is updating
-	information relating to itself that other tasks can read, there
-	by definition can be no bottleneck).  Note that the definition
-	of "large" has changed significantly:  Eight CPUs was "large"
-	in the year 2000, but a hundred CPUs was unremarkable in 2017.
-
-2.	Do the RCU read-side critical sections make proper use of
-	rcu_read_lock() and friends?  These primitives are needed
-	to prevent grace periods from ending prematurely, which
-	could result in data being unceremoniously freed out from
-	under your read-side code, which can greatly increase the
-	actuarial risk of your kernel.
-
-	As a rough rule of thumb, any dereference of an RCU-protected
-	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
-	rcu_read_lock_sched(), or by the appropriate update-side lock.
-	Disabling of preemption can serve as rcu_read_lock_sched(), but
-	is less readable and prevents lockdep from detecting locking issues.
-
-	Letting RCU-protected pointers "leak" out of an RCU read-side
-	critical section is every bid as bad as letting them leak out
-	from under a lock.  Unless, of course, you have arranged some
-	other means of protection, such as a lock or a reference count
-	-before- letting them out of the RCU read-side critical section.
-
-3.	Does the update code tolerate concurrent accesses?
-
-	The whole point of RCU is to permit readers to run without
-	any locks or atomic operations.  This means that readers will
-	be running while updates are in progress.  There are a number
-	of ways to handle this concurrency, depending on the situation:
-
-	a.	Use the RCU variants of the list and hlist update
-		primitives to add, remove, and replace elements on
-		an RCU-protected list.	Alternatively, use the other
-		RCU-protected data structures that have been added to
-		the Linux kernel.
-
-		This is almost always the best approach.
-
-	b.	Proceed as in (a) above, but also maintain per-element
-		locks (that are acquired by both readers and writers)
-		that guard per-element state.  Of course, fields that
-		the readers refrain from accessing can be guarded by
-		some other lock acquired only by updaters, if desired.
-
-		This works quite well, also.
-
-	c.	Make updates appear atomic to readers.	For example,
-		pointer updates to properly aligned fields will
-		appear atomic, as will individual atomic primitives.
-		Sequences of operations performed under a lock will -not-
-		appear to be atomic to RCU readers, nor will sequences
-		of multiple atomic primitives.
-
-		This can work, but is starting to get a bit tricky.
-
-	d.	Carefully order the updates and the reads so that
-		readers see valid data at all phases of the update.
-		This is often more difficult than it sounds, especially
-		given modern CPUs' tendency to reorder memory references.
-		One must usually liberally sprinkle memory barriers
-		(smp_wmb(), smp_rmb(), smp_mb()) through the code,
-		making it difficult to understand and to test.
-
-		It is usually better to group the changing data into
-		a separate structure, so that the change may be made
-		to appear atomic by updating a pointer to reference
-		a new structure containing updated values.
-
-4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
-	are weakly ordered -- even x86 CPUs allow later loads to be
-	reordered to precede earlier stores.  RCU code must take all of
-	the following measures to prevent memory-corruption problems:
-
-	a.	Readers must maintain proper ordering of their memory
-		accesses.  The rcu_dereference() primitive ensures that
-		the CPU picks up the pointer before it picks up the data
-		that the pointer points to.  This really is necessary
-		on Alpha CPUs.	If you don't believe me, see:
-
-			http://www.openvms.compaq.com/wizard/wiz_2637.html
-
-		The rcu_dereference() primitive is also an excellent
-		documentation aid, letting the person reading the
-		code know exactly which pointers are protected by RCU.
-		Please note that compilers can also reorder code, and
-		they are becoming increasingly aggressive about doing
-		just that.  The rcu_dereference() primitive therefore also
-		prevents destructive compiler optimizations.  However,
-		with a bit of devious creativity, it is possible to
-		mishandle the return value from rcu_dereference().
-		Please see rcu_dereference.txt in this directory for
-		more information.
-
-		The rcu_dereference() primitive is used by the
-		various "_rcu()" list-traversal primitives, such
-		as the list_for_each_entry_rcu().  Note that it is
-		perfectly legal (if redundant) for update-side code to
-		use rcu_dereference() and the "_rcu()" list-traversal
-		primitives.  This is particularly useful in code that
-		is common to readers and updaters.  However, lockdep
-		will complain if you access rcu_dereference() outside
-		of an RCU read-side critical section.  See lockdep.txt
-		to learn what to do about this.
-
-		Of course, neither rcu_dereference() nor the "_rcu()"
-		list-traversal primitives can substitute for a good
-		concurrency design coordinating among multiple updaters.
-
-	b.	If the list macros are being used, the list_add_tail_rcu()
-		and list_add_rcu() primitives must be used in order
-		to prevent weakly ordered machines from misordering
-		structure initialization and pointer planting.
-		Similarly, if the hlist macros are being used, the
-		hlist_add_head_rcu() primitive is required.
-
-	c.	If the list macros are being used, the list_del_rcu()
-		primitive must be used to keep list_del()'s pointer
-		poisoning from inflicting toxic effects on concurrent
-		readers.  Similarly, if the hlist macros are being used,
-		the hlist_del_rcu() primitive is required.
-
-		The list_replace_rcu() and hlist_replace_rcu() primitives
-		may be used to replace an old structure with a new one
-		in their respective types of RCU-protected lists.
-
-	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
-		type of RCU-protected linked lists.
-
-	e.	Updates must ensure that initialization of a given
-		structure happens before pointers to that structure are
-		publicized.  Use the rcu_assign_pointer() primitive
-		when publicizing a pointer to a structure that can
-		be traversed by an RCU read-side critical section.
-
-5.	If call_rcu() or call_srcu() is used, the callback function will
-	be called from softirq context.  In particular, it cannot block.
-
-6.	Since synchronize_rcu() can block, it cannot be called
-	from any sort of irq context.  The same rule applies
-	for synchronize_srcu(), synchronize_rcu_expedited(), and
-	synchronize_srcu_expedited().
-
-	The expedited forms of these primitives have the same semantics
-	as the non-expedited forms, but expediting is both expensive and
-	(with the exception of synchronize_srcu_expedited()) unfriendly
-	to real-time workloads.  Use of the expedited primitives should
-	be restricted to rare configuration-change operations that would
-	not normally be undertaken while a real-time workload is running.
-	However, real-time workloads can use rcupdate.rcu_normal kernel
-	boot parameter to completely disable expedited grace periods,
-	though this might have performance implications.
-
-	In particular, if you find yourself invoking one of the expedited
-	primitives repeatedly in a loop, please do everyone a favor:
-	Restructure your code so that it batches the updates, allowing
-	a single non-expedited primitive to cover the entire batch.
-	This will very likely be faster than the loop containing the
-	expedited primitive, and will be much much easier on the rest
-	of the system, especially to real-time workloads running on
-	the rest of the system.
-
-7.	As of v4.20, a given kernel implements only one RCU flavor,
-	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
-	If the updater uses call_rcu() or synchronize_rcu(),
-	then the corresponding readers my use rcu_read_lock() and
-	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
-	or any pair of primitives that disables and re-enables preemption,
-	for example, rcu_read_lock_sched() and rcu_read_unlock_sched().
-	If the updater uses synchronize_srcu() or call_srcu(),
-	then the corresponding readers must use srcu_read_lock() and
-	srcu_read_unlock(), and with the same srcu_struct.  The rules for
-	the expedited primitives are the same as for their non-expedited
-	counterparts.  Mixing things up will result in confusion and
-	broken kernels, and has even resulted in an exploitable security
-	issue.
-
-	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
-	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
-	in cases where local bottom halves are already known to be
-	disabled, for example, in irq or softirq context.  Commenting
-	such cases is a must, of course!  And the jury is still out on
-	whether the increased speed is worth it.
-
-8.	Although synchronize_rcu() is slower than is call_rcu(), it
-	usually results in simpler code.  So, unless update performance is
-	critically important, the updaters cannot block, or the latency of
-	synchronize_rcu() is visible from userspace, synchronize_rcu()
-	should be used in preference to call_rcu().  Furthermore,
-	kfree_rcu() usually results in even simpler code than does
-	synchronize_rcu() without synchronize_rcu()'s multi-millisecond
-	latency.  So please take advantage of kfree_rcu()'s "fire and
-	forget" memory-freeing capabilities where it applies.
-
-	An especially important property of the synchronize_rcu()
-	primitive is that it automatically self-limits: if grace periods
-	are delayed for whatever reason, then the synchronize_rcu()
-	primitive will correspondingly delay updates.  In contrast,
-	code using call_rcu() should explicitly limit update rate in
-	cases where grace periods are delayed, as failing to do so can
-	result in excessive realtime latencies or even OOM conditions.
-
-	Ways of gaining this self-limiting property when using call_rcu()
-	include:
-
-	a.	Keeping a count of the number of data-structure elements
-		used by the RCU-protected data structure, including
-		those waiting for a grace period to elapse.  Enforce a
-		limit on this number, stalling updates as needed to allow
-		previously deferred frees to complete.	Alternatively,
-		limit only the number awaiting deferred free rather than
-		the total number of elements.
-
-		One way to stall the updates is to acquire the update-side
-		mutex.	(Don't try this with a spinlock -- other CPUs
-		spinning on the lock could prevent the grace period
-		from ever ending.)  Another way to stall the updates
-		is for the updates to use a wrapper function around
-		the memory allocator, so that this wrapper function
-		simulates OOM when there is too much memory awaiting an
-		RCU grace period.  There are of course many other
-		variations on this theme.
-
-	b.	Limiting update rate.  For example, if updates occur only
-		once per hour, then no explicit rate limiting is
-		required, unless your system is already badly broken.
-		Older versions of the dcache subsystem take this approach,
-		guarding updates with a global lock, limiting their rate.
-
-	c.	Trusted update -- if updates can only be done manually by
-		superuser or some other trusted user, then it might not
-		be necessary to automatically limit them.  The theory
-		here is that superuser already has lots of ways to crash
-		the machine.
-
-	d.	Periodically invoke synchronize_rcu(), permitting a limited
-		number of updates per grace period.
-
-	The same cautions apply to call_srcu() and kfree_rcu().
-
-	Note that although these primitives do take action to avoid memory
-	exhaustion when any given CPU has too many callbacks, a determined
-	user could still exhaust memory.  This is especially the case
-	if a system with a large number of CPUs has been configured to
-	offload all of its RCU callbacks onto a single CPU, or if the
-	system has relatively little free memory.
-
-9.	All RCU list-traversal primitives, which include
-	rcu_dereference(), list_for_each_entry_rcu(), and
-	list_for_each_safe_rcu(), must be either within an RCU read-side
-	critical section or must be protected by appropriate update-side
-	locks.	RCU read-side critical sections are delimited by
-	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
-	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
-	case the matching rcu_dereference() primitive must be used in
-	order to keep lockdep happy, in this case, rcu_dereference_bh().
-
-	The reason that it is permissible to use RCU list-traversal
-	primitives when the update-side lock is held is that doing so
-	can be quite helpful in reducing code bloat when common code is
-	shared between readers and updaters.  Additional primitives
-	are provided for this case, as discussed in lockdep.txt.
-
-10.	Conversely, if you are in an RCU read-side critical section,
-	and you don't hold the appropriate update-side lock, you -must-
-	use the "_rcu()" variants of the list macros.  Failing to do so
-	will break Alpha, cause aggressive compilers to generate bad code,
-	and confuse people trying to read your code.
-
-11.	Any lock acquired by an RCU callback must be acquired elsewhere
-	with softirq disabled, e.g., via spin_lock_irqsave(),
-	spin_lock_bh(), etc.  Failing to disable softirq on a given
-	acquisition of that lock will result in deadlock as soon as
-	the RCU softirq handler happens to run your RCU callback while
-	interrupting that acquisition's critical section.
-
-12.	RCU callbacks can be and are executed in parallel.  In many cases,
-	the callback code simply wrappers around kfree(), so that this
-	is not an issue (or, more accurately, to the extent that it is
-	an issue, the memory-allocator locking handles it).  However,
-	if the callbacks do manipulate a shared data structure, they
-	must use whatever locking or other synchronization is required
-	to safely access and/or modify that data structure.
-
-	Do not assume that RCU callbacks will be executed on the same
-	CPU that executed the corresponding call_rcu() or call_srcu().
-	For example, if a given CPU goes offline while having an RCU
-	callback pending, then that RCU callback will execute on some
-	surviving CPU.	(If this was not the case, a self-spawning RCU
-	callback would prevent the victim CPU from ever going offline.)
-	Furthermore, CPUs designated by rcu_nocbs= might well -always-
-	have their RCU callbacks executed on some other CPUs, in fact,
-	for some  real-time workloads, this is the whole point of using
-	the rcu_nocbs= kernel boot parameter.
-
-13.	Unlike other forms of RCU, it -is- permissible to block in an
-	SRCU read-side critical section (demarked by srcu_read_lock()
-	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
-	Please note that if you don't need to sleep in read-side critical
-	sections, you should be using RCU rather than SRCU, because RCU
-	is almost always faster and easier to use than is SRCU.
-
-	Also unlike other forms of RCU, explicit initialization and
-	cleanup is required either at build time via DEFINE_SRCU()
-	or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
-	and cleanup_srcu_struct().  These last two are passed a
-	"struct srcu_struct" that defines the scope of a given
-	SRCU domain.  Once initialized, the srcu_struct is passed
-	to srcu_read_lock(), srcu_read_unlock() synchronize_srcu(),
-	synchronize_srcu_expedited(), and call_srcu().	A given
-	synchronize_srcu() waits only for SRCU read-side critical
-	sections governed by srcu_read_lock() and srcu_read_unlock()
-	calls that have been passed the same srcu_struct.  This property
-	is what makes sleeping read-side critical sections tolerable --
-	a given subsystem delays only its own updates, not those of other
-	subsystems using SRCU.	Therefore, SRCU is less prone to OOM the
-	system than RCU would be if RCU's read-side critical sections
-	were permitted to sleep.
-
-	The ability to sleep in read-side critical sections does not
-	come for free.	First, corresponding srcu_read_lock() and
-	srcu_read_unlock() calls must be passed the same srcu_struct.
-	Second, grace-period-detection overhead is amortized only
-	over those updates sharing a given srcu_struct, rather than
-	being globally amortized as they are for other forms of RCU.
-	Therefore, SRCU should be used in preference to rw_semaphore
-	only in extremely read-intensive situations, or in situations
-	requiring SRCU's read-side deadlock immunity or low read-side
-	realtime latency.  You should also consider percpu_rw_semaphore
-	when you need lightweight readers.
-
-	SRCU's expedited primitive (synchronize_srcu_expedited())
-	never sends IPIs to other CPUs, so it is easier on
-	real-time workloads than is synchronize_rcu_expedited().
-
-	Note that rcu_assign_pointer() relates to SRCU just as it does to
-	other forms of RCU, but instead of rcu_dereference() you should
-	use srcu_dereference() in order to avoid lockdep splats.
-
-14.	The whole point of call_rcu(), synchronize_rcu(), and friends
-	is to wait until all pre-existing readers have finished before
-	carrying out some otherwise-destructive operation.  It is
-	therefore critically important to -first- remove any path
-	that readers can follow that could be affected by the
-	destructive operation, and -only- -then- invoke call_rcu(),
-	synchronize_rcu(), or friends.
-
-	Because these primitives only wait for pre-existing readers, it
-	is the caller's responsibility to guarantee that any subsequent
-	readers will execute safely.
-
-15.	The various RCU read-side primitives do -not- necessarily contain
-	memory barriers.  You should therefore plan for the CPU
-	and the compiler to freely reorder code into and out of RCU
-	read-side critical sections.  It is the responsibility of the
-	RCU update-side primitives to deal with this.
-
-	For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
-	immediately after an srcu_read_unlock() to get a full barrier.
-
-16.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
-	__rcu sparse checks to validate your RCU code.	These can help
-	find problems as follows:
-
-	CONFIG_PROVE_LOCKING: check that accesses to RCU-protected data
-		structures are carried out under the proper RCU
-		read-side critical section, while holding the right
-		combination of locks, or whatever other conditions
-		are appropriate.
-
-	CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
-		same object to call_rcu() (or friends) before an RCU
-		grace period has elapsed since the last time that you
-		passed that same object to call_rcu() (or friends).
-
-	__rcu sparse checks: tag the pointer to the RCU-protected data
-		structure with __rcu, and sparse will warn you if you
-		access that pointer without the services of one of the
-		variants of rcu_dereference().
-
-	These debugging aids can help you find problems that are
-	otherwise extremely difficult to spot.
-
-17.	If you register a callback using call_rcu() or call_srcu(), and
-	pass in a function defined within a loadable module, then it in
-	necessary to wait for all pending callbacks to be invoked after
-	the last invocation and before unloading that module.  Note that
-	it is absolutely -not- sufficient to wait for a grace period!
-	The current (say) synchronize_rcu() implementation is -not-
-	guaranteed to wait for callbacks registered on other CPUs.
-	Or even on the current CPU if that CPU recently went offline
-	and came back online.
-
-	You instead need to use one of the barrier functions:
-
-	o	call_rcu() -> rcu_barrier()
-	o	call_srcu() -> srcu_barrier()
-
-	However, these barrier functions are absolutely -not- guaranteed
-	to wait for a grace period.  In fact, if there are no call_rcu()
-	callbacks waiting anywhere in the system, rcu_barrier() is within
-	its rights to return immediately.
-
-	So if you need to wait for both an RCU grace period and for
-	all pre-existing call_rcu() callbacks, you will need to execute
-	both rcu_barrier() and synchronize_rcu(), if necessary, using
-	something like workqueues to to execute them concurrently.
-
-	See rcubarrier.txt for more information.
diff --git a/Documentation/RCU/index.rst b/Documentation/RCU/index.rst
index 81a0a1e..e703d3d 100644
--- a/Documentation/RCU/index.rst
+++ b/Documentation/RCU/index.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
 .. _rcu_concepts:
 
 ============
@@ -8,10 +10,17 @@
    :maxdepth: 3
 
    arrayRCU
+   checklist
+   lockdep
+   lockdep-splat
    rcubarrier
    rcu_dereference
    whatisRCU
    rcu
+   rculist_nulls
+   rcuref
+   torture
+   stallwarn
    listRCU
    NMI-RCU
    UP
diff --git a/Documentation/RCU/lockdep-splat.rst b/Documentation/RCU/lockdep-splat.rst
new file mode 100644
index 0000000..2a5c79d
--- /dev/null
+++ b/Documentation/RCU/lockdep-splat.rst
@@ -0,0 +1,115 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================
+Lockdep-RCU Splat
+=================
+
+Lockdep-RCU was added to the Linux kernel in early 2010
+(http://lwn.net/Articles/371986/).  This facility checks for some common
+misuses of the RCU API, most notably using one of the rcu_dereference()
+family to access an RCU-protected pointer without the proper protection.
+When such misuse is detected, a lockdep-RCU splat is emitted.
+
+The usual cause of a lockdep-RCU splat is someone accessing an
+RCU-protected data structure without either (1) being in the right kind of
+RCU read-side critical section or (2) holding the right update-side lock.
+This problem can therefore be serious: it might result in random memory
+overwriting or worse.  There can of course be false positives, this
+being the real world and all that.
+
+So let's look at an example RCU lockdep splat from 3.0-rc5, one that
+has long since been fixed::
+
+    =============================
+    WARNING: suspicious RCU usage
+    -----------------------------
+    block/cfq-iosched.c:2776 suspicious rcu_dereference_protected() usage!
+
+other info that might help us debug this::
+
+    rcu_scheduler_active = 1, debug_locks = 0
+    3 locks held by scsi_scan_6/1552:
+    #0:  (&shost->scan_mutex){+.+.}, at: [<ffffffff8145efca>]
+    scsi_scan_host_selected+0x5a/0x150
+    #1:  (&eq->sysfs_lock){+.+.}, at: [<ffffffff812a5032>]
+    elevator_exit+0x22/0x60
+    #2:  (&(&q->__queue_lock)->rlock){-.-.}, at: [<ffffffff812b6233>]
+    cfq_exit_queue+0x43/0x190
+
+    stack backtrace:
+    Pid: 1552, comm: scsi_scan_6 Not tainted 3.0.0-rc5 #17
+    Call Trace:
+    [<ffffffff810abb9b>] lockdep_rcu_dereference+0xbb/0xc0
+    [<ffffffff812b6139>] __cfq_exit_single_io_context+0xe9/0x120
+    [<ffffffff812b626c>] cfq_exit_queue+0x7c/0x190
+    [<ffffffff812a5046>] elevator_exit+0x36/0x60
+    [<ffffffff812a802a>] blk_cleanup_queue+0x4a/0x60
+    [<ffffffff8145cc09>] scsi_free_queue+0x9/0x10
+    [<ffffffff81460944>] __scsi_remove_device+0x84/0xd0
+    [<ffffffff8145dca3>] scsi_probe_and_add_lun+0x353/0xb10
+    [<ffffffff817da069>] ? error_exit+0x29/0xb0
+    [<ffffffff817d98ed>] ? _raw_spin_unlock_irqrestore+0x3d/0x80
+    [<ffffffff8145e722>] __scsi_scan_target+0x112/0x680
+    [<ffffffff812c690d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
+    [<ffffffff817da069>] ? error_exit+0x29/0xb0
+    [<ffffffff812bcc60>] ? kobject_del+0x40/0x40
+    [<ffffffff8145ed16>] scsi_scan_channel+0x86/0xb0
+    [<ffffffff8145f0b0>] scsi_scan_host_selected+0x140/0x150
+    [<ffffffff8145f149>] do_scsi_scan_host+0x89/0x90
+    [<ffffffff8145f170>] do_scan_async+0x20/0x160
+    [<ffffffff8145f150>] ? do_scsi_scan_host+0x90/0x90
+    [<ffffffff810975b6>] kthread+0xa6/0xb0
+    [<ffffffff817db154>] kernel_thread_helper+0x4/0x10
+    [<ffffffff81066430>] ? finish_task_switch+0x80/0x110
+    [<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
+    [<ffffffff81097510>] ? __kthread_init_worker+0x70/0x70
+    [<ffffffff817db150>] ? gs_change+0xb/0xb
+
+Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows::
+
+	if (rcu_dereference(ioc->ioc_data) == cic) {
+
+This form says that it must be in a plain vanilla RCU read-side critical
+section, but the "other info" list above shows that this is not the
+case.  Instead, we hold three locks, one of which might be RCU related.
+And maybe that lock really does protect this reference.  If so, the fix
+is to inform RCU, perhaps by changing __cfq_exit_single_io_context() to
+take the struct request_queue "q" from cfq_exit_queue() as an argument,
+which would permit us to invoke rcu_dereference_protected() as follows::
+
+	if (rcu_dereference_protected(ioc->ioc_data,
+				      lockdep_is_held(&q->queue_lock)) == cic) {
+
+With this change, there would be no lockdep-RCU splat emitted if this
+code was invoked either from within an RCU read-side critical section
+or with the ->queue_lock held.  In particular, this would have suppressed
+the above lockdep-RCU splat because ->queue_lock is held (see #2 in the
+list above).
+
+On the other hand, perhaps we really do need an RCU read-side critical
+section.  In this case, the critical section must span the use of the
+return value from rcu_dereference(), or at least until there is some
+reference count incremented or some such.  One way to handle this is to
+add rcu_read_lock() and rcu_read_unlock() as follows::
+
+	rcu_read_lock();
+	if (rcu_dereference(ioc->ioc_data) == cic) {
+		spin_lock(&ioc->lock);
+		rcu_assign_pointer(ioc->ioc_data, NULL);
+		spin_unlock(&ioc->lock);
+	}
+	rcu_read_unlock();
+
+With this change, the rcu_dereference() is always within an RCU
+read-side critical section, which again would have suppressed the
+above lockdep-RCU splat.
+
+But in this particular case, we don't actually dereference the pointer
+returned from rcu_dereference().  Instead, that pointer is just compared
+to the cic pointer, which means that the rcu_dereference() can be replaced
+by rcu_access_pointer() as follows::
+
+	if (rcu_access_pointer(ioc->ioc_data) == cic) {
+
+Because it is legal to invoke rcu_access_pointer() without protection,
+this change would also suppress the above lockdep-RCU splat.
diff --git a/Documentation/RCU/lockdep-splat.txt b/Documentation/RCU/lockdep-splat.txt
deleted file mode 100644
index b809631..0000000
--- a/Documentation/RCU/lockdep-splat.txt
+++ /dev/null
@@ -1,110 +0,0 @@
-Lockdep-RCU was added to the Linux kernel in early 2010
-(http://lwn.net/Articles/371986/).  This facility checks for some common
-misuses of the RCU API, most notably using one of the rcu_dereference()
-family to access an RCU-protected pointer without the proper protection.
-When such misuse is detected, an lockdep-RCU splat is emitted.
-
-The usual cause of a lockdep-RCU slat is someone accessing an
-RCU-protected data structure without either (1) being in the right kind of
-RCU read-side critical section or (2) holding the right update-side lock.
-This problem can therefore be serious: it might result in random memory
-overwriting or worse.  There can of course be false positives, this
-being the real world and all that.
-
-So let's look at an example RCU lockdep splat from 3.0-rc5, one that
-has long since been fixed:
-
-=============================
-WARNING: suspicious RCU usage
------------------------------
-block/cfq-iosched.c:2776 suspicious rcu_dereference_protected() usage!
-
-other info that might help us debug this:
-
-
-rcu_scheduler_active = 1, debug_locks = 0
-3 locks held by scsi_scan_6/1552:
- #0:  (&shost->scan_mutex){+.+.}, at: [<ffffffff8145efca>]
-scsi_scan_host_selected+0x5a/0x150
- #1:  (&eq->sysfs_lock){+.+.}, at: [<ffffffff812a5032>]
-elevator_exit+0x22/0x60
- #2:  (&(&q->__queue_lock)->rlock){-.-.}, at: [<ffffffff812b6233>]
-cfq_exit_queue+0x43/0x190
-
-stack backtrace:
-Pid: 1552, comm: scsi_scan_6 Not tainted 3.0.0-rc5 #17
-Call Trace:
- [<ffffffff810abb9b>] lockdep_rcu_dereference+0xbb/0xc0
- [<ffffffff812b6139>] __cfq_exit_single_io_context+0xe9/0x120
- [<ffffffff812b626c>] cfq_exit_queue+0x7c/0x190
- [<ffffffff812a5046>] elevator_exit+0x36/0x60
- [<ffffffff812a802a>] blk_cleanup_queue+0x4a/0x60
- [<ffffffff8145cc09>] scsi_free_queue+0x9/0x10
- [<ffffffff81460944>] __scsi_remove_device+0x84/0xd0
- [<ffffffff8145dca3>] scsi_probe_and_add_lun+0x353/0xb10
- [<ffffffff817da069>] ? error_exit+0x29/0xb0
- [<ffffffff817d98ed>] ? _raw_spin_unlock_irqrestore+0x3d/0x80
- [<ffffffff8145e722>] __scsi_scan_target+0x112/0x680
- [<ffffffff812c690d>] ? trace_hardirqs_off_thunk+0x3a/0x3c
- [<ffffffff817da069>] ? error_exit+0x29/0xb0
- [<ffffffff812bcc60>] ? kobject_del+0x40/0x40
- [<ffffffff8145ed16>] scsi_scan_channel+0x86/0xb0
- [<ffffffff8145f0b0>] scsi_scan_host_selected+0x140/0x150
- [<ffffffff8145f149>] do_scsi_scan_host+0x89/0x90
- [<ffffffff8145f170>] do_scan_async+0x20/0x160
- [<ffffffff8145f150>] ? do_scsi_scan_host+0x90/0x90
- [<ffffffff810975b6>] kthread+0xa6/0xb0
- [<ffffffff817db154>] kernel_thread_helper+0x4/0x10
- [<ffffffff81066430>] ? finish_task_switch+0x80/0x110
- [<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
- [<ffffffff81097510>] ? __kthread_init_worker+0x70/0x70
- [<ffffffff817db150>] ? gs_change+0xb/0xb
-
-Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:
-
-	if (rcu_dereference(ioc->ioc_data) == cic) {
-
-This form says that it must be in a plain vanilla RCU read-side critical
-section, but the "other info" list above shows that this is not the
-case.  Instead, we hold three locks, one of which might be RCU related.
-And maybe that lock really does protect this reference.  If so, the fix
-is to inform RCU, perhaps by changing __cfq_exit_single_io_context() to
-take the struct request_queue "q" from cfq_exit_queue() as an argument,
-which would permit us to invoke rcu_dereference_protected as follows:
-
-	if (rcu_dereference_protected(ioc->ioc_data,
-				      lockdep_is_held(&q->queue_lock)) == cic) {
-
-With this change, there would be no lockdep-RCU splat emitted if this
-code was invoked either from within an RCU read-side critical section
-or with the ->queue_lock held.  In particular, this would have suppressed
-the above lockdep-RCU splat because ->queue_lock is held (see #2 in the
-list above).
-
-On the other hand, perhaps we really do need an RCU read-side critical
-section.  In this case, the critical section must span the use of the
-return value from rcu_dereference(), or at least until there is some
-reference count incremented or some such.  One way to handle this is to
-add rcu_read_lock() and rcu_read_unlock() as follows:
-
-	rcu_read_lock();
-	if (rcu_dereference(ioc->ioc_data) == cic) {
-		spin_lock(&ioc->lock);
-		rcu_assign_pointer(ioc->ioc_data, NULL);
-		spin_unlock(&ioc->lock);
-	}
-	rcu_read_unlock();
-
-With this change, the rcu_dereference() is always within an RCU
-read-side critical section, which again would have suppressed the
-above lockdep-RCU splat.
-
-But in this particular case, we don't actually dereference the pointer
-returned from rcu_dereference().  Instead, that pointer is just compared
-to the cic pointer, which means that the rcu_dereference() can be replaced
-by rcu_access_pointer() as follows:
-
-	if (rcu_access_pointer(ioc->ioc_data) == cic) {
-
-Because it is legal to invoke rcu_access_pointer() without protection,
-this change would also suppress the above lockdep-RCU splat.
diff --git a/Documentation/RCU/lockdep.rst b/Documentation/RCU/lockdep.rst
new file mode 100644
index 0000000..cc860a0
--- /dev/null
+++ b/Documentation/RCU/lockdep.rst
@@ -0,0 +1,116 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========================
+RCU and lockdep checking
+========================
+
+All flavors of RCU have lockdep checking available, so that lockdep is
+aware of when each task enters and leaves any flavor of RCU read-side
+critical section.  Each flavor of RCU is tracked separately (but note
+that this is not the case in 2.6.32 and earlier).  This allows lockdep's
+tracking to include RCU state, which can sometimes help when debugging
+deadlocks and the like.
+
+In addition, RCU provides the following primitives that check lockdep's
+state::
+
+	rcu_read_lock_held() for normal RCU.
+	rcu_read_lock_bh_held() for RCU-bh.
+	rcu_read_lock_sched_held() for RCU-sched.
+	srcu_read_lock_held() for SRCU.
+
+These functions are conservative, and will therefore return 1 if they
+aren't certain (for example, if CONFIG_DEBUG_LOCK_ALLOC is not set).
+This prevents things like WARN_ON(!rcu_read_lock_held()) from giving false
+positives when lockdep is disabled.
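+
+A typical use is to document and check a function's locking requirement.
+As a sketch (the pointer "global_foo" and struct "foo" are illustrative
+assumptions)::
+
+	static struct foo *get_current_foo(void)
+	{
+		/* Must be called from within an RCU read-side critical section. */
+		WARN_ON_ONCE(!rcu_read_lock_held());
+		return rcu_dereference(global_foo);
+	}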
+
+In addition, a separate kernel config parameter CONFIG_PROVE_RCU enables
+checking of rcu_dereference() primitives:
+
+	rcu_dereference(p):
+		Check for RCU read-side critical section.
+	rcu_dereference_bh(p):
+		Check for RCU-bh read-side critical section.
+	rcu_dereference_sched(p):
+		Check for RCU-sched read-side critical section.
+	srcu_dereference(p, sp):
+		Check for SRCU read-side critical section.
+	rcu_dereference_check(p, c):
+		Use explicit check expression "c" along with
+		rcu_read_lock_held().  This is useful in code that is
+		invoked by both RCU readers and updaters.
+	rcu_dereference_bh_check(p, c):
+		Use explicit check expression "c" along with
+		rcu_read_lock_bh_held().  This is useful in code that
+		is invoked by both RCU-bh readers and updaters.
+	rcu_dereference_sched_check(p, c):
+		Use explicit check expression "c" along with
+		rcu_read_lock_sched_held().  This is useful in code that
+		is invoked by both RCU-sched readers and updaters.
+	srcu_dereference_check(p, c):
+		Use explicit check expression "c" along with
+		srcu_read_lock_held().  This is useful in code that
+		is invoked by both SRCU readers and updaters.
+	rcu_dereference_raw(p):
+		Don't check.  (Use sparingly, if at all.)
+	rcu_dereference_protected(p, c):
+		Use explicit check expression "c", and omit all barriers
+		and compiler constraints.  This is useful when the data
+		structure cannot change, for example, in code that is
+		invoked only by updaters.
+	rcu_access_pointer(p):
+		Return the value of the pointer and omit all barriers,
+		but retain the compiler constraints that prevent duplicating
+		or coalescing.  This is useful when testing the
+		value of the pointer itself, for example, against NULL.
+
+The rcu_dereference_check() check expression can be any boolean
+expression, but would normally include a lockdep expression.  For a
+moderately ornate example, consider the following::
+
+	file = rcu_dereference_check(fdt->fd[fd],
+				     lockdep_is_held(&files->file_lock) ||
+				     atomic_read(&files->count) == 1);
+
+This expression picks up the pointer "fdt->fd[fd]" in an RCU-safe manner,
+and, if CONFIG_PROVE_RCU is configured, verifies that this expression
+is used in:
+
+1.	An RCU read-side critical section (implicit), or
+2.	with files->file_lock held, or
+3.	on an unshared files_struct.
+
+In case (1), the pointer is picked up in an RCU-safe manner for vanilla
+RCU read-side critical sections, in case (2) the ->file_lock prevents
+any change from taking place, and finally, in case (3) the current task
+is the only task accessing the files_struct, again preventing any change
+from taking place.  If the above statement was invoked only from updater
+code, it could instead be written as follows::
+
+	file = rcu_dereference_protected(fdt->fd[fd],
+					 lockdep_is_held(&files->file_lock) ||
+					 atomic_read(&files->count) == 1);
+
+This would verify cases #2 and #3 above, and furthermore lockdep would
+complain if this was used in an RCU read-side critical section unless one
+of these two cases held.  Because rcu_dereference_protected() omits all
+barriers and compiler constraints, it generates better code than do the
+other flavors of rcu_dereference().  On the other hand, it is illegal
+to use rcu_dereference_protected() if either the RCU-protected pointer
+or the RCU-protected data that it points to can change concurrently.
+
+Like rcu_dereference(), when lockdep is enabled, RCU list and hlist
+traversal primitives check for being called from within an RCU read-side
+critical section.  However, a lockdep expression can be passed to them
+as an additional optional argument.  With this lockdep expression, these
+traversal primitives will complain only if the lockdep expression is
+false and they are called from outside any RCU read-side critical section.
+
+For example, the workqueue for_each_pwq() macro is intended to be used
+either within an RCU read-side critical section or with wq->mutex held.
+It is thus implemented as follows::
+
+	#define for_each_pwq(pwq, wq)						\
+		list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,		\
+					lock_is_held(&(wq->mutex).dep_map))
diff --git a/Documentation/RCU/lockdep.txt b/Documentation/RCU/lockdep.txt
deleted file mode 100644
index 89db949e..0000000
--- a/Documentation/RCU/lockdep.txt
+++ /dev/null
@@ -1,112 +0,0 @@
-RCU and lockdep checking
-
-All flavors of RCU have lockdep checking available, so that lockdep is
-aware of when each task enters and leaves any flavor of RCU read-side
-critical section.  Each flavor of RCU is tracked separately (but note
-that this is not the case in 2.6.32 and earlier).  This allows lockdep's
-tracking to include RCU state, which can sometimes help when debugging
-deadlocks and the like.
-
-In addition, RCU provides the following primitives that check lockdep's
-state:
-
-	rcu_read_lock_held() for normal RCU.
-	rcu_read_lock_bh_held() for RCU-bh.
-	rcu_read_lock_sched_held() for RCU-sched.
-	srcu_read_lock_held() for SRCU.
-
-These functions are conservative, and will therefore return 1 if they
-aren't certain (for example, if CONFIG_DEBUG_LOCK_ALLOC is not set).
-This prevents things like WARN_ON(!rcu_read_lock_held()) from giving false
-positives when lockdep is disabled.
-
-In addition, a separate kernel config parameter CONFIG_PROVE_RCU enables
-checking of rcu_dereference() primitives:
-
-	rcu_dereference(p):
-		Check for RCU read-side critical section.
-	rcu_dereference_bh(p):
-		Check for RCU-bh read-side critical section.
-	rcu_dereference_sched(p):
-		Check for RCU-sched read-side critical section.
-	srcu_dereference(p, sp):
-		Check for SRCU read-side critical section.
-	rcu_dereference_check(p, c):
-		Use explicit check expression "c" along with
-		rcu_read_lock_held().  This is useful in code that is
-		invoked by both RCU readers and updaters.
-	rcu_dereference_bh_check(p, c):
-		Use explicit check expression "c" along with
-		rcu_read_lock_bh_held().  This is useful in code that
-		is invoked by both RCU-bh readers and updaters.
-	rcu_dereference_sched_check(p, c):
-		Use explicit check expression "c" along with
-		rcu_read_lock_sched_held().  This is useful in code that
-		is invoked by both RCU-sched readers and updaters.
-	srcu_dereference_check(p, c):
-		Use explicit check expression "c" along with
-		srcu_read_lock_held()().  This is useful in code that
-		is invoked by both SRCU readers and updaters.
-	rcu_dereference_raw(p):
-		Don't check.  (Use sparingly, if at all.)
-	rcu_dereference_protected(p, c):
-		Use explicit check expression "c", and omit all barriers
-		and compiler constraints.  This is useful when the data
-		structure cannot change, for example, in code that is
-		invoked only by updaters.
-	rcu_access_pointer(p):
-		Return the value of the pointer and omit all barriers,
-		but retain the compiler constraints that prevent duplicating
-		or coalescsing.  This is useful when when testing the
-		value of the pointer itself, for example, against NULL.
-
-The rcu_dereference_check() check expression can be any boolean
-expression, but would normally include a lockdep expression.  However,
-any boolean expression can be used.  For a moderately ornate example,
-consider the following:
-
-	file = rcu_dereference_check(fdt->fd[fd],
-				     lockdep_is_held(&files->file_lock) ||
-				     atomic_read(&files->count) == 1);
-
-This expression picks up the pointer "fdt->fd[fd]" in an RCU-safe manner,
-and, if CONFIG_PROVE_RCU is configured, verifies that this expression
-is used in:
-
-1.	An RCU read-side critical section (implicit), or
-2.	with files->file_lock held, or
-3.	on an unshared files_struct.
-
-In case (1), the pointer is picked up in an RCU-safe manner for vanilla
-RCU read-side critical sections, in case (2) the ->file_lock prevents
-any change from taking place, and finally, in case (3) the current task
-is the only task accessing the file_struct, again preventing any change
-from taking place.  If the above statement was invoked only from updater
-code, it could instead be written as follows:
-
-	file = rcu_dereference_protected(fdt->fd[fd],
-					 lockdep_is_held(&files->file_lock) ||
-					 atomic_read(&files->count) == 1);
-
-This would verify cases #2 and #3 above, and furthermore lockdep would
-complain if this was used in an RCU read-side critical section unless one
-of these two cases held.  Because rcu_dereference_protected() omits all
-barriers and compiler constraints, it generates better code than do the
-other flavors of rcu_dereference().  On the other hand, it is illegal
-to use rcu_dereference_protected() if either the RCU-protected pointer
-or the RCU-protected data that it points to can change concurrently.
-
-Like rcu_dereference(), when lockdep is enabled, RCU list and hlist
-traversal primitives check for being called from within an RCU read-side
-critical section.  However, a lockdep expression can be passed to them
-as a additional optional argument.  With this lockdep expression, these
-traversal primitives will complain only if the lockdep expression is
-false and they are called from outside any RCU read-side critical section.
-
-For example, the workqueue for_each_pwq() macro is intended to be used
-either within an RCU read-side critical section or with wq->mutex held.
-It is thus implemented as follows:
-
-	#define for_each_pwq(pwq, wq)
-		list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,
-					lock_is_held(&(wq->mutex).dep_map))
diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
new file mode 100644
index 0000000..a9fc774
--- /dev/null
+++ b/Documentation/RCU/rculist_nulls.rst
@@ -0,0 +1,200 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================================================
+Using RCU hlist_nulls to protect list and objects
+=================================================
+
+This section describes how to use hlist_nulls to
+protect read-mostly linked lists and
+objects using SLAB_TYPESAFE_BY_RCU allocations.
+
+Please read the basics in Documentation/RCU/listRCU.rst
+
+Using 'nulls'
+=============
+
+Using special markers (called 'nulls') is a convenient way
+to solve the following problem:
+
+A typical RCU linked list managing objects which are
+allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can
+use the following algorithms:
+
+1) Lookup algo
+--------------
+
+::
+
+  rcu_read_lock()
+  begin:
+  obj = lockless_lookup(key);
+  if (obj) {
+    if (!try_get_ref(obj)) // might fail for free objects
+      goto begin;
+    /*
+    * Because a writer could delete the object, and another writer
+    * could reuse it before the RCU grace period expires, we must
+    * check the key after getting a reference on the object.
+    */
+    if (obj->key != key) { // not the object we expected
+      put_ref(obj);
+      goto begin;
+    }
+  }
+  rcu_read_unlock();
+
+Beware that lockless_lookup(key) cannot use the traditional
+hlist_for_each_entry_rcu(), but must instead use a version with an
+additional memory barrier (smp_rmb())
+
+::
+
+  lockless_lookup(key)
+  {
+    struct hlist_node *node, *next;
+    for (pos = rcu_dereference((head)->first);
+        pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
+        ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+        pos = rcu_dereference(next))
+      if (obj->key == key)
+        return obj;
+    return NULL;
+  }
+
+And note that the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::
+
+  struct hlist_node *node;
+  for (pos = rcu_dereference((head)->first);
+        pos && ({ prefetch(pos->next); 1; }) &&
+        ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
+        pos = rcu_dereference(pos->next))
+   if (obj->key == key)
+     return obj;
+  return NULL;
+
+Quoting Corey Minyard::
+
+  "If the object is moved from one list to another list in-between the
+  time the hash is calculated and the next field is accessed, and the
+  object has moved to the end of a new list, the traversal will not
+  complete properly on the list it should have, since the object will
+  be on the end of the new list and there's not a way to tell it's on a
+  new list and restart the list traversal. I think that this can be
+  solved by pre-fetching the "next" field (with proper barriers) before
+  checking the key."
+
+2) Insert algo
+--------------
+
+We need to make sure a reader cannot read the new 'obj->obj_next' value
+together with the previous value of 'obj->key'.  Otherwise, an item could
+be deleted from one chain and inserted into another chain.  If the new
+chain was empty before the move, the 'next' pointer is NULL, and a
+lockless reader cannot detect that it missed the following items in the
+original chain.
+
+::
+
+  /*
+  * Please note that new inserts are done at the head of list,
+  * not in the middle or end.
+  */
+  obj = kmem_cache_alloc(...);
+  lock_chain(); // typically a spin_lock()
+  obj->key = key;
+  /*
+  * we need to make sure obj->key is updated before obj->next
+  * or obj->refcnt
+  */
+  smp_wmb();
+  atomic_set(&obj->refcnt, 1);
+  hlist_add_head_rcu(&obj->obj_node, list);
+  unlock_chain(); // typically a spin_unlock()
+
+
+3) Remove algo
+--------------
+Nothing special here, we can use a standard RCU hlist deletion.
+But thanks to SLAB_TYPESAFE_BY_RCU, beware that a deleted object can be
+reused very quickly (before the end of the RCU grace period)
+
+::
+
+  if (put_last_reference_on(obj)) {
+    lock_chain(); // typically a spin_lock()
+    hlist_del_init_rcu(&obj->obj_node);
+    unlock_chain(); // typically a spin_unlock()
+    kmem_cache_free(cachep, obj);
+  }
+
+
+
+--------------------------------------------------------------------------
+
+Avoiding extra smp_rmb()
+========================
+
+With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
+and extra smp_wmb() in insert function.
+
+For example, if we choose to store the slot number as the 'nulls'
+end-of-list marker for each slot of the hash table, we can detect
+a race (some writer did a delete and/or a move of an object to
+another chain) by checking the final 'nulls' value when the lookup
+reaches the end of a chain.  If the final 'nulls' value is not the
+slot number, then we must restart the lookup at the beginning.  If
+the object was moved within the same chain, the reader doesn't care:
+it might eventually scan the list again without harm.
+
+
+1) lookup algo
+--------------
+
+::
+
+  head = &table[slot];
+  rcu_read_lock();
+  begin:
+  hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
+    if (obj->key == key) {
+      if (!try_get_ref(obj)) // might fail for free objects
+        goto begin;
+      if (obj->key != key) { // not the object we expected
+        put_ref(obj);
+        goto begin;
+      }
+      goto out;
+    }
+  }
+  /*
+  * if the nulls value we got at the end of this lookup is
+  * not the expected one, we must restart lookup.
+  * We probably met an item that was moved to another chain.
+  */
+  if (get_nulls_value(node) != slot)
+    goto begin;
+  obj = NULL;
+
+  out:
+  rcu_read_unlock();
+
+2) Insert function
+------------------
+
+::
+
+  /*
+  * Please note that new inserts are done at the head of list,
+  * not in the middle or end.
+  */
+  obj = kmem_cache_alloc(cachep);
+  lock_chain(); // typically a spin_lock()
+  obj->key = key;
+  /*
+  * changes to obj->key must be visible before refcnt one
+  */
+  smp_wmb();
+  atomic_set(&obj->refcnt, 1);
+  /*
+  * insert obj in RCU way (readers might be traversing chain)
+  */
+  hlist_nulls_add_head_rcu(&obj->obj_node, list);
+  unlock_chain(); // typically a spin_unlock()
diff --git a/Documentation/RCU/rculist_nulls.txt b/Documentation/RCU/rculist_nulls.txt
deleted file mode 100644
index 23f115d..0000000
--- a/Documentation/RCU/rculist_nulls.txt
+++ /dev/null
@@ -1,172 +0,0 @@
-Using hlist_nulls to protect read-mostly linked lists and
-objects using SLAB_TYPESAFE_BY_RCU allocations.
-
-Please read the basics in Documentation/RCU/listRCU.rst
-
-Using special makers (called 'nulls') is a convenient way
-to solve following problem :
-
-A typical RCU linked list managing objects which are
-allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can
-use following algos :
-
-1) Lookup algo
---------------
-rcu_read_lock()
-begin:
-obj = lockless_lookup(key);
-if (obj) {
-  if (!try_get_ref(obj)) // might fail for free objects
-    goto begin;
-  /*
-   * Because a writer could delete object, and a writer could
-   * reuse these object before the RCU grace period, we
-   * must check key after getting the reference on object
-   */
-  if (obj->key != key) { // not the object we expected
-     put_ref(obj);
-     goto begin;
-   }
-}
-rcu_read_unlock();
-
-Beware that lockless_lookup(key) cannot use traditional hlist_for_each_entry_rcu()
-but a version with an additional memory barrier (smp_rmb())
-
-lockless_lookup(key)
-{
-   struct hlist_node *node, *next;
-   for (pos = rcu_dereference((head)->first);
-          pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
-          ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
-          pos = rcu_dereference(next))
-      if (obj->key == key)
-         return obj;
-   return NULL;
-
-And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb() :
-
-   struct hlist_node *node;
-   for (pos = rcu_dereference((head)->first);
-		pos && ({ prefetch(pos->next); 1; }) &&
-		({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });
-		pos = rcu_dereference(pos->next))
-      if (obj->key == key)
-         return obj;
-   return NULL;
-}
-
-Quoting Corey Minyard :
-
-"If the object is moved from one list to another list in-between the
- time the hash is calculated and the next field is accessed, and the
- object has moved to the end of a new list, the traversal will not
- complete properly on the list it should have, since the object will
- be on the end of the new list and there's not a way to tell it's on a
- new list and restart the list traversal.  I think that this can be
- solved by pre-fetching the "next" field (with proper barriers) before
- checking the key."
-
-2) Insert algo :
-----------------
-
-We need to make sure a reader cannot read the new 'obj->obj_next' value
-and previous value of 'obj->key'. Or else, an item could be deleted
-from a chain, and inserted into another chain. If new chain was empty
-before the move, 'next' pointer is NULL, and lockless reader can
-not detect it missed following items in original chain.
-
-/*
- * Please note that new inserts are done at the head of list,
- * not in the middle or end.
- */
-obj = kmem_cache_alloc(...);
-lock_chain(); // typically a spin_lock()
-obj->key = key;
-/*
- * we need to make sure obj->key is updated before obj->next
- * or obj->refcnt
- */
-smp_wmb();
-atomic_set(&obj->refcnt, 1);
-hlist_add_head_rcu(&obj->obj_node, list);
-unlock_chain(); // typically a spin_unlock()
-
-
-3) Remove algo
---------------
-Nothing special here, we can use a standard RCU hlist deletion.
-But thanks to SLAB_TYPESAFE_BY_RCU, beware a deleted object can be reused
-very very fast (before the end of RCU grace period)
-
-if (put_last_reference_on(obj) {
-   lock_chain(); // typically a spin_lock()
-   hlist_del_init_rcu(&obj->obj_node);
-   unlock_chain(); // typically a spin_unlock()
-   kmem_cache_free(cachep, obj);
-}
-
-
-
---------------------------------------------------------------------------
-With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
-and extra smp_wmb() in insert function.
-
-For example, if we choose to store the slot number as the 'nulls'
-end-of-list marker for each slot of the hash table, we can detect
-a race (some writer did a delete and/or a move of an object
-to another chain) checking the final 'nulls' value if
-the lookup met the end of chain. If final 'nulls' value
-is not the slot number, then we must restart the lookup at
-the beginning. If the object was moved to the same chain,
-then the reader doesn't care : It might eventually
-scan the list again without harm.
-
-
-1) lookup algo
-
- head = &table[slot];
- rcu_read_lock();
-begin:
- hlist_nulls_for_each_entry_rcu(obj, node, head, member) {
-   if (obj->key == key) {
-      if (!try_get_ref(obj)) // might fail for free objects
-         goto begin;
-      if (obj->key != key) { // not the object we expected
-         put_ref(obj);
-         goto begin;
-      }
-  goto out;
- }
-/*
- * if the nulls value we got at the end of this lookup is
- * not the expected one, we must restart lookup.
- * We probably met an item that was moved to another chain.
- */
- if (get_nulls_value(node) != slot)
-   goto begin;
- obj = NULL;
-
-out:
- rcu_read_unlock();
-
-2) Insert function :
---------------------
-
-/*
- * Please note that new inserts are done at the head of list,
- * not in the middle or end.
- */
-obj = kmem_cache_alloc(cachep);
-lock_chain(); // typically a spin_lock()
-obj->key = key;
-/*
- * changes to obj->key must be visible before refcnt one
- */
-smp_wmb();
-atomic_set(&obj->refcnt, 1);
-/*
- * insert obj in RCU way (readers might be traversing chain)
- */
-hlist_nulls_add_head_rcu(&obj->obj_node, list);
-unlock_chain(); // typically a spin_unlock()
diff --git a/Documentation/RCU/rcuref.rst b/Documentation/RCU/rcuref.rst
new file mode 100644
index 0000000..b33aeb1
--- /dev/null
+++ b/Documentation/RCU/rcuref.rst
@@ -0,0 +1,158 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================================================================
+Reference-count design for elements of lists/arrays protected by RCU
+====================================================================
+
+
+Please note that the percpu-ref feature is likely your first
+stop if you need to combine reference counts and RCU.  Please see
+include/linux/percpu-refcount.h for more information.  However, in
+those unusual cases where percpu-ref would consume too much memory,
+please read on.
+
+------------------------------------------------------------------------
+
+Reference counting on elements of lists which are protected by traditional
+reader/writer spinlocks or semaphores is straightforward:
+
+CODE LISTING A::
+
+    1.					    2.
+    add()				    search_and_reference()
+    {					    {
+	alloc_object				read_lock(&list_lock);
+	...					search_for_element
+	atomic_set(&el->rc, 1);			atomic_inc(&el->rc);
+	write_lock(&list_lock);			 ...
+	add_element				read_unlock(&list_lock);
+	...					...
+	write_unlock(&list_lock);	   }
+    }
+
+    3.					    4.
+    release_referenced()		    delete()
+    {					    {
+	...					write_lock(&list_lock);
+	if(atomic_dec_and_test(&el->rc))	...
+	    kfree(el);
+	...					remove_element
+    }						write_unlock(&list_lock);
+						...
+						if (atomic_dec_and_test(&el->rc))
+						    kfree(el);
+						...
+					    }
+
+If this list/array is made lock-free using RCU, as in changing the
+write_lock() in add() and delete() to spin_lock() and changing read_lock()
+in search_and_reference() to rcu_read_lock(), the atomic_inc() in
+search_and_reference() could potentially hold a reference to an element
+which has already been deleted from the list/array.  Use
+atomic_inc_not_zero() in this scenario as follows:
+
+CODE LISTING B::
+
+    1.					    2.
+    add()				    search_and_reference()
+    {					    {
+	alloc_object				rcu_read_lock();
+	...					search_for_element
+	atomic_set(&el->rc, 1);			if (!atomic_inc_not_zero(&el->rc)) {
+	spin_lock(&list_lock);			    rcu_read_unlock();
+						    return FAIL;
+	add_element				}
+	...					...
+	spin_unlock(&list_lock);		rcu_read_unlock();
+    }					    }
+    3.					    4.
+    release_referenced()		    delete()
+    {					    {
+	...					spin_lock(&list_lock);
+	if (atomic_dec_and_test(&el->rc))	...
+	    call_rcu(&el->head, el_free);	remove_element
+	...					spin_unlock(&list_lock);
+    }						...
+						if (atomic_dec_and_test(&el->rc))
+						    call_rcu(&el->head, el_free);
+						...
+					    }
+
+Sometimes, a reference to the element needs to be obtained in the
+update (write) stream.  In such cases, atomic_inc_not_zero() might be
+overkill, since we hold the update-side spinlock.  One might instead
+use atomic_inc() in such cases.
+
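+A minimal sketch of such an update-side acquisition, assuming the same
+hypothetical el structure, list_lock, and search_for_element() used in
+the listings above, might look like this::
+
+	/*
+	 * We already hold the update-side lock, so a concurrent delete()
+	 * cannot drop el->rc to zero here, and atomic_inc_not_zero()
+	 * is unnecessary.
+	 */
+	spin_lock(&list_lock);
+	el = search_for_element(key);
+	if (el)
+		atomic_inc(&el->rc);
+	spin_unlock(&list_lock);
+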
+It is not always convenient to deal with "FAIL" in the
+search_and_reference() code path.  In such cases, the
+atomic_dec_and_test() may be moved from delete() to el_free()
+as follows:
+
+CODE LISTING C::
+
+    1.					    2.
+    add()				    search_and_reference()
+    {					    {
+	alloc_object				rcu_read_lock();
+	...					search_for_element
+	atomic_set(&el->rc, 1);			atomic_inc(&el->rc);
+	spin_lock(&list_lock);			...
+
+	add_element				rcu_read_unlock();
+	...				    }
+	spin_unlock(&list_lock);	    4.
+    }					    delete()
+    3.					    {
+    release_referenced()			spin_lock(&list_lock);
+    {						...
+	...					remove_element
+	if (atomic_dec_and_test(&el->rc))	spin_unlock(&list_lock);
+	    kfree(el);				...
+	...					call_rcu(&el->head, el_free);
+    }						...
+    5.					    }
+    void el_free(struct rcu_head *rhp)
+    {
+	release_referenced();
+    }
+
+The key point is that the initial reference added by add() is not removed
+until after a grace period has elapsed following removal.  This means that
+search_and_reference() cannot find this element, which means that the value
+of el->rc cannot increase.  Thus, once it reaches zero, there are no
+readers that can or ever will be able to reference the element.  The
+element can therefore safely be freed.  This in turn guarantees that if
+any reader finds the element, that reader may safely acquire a reference
+without checking the value of the reference counter.
+
+A clear advantage of the RCU-based pattern in listing C over the one
+in listing B is that any call to search_and_reference() that locates
+a given object will succeed in obtaining a reference to that object,
+even given a concurrent invocation of delete() for that same object.
+Similarly, a clear advantage of both listings B and C over listing A is
+that a call to delete() is not delayed even if there are an arbitrarily
+large number of calls to search_and_reference() searching for the same
+object that delete() was invoked on.  Instead, all that is delayed is
+the eventual invocation of kfree(), which is usually not a problem on
+modern computer systems, even the small ones.
+
+In cases where delete() can sleep, synchronize_rcu() can be called from
+delete(), so that el_free() can be subsumed into delete() as follows::
+
+    4.
+    delete()
+    {
+	spin_lock(&list_lock);
+	...
+	remove_element
+	spin_unlock(&list_lock);
+	...
+	synchronize_rcu();
+	if (atomic_dec_and_test(&el->rc))
+	    kfree(el);
+	...
+    }
+
+As additional examples in the kernel, the pattern in listing C is used by
+reference counting of struct pid, while the pattern in listing B is used by
+struct posix_acl.
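+
+For readers who would rather see the pattern in listing C as ordinary
+(if simplified) C code, the following sketch is one way it might look.
+The structure, list, and function names here are purely illustrative
+and do not correspond to any particular kernel user; add() and
+release_referenced() are omitted for brevity::
+
+	struct element {
+		struct list_head list;
+		struct rcu_head rcu;
+		atomic_t rc;
+		int key;
+	};
+
+	static LIST_HEAD(elements);
+	static DEFINE_SPINLOCK(list_lock);
+
+	static void el_free(struct rcu_head *rhp)
+	{
+		struct element *el = container_of(rhp, struct element, rcu);
+
+		/* Drop the initial reference that add() installed. */
+		if (atomic_dec_and_test(&el->rc))
+			kfree(el);
+	}
+
+	static void delete(struct element *el)
+	{
+		spin_lock(&list_lock);
+		list_del_rcu(&el->list);
+		spin_unlock(&list_lock);
+		/* The initial reference is dropped only after a grace period. */
+		call_rcu(&el->rcu, el_free);
+	}
+
+	static struct element *search_and_reference(int key)
+	{
+		struct element *el;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(el, &elements, list) {
+			if (el->key == key) {
+				/* Cannot observe zero: el_free() has not yet run. */
+				atomic_inc(&el->rc);
+				rcu_read_unlock();
+				return el;
+			}
+		}
+		rcu_read_unlock();
+		return NULL;
+	}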
diff --git a/Documentation/RCU/rcuref.txt b/Documentation/RCU/rcuref.txt
deleted file mode 100644
index 5e6429d6..0000000
--- a/Documentation/RCU/rcuref.txt
+++ /dev/null
@@ -1,151 +0,0 @@
-Reference-count design for elements of lists/arrays protected by RCU.
-
-
-Please note that the percpu-ref feature is likely your first
-stop if you need to combine reference counts and RCU.  Please see
-include/linux/percpu-refcount.h for more information.  However, in
-those unusual cases where percpu-ref would consume too much memory,
-please read on.
-
-------------------------------------------------------------------------
-
-Reference counting on elements of lists which are protected by traditional
-reader/writer spinlocks or semaphores are straightforward:
-
-CODE LISTING A:
-1.				2.
-add()				search_and_reference()
-{				{
-    alloc_object		    read_lock(&list_lock);
-    ...				    search_for_element
-    atomic_set(&el->rc, 1);	    atomic_inc(&el->rc);
-    write_lock(&list_lock);	     ...
-    add_element			    read_unlock(&list_lock);
-    ...				    ...
-    write_unlock(&list_lock);	}
-}
-
-3.					4.
-release_referenced()			delete()
-{					{
-    ...					    write_lock(&list_lock);
-    if(atomic_dec_and_test(&el->rc))	    ...
-	kfree(el);
-    ...					    remove_element
-}					    write_unlock(&list_lock);
- 					    ...
-					    if (atomic_dec_and_test(&el->rc))
-					        kfree(el);
-					    ...
-					}
-
-If this list/array is made lock free using RCU as in changing the
-write_lock() in add() and delete() to spin_lock() and changing read_lock()
-in search_and_reference() to rcu_read_lock(), the atomic_inc() in
-search_and_reference() could potentially hold reference to an element which
-has already been deleted from the list/array.  Use atomic_inc_not_zero()
-in this scenario as follows:
-
-CODE LISTING B:
-1.					2.
-add()					search_and_reference()
-{					{
-    alloc_object			    rcu_read_lock();
-    ...					    search_for_element
-    atomic_set(&el->rc, 1);		    if (!atomic_inc_not_zero(&el->rc)) {
-    spin_lock(&list_lock);		        rcu_read_unlock();
-					        return FAIL;
-    add_element				    }
-    ...					    ...
-    spin_unlock(&list_lock);		    rcu_read_unlock();
-}					}
-3.					4.
-release_referenced()			delete()
-{					{
-    ...					    spin_lock(&list_lock);
-    if (atomic_dec_and_test(&el->rc))       ...
-        call_rcu(&el->head, el_free);       remove_element
-    ...                                     spin_unlock(&list_lock);
-} 					    ...
-					    if (atomic_dec_and_test(&el->rc))
-					        call_rcu(&el->head, el_free);
-					    ...
-					}
-
-Sometimes, a reference to the element needs to be obtained in the
-update (write) stream.  In such cases, atomic_inc_not_zero() might be
-overkill, since we hold the update-side spinlock.  One might instead
-use atomic_inc() in such cases.
-
-It is not always convenient to deal with "FAIL" in the
-search_and_reference() code path.  In such cases, the
-atomic_dec_and_test() may be moved from delete() to el_free()
-as follows:
-
-CODE LISTING C:
-1.					2.
-add()					search_and_reference()
-{					{
-    alloc_object			    rcu_read_lock();
-    ...					    search_for_element
-    atomic_set(&el->rc, 1);		    atomic_inc(&el->rc);
-    spin_lock(&list_lock);		    ...
-
-    add_element				    rcu_read_unlock();
-    ...					}
-    spin_unlock(&list_lock);		4.
-}					delete()
-3.					{
-release_referenced()			    spin_lock(&list_lock);
-{					    ...
-    ...					    remove_element
-    if (atomic_dec_and_test(&el->rc))       spin_unlock(&list_lock);
-        kfree(el);			    ...
-    ...                                     call_rcu(&el->head, el_free);
-} 					    ...
-5.					}
-void el_free(struct rcu_head *rhp)
-{
-    release_referenced();
-}
-
-The key point is that the initial reference added by add() is not removed
-until after a grace period has elapsed following removal.  This means that
-search_and_reference() cannot find this element, which means that the value
-of el->rc cannot increase.  Thus, once it reaches zero, there are no
-readers that can or ever will be able to reference the element.  The
-element can therefore safely be freed.  This in turn guarantees that if
-any reader finds the element, that reader may safely acquire a reference
-without checking the value of the reference counter.
-
-A clear advantage of the RCU-based pattern in listing C over the one
-in listing B is that any call to search_and_reference() that locates
-a given object will succeed in obtaining a reference to that object,
-even given a concurrent invocation of delete() for that same object.
-Similarly, a clear advantage of both listings B and C over listing A is
-that a call to delete() is not delayed even if there are an arbitrarily
-large number of calls to search_and_reference() searching for the same
-object that delete() was invoked on.  Instead, all that is delayed is
-the eventual invocation of kfree(), which is usually not a problem on
-modern computer systems, even the small ones.
-
-In cases where delete() can sleep, synchronize_rcu() can be called from
-delete(), so that el_free() can be subsumed into delete as follows:
-
-4.
-delete()
-{
-    spin_lock(&list_lock);
-    ...
-    remove_element
-    spin_unlock(&list_lock);
-    ...
-    synchronize_rcu();
-    if (atomic_dec_and_test(&el->rc))
-    	kfree(el);
-    ...
-}
-
-As additional examples in the kernel, the pattern in listing C is used by
-reference counting of struct pid, while the pattern in listing B is used by
-struct posix_acl.
diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
new file mode 100644
index 0000000..c9ab6af
--- /dev/null
+++ b/Documentation/RCU/stallwarn.rst
@@ -0,0 +1,336 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==============================
+Using RCU's CPU Stall Detector
+==============================
+
+This document first discusses what sorts of issues RCU's CPU stall
+detector can locate, and then discusses kernel parameters and Kconfig
+options that can be used to fine-tune the detector's operation.  Finally,
+this document explains the stall detector's "splat" format.
+
+
+What Causes RCU CPU Stall Warnings?
+===================================
+
+So your kernel printed an RCU CPU stall warning.  The next question is
+"What caused it?"  The following problems can result in RCU CPU stall
+warnings:
+
+-	A CPU looping in an RCU read-side critical section (see the
+	example later in this section).
+
+-	A CPU looping with interrupts disabled.
+
+-	A CPU looping with preemption disabled.
+
+-	A CPU looping with bottom halves disabled.
+
+-	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
+	without invoking schedule().  If the looping in the kernel is
+	really expected and desirable behavior, you might need to add
+	some calls to cond_resched().
+
+-	Booting Linux using a console connection that is too slow to
+	keep up with the boot-time console-message rate.  For example,
+	a 115Kbaud serial console can be -way- too slow to keep up
+	with boot-time message rates, and will frequently result in
+	RCU CPU stall warning messages, especially if you have added
+	debug printk()s.
+
+-	Anything that prevents RCU's grace-period kthreads from running.
+	This can result in the "All QSes seen" console-log message.
+	This message will include information on when the kthread last
+	ran and how often it should be expected to run.  It can also
+	result in the ``rcu_.*kthread starved for`` console-log message,
+	which will include additional debugging information.
+
+-	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
+	happen to preempt a low-priority task in the middle of an RCU
+	read-side critical section.   This is especially damaging if
+	that low-priority task is not permitted to run on any other CPU,
+	in which case the next RCU grace period can never complete, which
+	will eventually cause the system to run out of memory and hang.
+	While the system is in the process of running itself out of
+	memory, you might see stall-warning messages.
+
+-	A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
+	is running at a higher priority than the RCU softirq threads.
+	This will prevent RCU callbacks from ever being invoked,
+	and in a CONFIG_PREEMPT_RCU kernel will further prevent
+	RCU grace periods from ever completing.  Either way, the
+	system will eventually run out of memory and hang.  In the
+	CONFIG_PREEMPT_RCU case, you might see stall-warning
+	messages.
+
+	You can use the rcutree.kthread_prio kernel boot parameter to
+	increase the scheduling priority of RCU's kthreads, which can
+	help avoid this problem.  However, please note that doing this
+	can increase your system's context-switch rate and thus degrade
+	performance.
+
+-	A periodic interrupt whose handler takes longer than the time
+	interval between successive pairs of interrupts.  This can
+	prevent RCU's kthreads and softirq handlers from running.
+	Note that certain high-overhead debugging options, for example
+	the function_graph tracer, can result in the interrupt handler taking
+	considerably longer than normal, which can in turn result in
+	RCU CPU stall warnings.
+
+-	Testing a workload on a fast system, tuning the stall-warning
+	timeout down to just barely avoid RCU CPU stall warnings, and then
+	running the same workload with the same stall-warning timeout on a
+	slow system.  Note that thermal throttling and on-demand governors
+	can cause a single system to be sometimes fast and sometimes slow!
+
+-	A hardware or software issue shuts off the scheduler-clock
+	interrupt on a CPU that is not in dyntick-idle mode.  This
+	problem really has happened, and seems to be most likely to
+	result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.
+
+-	A hardware or software issue that prevents time-based wakeups
+	from occurring.  These issues can range from misconfigured or
+	buggy timer hardware through bugs in the interrupt or exception
+	path (whether hardware, firmware, or software) through bugs
+	in Linux's timer subsystem through bugs in the scheduler, and,
+	yes, even including bugs in RCU itself.
+
+-	A bug in the RCU implementation.
+
+-	A hardware failure.  This is quite unlikely, but has occurred
+	at least once in real life.  A CPU failed in a running system,
+	becoming unresponsive, but not causing an immediate crash.
+	This resulted in a series of RCU CPU stall warnings, eventually
+	leading to the realization that the CPU had failed.
+
+The RCU, RCU-sched, and RCU-tasks implementations have CPU stall warnings.
+Note that SRCU does -not- have CPU stall warnings.  Please note that
+RCU only detects CPU stalls when there is a grace period in progress.
+No grace period, no CPU stall warnings.
+
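+As a deliberately buggy illustration of the first item in the list
+above (the flag name here is hypothetical), the following loop never
+leaves its RCU read-side critical section, so it prevents the current
+grace period from completing and will eventually trigger a stall
+warning::
+
+	rcu_read_lock();
+	while (!READ_ONCE(done))	/* Bug: never leaves the critical section. */
+		cpu_relax();
+	rcu_read_unlock();
+
+Restructuring the code so that the wait happens outside the critical
+section, with a cond_resched() on each pass through the loop, would
+allow quiescent states to be reported and avoid the warning.
+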
+To diagnose the cause of the stall, inspect the stack traces.
+The offending function will usually be near the top of the stack.
+If you have a series of stall warnings from a single extended stall,
+comparing the stack traces can often help determine where the stall
+is occurring, which will usually be in the function nearest the top of
+that portion of the stack which remains the same from trace to trace.
+If you can reliably trigger the stall, ftrace can be quite helpful.
+
+RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
+and with RCU's event tracing.  For information on RCU's event tracing,
+see include/trace/events/rcu.h.
+
+
+Fine-Tuning the RCU CPU Stall Detector
+======================================
+
+The rcupdate.rcu_cpu_stall_suppress module parameter disables RCU's
+CPU stall detector, which detects conditions that unduly delay RCU grace
+periods.  This module parameter enables CPU stall detection by default,
+but may be overridden via boot-time parameter or at runtime via sysfs.
+The stall detector's idea of what constitutes "unduly delayed" is
+controlled by a set of kernel configuration variables and cpp macros:
+
+CONFIG_RCU_CPU_STALL_TIMEOUT
+----------------------------
+
+	This kernel configuration parameter defines the period of time
+	that RCU will wait from the beginning of a grace period until it
+	issues an RCU CPU stall warning.  This time period is normally
+	21 seconds.
+
+	This configuration parameter may be changed at runtime via the
+	/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout file; however,
+	this parameter is checked only at the beginning of a cycle.
+	So if you are 10 seconds into a 40-second stall, setting this
+	sysfs parameter to (say) five will shorten the timeout for the
+	-next- stall, or for a later warning for the current stall
+	(assuming the stall lasts long enough).  It will not affect the
+	timing of the very next warning for the current stall.
+
+	Stall-warning messages may be enabled and disabled completely via
+	/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
+
+RCU_STALL_DELAY_DELTA
+---------------------
+
+	Although the lockdep facility is extremely useful, it does add
+	some overhead.  Therefore, under CONFIG_PROVE_RCU, the
+	RCU_STALL_DELAY_DELTA macro allows five extra seconds before
+	giving an RCU CPU stall warning message.  (This is a cpp
+	macro, not a kernel configuration parameter.)
+
+RCU_STALL_RAT_DELAY
+-------------------
+
+	The CPU stall detector tries to make the offending CPU print its
+	own warnings, as this often gives better-quality stack traces.
+	However, if the offending CPU does not detect its own stall in
+	the number of jiffies specified by RCU_STALL_RAT_DELAY, then
+	some other CPU will complain.  This delay is normally set to
+	two jiffies.  (This is a cpp macro, not a kernel configuration
+	parameter.)
+
+rcupdate.rcu_task_stall_timeout
+-------------------------------
+
+	This boot/sysfs parameter controls the RCU-tasks stall warning
+	interval.  A value of zero or less suppresses RCU-tasks stall
+	warnings.  A positive value sets the stall-warning interval
+	in seconds.  An RCU-tasks stall warning starts with the line::
+
+		INFO: rcu_tasks detected stalls on tasks:
+
+	And continues with the output of sched_show_task() for each
+	task stalling the current RCU-tasks grace period.
+
+
+Interpreting RCU's CPU Stall-Detector "Splats"
+==============================================
+
+For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
+it will print a message similar to the following::
+
+	INFO: rcu_sched detected stalls on CPUs/tasks:
+	2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
+	16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
+	(detected by 32, t=2603 jiffies, g=7075, q=625)
+
+This message indicates that CPU 32 detected that CPUs 2 and 16 were both
+causing stalls, and that the stall was affecting RCU-sched.  This message
+will normally be followed by stack dumps for each CPU.  Please note that
+PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
+the tasks will be indicated by PID, for example, "P3421".  It is even
+possible for an rcu_state stall to be caused by both CPUs -and- tasks,
+in which case the offending CPUs and tasks will all be called out in the list.
+
+CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
+the RCU core for the past three grace periods.  In contrast, CPU 16's "(0
+ticks this GP)" indicates that this CPU has not taken any scheduling-clock
+interrupts during the current stalled grace period.
+
+The "idle=" portion of the message prints the dyntick-idle state.
+The hex number before the first "/" is the low-order 12 bits of the
+dynticks counter, which will have an even-numbered value if the CPU
+is in dyntick-idle mode and an odd-numbered value otherwise.  The hex
+number between the two "/"s is the value of the nesting, which will be
+a small non-negative number if in the idle loop (as shown above) and a
+very large positive number otherwise.
+
+The "softirq=" portion of the message tracks the number of RCU softirq
+handlers that the stalled CPU has executed.  The number before the "/"
+is the number that had executed since boot at the time that this CPU
+last noted the beginning of a grace period, which might be the current
+(stalled) grace period, or it might be some earlier grace period (for
+example, if the CPU has been in dyntick-idle mode for an extended
+time period).  The number after the "/" is the number that have executed
+since boot until the current time.  If this latter number stays constant
+across repeated stall-warning messages, it is possible that RCU's softirq
+handlers are no longer able to execute on this CPU.  This can happen if
+the stalled CPU is spinning with interrupts disabled, or, in -rt
+kernels, if a high-priority process is starving RCU's softirq handler.
+
+The "fqs=" shows the number of force-quiescent-state idle/offline
+detection passes that the grace-period kthread has made across this
+CPU since the last time that this CPU noted the beginning of a grace
+period.
+
+The "detected by" line indicates which CPU detected the stall (in this
+case, CPU 32), how many jiffies have elapsed since the start of the grace
+period (in this case 2603), the grace-period sequence number (7075), and
+an estimate of the total number of RCU callbacks queued across all CPUs
+(625 in this case).
+
+In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
+for each CPU::
+
+	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1
+
+The "last_accelerate:" prints the low-order 16 bits (in hex) of the
+jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
+from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
+rcu_prepare_for_idle(). "dyntick_enabled: 1" indicates that dyntick-idle
+processing is enabled.
+
+If the grace period ends just as the stall warning starts printing,
+there will be a spurious stall-warning message, which will include
+the following::
+
+	INFO: Stall ended before state dump start
+
+This is rare, but does happen from time to time in real life.  It is also
+possible for a zero-jiffy stall to be flagged in this case, depending
+on how the stall warning and the grace-period initialization happen to
+interact.  Please note that it is not possible to entirely eliminate this
+sort of false positive without resorting to things like stop_machine(),
+which is overkill for this sort of problem.
+
+If all CPUs and tasks have passed through quiescent states, but the
+grace period has nevertheless failed to end, the stall-warning splat
+will include something like the following::
+
+	All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0
+
+The "23807" indicates that it has been more than 23 thousand jiffies
+since the grace-period kthread ran.  The "jiffies_till_next_fqs"
+indicates how frequently that kthread should run, giving the number
+of jiffies between force-quiescent-state scans, in this case three,
+which is way less than 23807.  Finally, the root rcu_node structure's
+->qsmask field is printed, which will normally be zero.
+
+If the relevant grace-period kthread has been unable to run prior to
+the stall warning, as was the case in the "All QSes seen" line above,
+the following additional line is printed::
+
+	kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
+
+Starving the grace-period kthreads of CPU time can of course result
+in RCU CPU stall warnings even when all CPUs and tasks have passed
+through the required quiescent states.  The "g" number shows the current
+grace-period sequence number, the "f" precedes the ->gp_flags command
+to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
+kthread is waiting for a short timeout, the "state" precedes the value of the
+task_struct ->state field, and the "cpu" indicates that the grace-period
+kthread last ran on CPU 5.
+
+
+Multiple Warnings From One Stall
+================================
+
+If a stall lasts long enough, multiple stall-warning messages will be
+printed for it.  The second and subsequent messages are printed at
+longer intervals, so that the time between (say) the first and second
+message will be about three times the interval between the beginning
+of the stall and the first message.
+
+
+Stall Warnings for Expedited Grace Periods
+==========================================
+
+If an expedited grace period detects a stall, it will place a message
+like the following in dmesg::
+
+	INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.
+
+This indicates that CPU 7 has failed to respond to a reschedule IPI.
+The three periods (".") following the CPU number indicate that the CPU
+is online (otherwise the first period would instead have been "O"),
+that the CPU was online at the beginning of the expedited grace period
+(otherwise the second period would have instead been "o"), and that
+the CPU has been online at least once since boot (otherwise, the third
+period would instead have been "N").  The number before the "jiffies"
+indicates that the expedited grace period has been going on for 21,119
+jiffies.  The number following the "s:" indicates that the expedited
+grace-period sequence counter is 73.  The fact that this last value is
+odd indicates that an expedited grace period is in flight.  The number
+following "root:" is a bitmask that indicates which children of the root
+rcu_node structure correspond to CPUs and/or tasks that are blocking the
+current expedited grace period.  If the tree had more than one level,
+additional hex numbers would be printed for the states of the other
+rcu_node structures in the tree.
+
+As with normal grace periods, PREEMPT_RCU builds can be stalled by
+tasks as well as by CPUs, and the tasks will be indicated by PID,
+for example, "P3421".
+
+It is entirely possible to see stall warnings from normal and from
+expedited grace periods at about the same time during the same run.
diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt
deleted file mode 100644
index a360a87..0000000
--- a/Documentation/RCU/stallwarn.txt
+++ /dev/null
@@ -1,316 +0,0 @@
-Using RCU's CPU Stall Detector
-
-This document first discusses what sorts of issues RCU's CPU stall
-detector can locate, and then discusses kernel parameters and Kconfig
-options that can be used to fine-tune the detector's operation.  Finally,
-this document explains the stall detector's "splat" format.
-
-
-What Causes RCU CPU Stall Warnings?
-
-So your kernel printed an RCU CPU stall warning.  The next question is
-"What caused it?"  The following problems can result in RCU CPU stall
-warnings:
-
-o	A CPU looping in an RCU read-side critical section.
-
-o	A CPU looping with interrupts disabled.
-
-o	A CPU looping with preemption disabled.
-
-o	A CPU looping with bottom halves disabled.
-
-o	For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
-	without invoking schedule().  If the looping in the kernel is
-	really expected and desirable behavior, you might need to add
-	some calls to cond_resched().
-
-o	Booting Linux using a console connection that is too slow to
-	keep up with the boot-time console-message rate.  For example,
-	a 115Kbaud serial console can be -way- too slow to keep up
-	with boot-time message rates, and will frequently result in
-	RCU CPU stall warning messages.  Especially if you have added
-	debug printk()s.
-
-o	Anything that prevents RCU's grace-period kthreads from running.
-	This can result in the "All QSes seen" console-log message.
-	This message will include information on when the kthread last
-	ran and how often it should be expected to run.  It can also
-	result in the "rcu_.*kthread starved for" console-log message,
-	which will include additional debugging information.
-
-o	A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
-	happen to preempt a low-priority task in the middle of an RCU
-	read-side critical section.   This is especially damaging if
-	that low-priority task is not permitted to run on any other CPU,
-	in which case the next RCU grace period can never complete, which
-	will eventually cause the system to run out of memory and hang.
-	While the system is in the process of running itself out of
-	memory, you might see stall-warning messages.
-
-o	A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that
-	is running at a higher priority than the RCU softirq threads.
-	This will prevent RCU callbacks from ever being invoked,
-	and in a CONFIG_PREEMPT_RCU kernel will further prevent
-	RCU grace periods from ever completing.  Either way, the
-	system will eventually run out of memory and hang.  In the
-	CONFIG_PREEMPT_RCU case, you might see stall-warning
-	messages.
-
-	You can use the rcutree.kthread_prio kernel boot parameter to
-	increase the scheduling priority of RCU's kthreads, which can
-	help avoid this problem.  However, please note that doing this
-	can increase your system's context-switch rate and thus degrade
-	performance.
-
-o	A periodic interrupt whose handler takes longer than the time
-	interval between successive pairs of interrupts.  This can
-	prevent RCU's kthreads and softirq handlers from running.
-	Note that certain high-overhead debugging options, for example
-	the function_graph tracer, can result in interrupt handler taking
-	considerably longer than normal, which can in turn result in
-	RCU CPU stall warnings.
-
-o	Testing a workload on a fast system, tuning the stall-warning
-	timeout down to just barely avoid RCU CPU stall warnings, and then
-	running the same workload with the same stall-warning timeout on a
-	slow system.  Note that thermal throttling and on-demand governors
-	can cause a single system to be sometimes fast and sometimes slow!
-
-o	A hardware or software issue shuts off the scheduler-clock
-	interrupt on a CPU that is not in dyntick-idle mode.  This
-	problem really has happened, and seems to be most likely to
-	result in RCU CPU stall warnings for CONFIG_NO_HZ_COMMON=n kernels.
-
-o	A bug in the RCU implementation.
-
-o	A hardware failure.  This is quite unlikely, but has occurred
-	at least once in real life.  A CPU failed in a running system,
-	becoming unresponsive, but not causing an immediate crash.
-	This resulted in a series of RCU CPU stall warnings, eventually
-	leading the realization that the CPU had failed.
-
-The RCU, RCU-sched, and RCU-tasks implementations have CPU stall warning.
-Note that SRCU does -not- have CPU stall warnings.  Please note that
-RCU only detects CPU stalls when there is a grace period in progress.
-No grace period, no CPU stall warnings.
-
-To diagnose the cause of the stall, inspect the stack traces.
-The offending function will usually be near the top of the stack.
-If you have a series of stall warnings from a single extended stall,
-comparing the stack traces can often help determine where the stall
-is occurring, which will usually be in the function nearest the top of
-that portion of the stack which remains the same from trace to trace.
-If you can reliably trigger the stall, ftrace can be quite helpful.
-
-RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
-and with RCU's event tracing.  For information on RCU's event tracing,
-see include/trace/events/rcu.h.
-
-
-Fine-Tuning the RCU CPU Stall Detector
-
-The rcuupdate.rcu_cpu_stall_suppress module parameter disables RCU's
-CPU stall detector, which detects conditions that unduly delay RCU grace
-periods.  This module parameter enables CPU stall detection by default,
-but may be overridden via boot-time parameter or at runtime via sysfs.
-The stall detector's idea of what constitutes "unduly delayed" is
-controlled by a set of kernel configuration variables and cpp macros:
-
-CONFIG_RCU_CPU_STALL_TIMEOUT
-
-	This kernel configuration parameter defines the period of time
-	that RCU will wait from the beginning of a grace period until it
-	issues an RCU CPU stall warning.  This time period is normally
-	21 seconds.
-
-	This configuration parameter may be changed at runtime via the
-	/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout, however
-	this parameter is checked only at the beginning of a cycle.
-	So if you are 10 seconds into a 40-second stall, setting this
-	sysfs parameter to (say) five will shorten the timeout for the
-	-next- stall, or the following warning for the current stall
-	(assuming the stall lasts long enough).  It will not affect the
-	timing of the next warning for the current stall.
-
-	Stall-warning messages may be enabled and disabled completely via
-	/sys/module/rcupdate/parameters/rcu_cpu_stall_suppress.
-
-RCU_STALL_DELAY_DELTA
-
-	Although the lockdep facility is extremely useful, it does add
-	some overhead.  Therefore, under CONFIG_PROVE_RCU, the
-	RCU_STALL_DELAY_DELTA macro allows five extra seconds before
-	giving an RCU CPU stall warning message.  (This is a cpp
-	macro, not a kernel configuration parameter.)
-
-RCU_STALL_RAT_DELAY
-
-	The CPU stall detector tries to make the offending CPU print its
-	own warnings, as this often gives better-quality stack traces.
-	However, if the offending CPU does not detect its own stall in
-	the number of jiffies specified by RCU_STALL_RAT_DELAY, then
-	some other CPU will complain.  This delay is normally set to
-	two jiffies.  (This is a cpp macro, not a kernel configuration
-	parameter.)
-
-rcupdate.rcu_task_stall_timeout
-
-	This boot/sysfs parameter controls the RCU-tasks stall warning
-	interval.  A value of zero or less suppresses RCU-tasks stall
-	warnings.  A positive value sets the stall-warning interval
-	in seconds.  An RCU-tasks stall warning starts with the line:
-
-		INFO: rcu_tasks detected stalls on tasks:
-
-	And continues with the output of sched_show_task() for each
-	task stalling the current RCU-tasks grace period.
-
-
-Interpreting RCU's CPU Stall-Detector "Splats"
-
-For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
-it will print a message similar to the following:
-
-	INFO: rcu_sched detected stalls on CPUs/tasks:
-	2-...: (3 GPs behind) idle=06c/0/0 softirq=1453/1455 fqs=0
-	16-...: (0 ticks this GP) idle=81c/0/0 softirq=764/764 fqs=0
-	(detected by 32, t=2603 jiffies, g=7075, q=625)
-
-This message indicates that CPU 32 detected that CPUs 2 and 16 were both
-causing stalls, and that the stall was affecting RCU-sched.  This message
-will normally be followed by stack dumps for each CPU.  Please note that
-PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
-the tasks will be indicated by PID, for example, "P3421".  It is even
-possible for an rcu_state stall to be caused by both CPUs -and- tasks,
-in which case the offending CPUs and tasks will all be called out in the list.
-
-CPU 2's "(3 GPs behind)" indicates that this CPU has not interacted with
-the RCU core for the past three grace periods.  In contrast, CPU 16's "(0
-ticks this GP)" indicates that this CPU has not taken any scheduling-clock
-interrupts during the current stalled grace period.
-
-The "idle=" portion of the message prints the dyntick-idle state.
-The hex number before the first "/" is the low-order 12 bits of the
-dynticks counter, which will have an even-numbered value if the CPU
-is in dyntick-idle mode and an odd-numbered value otherwise.  The hex
-number between the two "/"s is the value of the nesting, which will be
-a small non-negative number if in the idle loop (as shown above) and a
-very large positive number otherwise.
-
-The "softirq=" portion of the message tracks the number of RCU softirq
-handlers that the stalled CPU has executed.  The number before the "/"
-is the number that had executed since boot at the time that this CPU
-last noted the beginning of a grace period, which might be the current
-(stalled) grace period, or it might be some earlier grace period (for
-example, if the CPU might have been in dyntick-idle mode for an extended
-time period.  The number after the "/" is the number that have executed
-since boot until the current time.  If this latter number stays constant
-across repeated stall-warning messages, it is possible that RCU's softirq
-handlers are no longer able to execute on this CPU.  This can happen if
-the stalled CPU is spinning with interrupts are disabled, or, in -rt
-kernels, if a high-priority process is starving RCU's softirq handler.
-
-The "fqs=" shows the number of force-quiescent-state idle/offline
-detection passes that the grace-period kthread has made across this
-CPU since the last time that this CPU noted the beginning of a grace
-period.
-
-The "detected by" line indicates which CPU detected the stall (in this
-case, CPU 32), how many jiffies have elapsed since the start of the grace
-period (in this case 2603), the grace-period sequence number (7075), and
-an estimate of the total number of RCU callbacks queued across all CPUs
-(625 in this case).
-
-In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
-for each CPU:
-
-	0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1
-
-The "last_accelerate:" prints the low-order 16 bits (in hex) of the
-jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
-from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
-rcu_prepare_for_idle(). "dyntick_enabled: 1" indicates that dyntick-idle
-processing is enabled.
-
-If the grace period ends just as the stall warning starts printing,
-there will be a spurious stall-warning message, which will include
-the following:
-
-	INFO: Stall ended before state dump start
-
-This is rare, but does happen from time to time in real life.  It is also
-possible for a zero-jiffy stall to be flagged in this case, depending
-on how the stall warning and the grace-period initialization happen to
-interact.  Please note that it is not possible to entirely eliminate this
-sort of false positive without resorting to things like stop_machine(),
-which is overkill for this sort of problem.
-
-If all CPUs and tasks have passed through quiescent states, but the
-grace period has nevertheless failed to end, the stall-warning splat
-will include something like the following:
-
-	All QSes seen, last rcu_preempt kthread activity 23807 (4297905177-4297881370), jiffies_till_next_fqs=3, root ->qsmask 0x0
-
-The "23807" indicates that it has been more than 23 thousand jiffies
-since the grace-period kthread ran.  The "jiffies_till_next_fqs"
-indicates how frequently that kthread should run, giving the number
-of jiffies between force-quiescent-state scans, in this case three,
-which is way less than 23807.  Finally, the root rcu_node structure's
-->qsmask field is printed, which will normally be zero.
-
-If the relevant grace-period kthread has been unable to run prior to
-the stall warning, as was the case in the "All QSes seen" line above,
-the following additional line is printed:
-
-	kthread starved for 23807 jiffies! g7075 f0x0 RCU_GP_WAIT_FQS(3) ->state=0x1 ->cpu=5
-
-Starving the grace-period kthreads of CPU time can of course result
-in RCU CPU stall warnings even when all CPUs and tasks have passed
-through the required quiescent states.  The "g" number shows the current
-grace-period sequence number, the "f" precedes the ->gp_flags command
-to the grace-period kthread, the "RCU_GP_WAIT_FQS" indicates that the
-kthread is waiting for a short timeout, the "state" precedes value of the
-task_struct ->state field, and the "cpu" indicates that the grace-period
-kthread last ran on CPU 5.
-
-
-Multiple Warnings From One Stall
-
-If a stall lasts long enough, multiple stall-warning messages will be
-printed for it.  The second and subsequent messages are printed at
-longer intervals, so that the time between (say) the first and second
-message will be about three times the interval between the beginning
-of the stall and the first message.
-
-
-Stall Warnings for Expedited Grace Periods
-
-If an expedited grace period detects a stall, it will place a message
-like the following in dmesg:
-
-	INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 7-... } 21119 jiffies s: 73 root: 0x2/.
-
-This indicates that CPU 7 has failed to respond to a reschedule IPI.
-The three periods (".") following the CPU number indicate that the CPU
-is online (otherwise the first period would instead have been "O"),
-that the CPU was online at the beginning of the expedited grace period
-(otherwise the second period would have instead been "o"), and that
-the CPU has been online at least once since boot (otherwise, the third
-period would instead have been "N").  The number before the "jiffies"
-indicates that the expedited grace period has been going on for 21,119
-jiffies.  The number following the "s:" indicates that the expedited
-grace-period sequence counter is 73.  The fact that this last value is
-odd indicates that an expedited grace period is in flight.  The number
-following "root:" is a bitmask that indicates which children of the root
-rcu_node structure correspond to CPUs and/or tasks that are blocking the
-current expedited grace period.  If the tree had more than one level,
-additional hex numbers would be printed for the states of the other
-rcu_node structures in the tree.
-
-As with normal grace periods, PREEMPT_RCU builds can be stalled by
-tasks as well as by CPUs, and that the tasks will be indicated by PID,
-for example, "P3421".
-
-It is entirely possible to see stall warnings from normal and from
-expedited grace periods at about the same time during the same run.
diff --git a/Documentation/RCU/torture.rst b/Documentation/RCU/torture.rst
new file mode 100644
index 0000000..a901477
--- /dev/null
+++ b/Documentation/RCU/torture.rst
@@ -0,0 +1,293 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+RCU Torture Test Operation
+==========================
+
+
+CONFIG_RCU_TORTURE_TEST
+=======================
+
+The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
+implementations.  It creates an rcutorture kernel module that can
+be loaded to run a torture test.  The test periodically outputs
+status messages via printk(), which can be examined via the dmesg
+command (perhaps grepping for "torture").  The test is started
+when the module is loaded, and stops when the module is unloaded.
+
+Module parameters are prefixed by "rcutorture." in
+Documentation/admin-guide/kernel-parameters.txt.
+
+Output
+======
+
+The statistics output is as follows::
+
+	rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
+	rcu-torture: rtc:           (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
+	rcu-torture: Reader Pipe:  727860534 34213 0 0 0 0 0 0 0 0 0
+	rcu-torture: Reader Batch:  727877838 17003 0 0 0 0 0 0 0 0 0
+	rcu-torture: Free-Block Circulation:  155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
+	rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
+
+The command "dmesg | grep torture:" will extract this information on
+most systems.  On more esoteric configurations, it may be necessary to
+use other commands to access the output of the printk()s used by
+the RCU torture test.  The printk()s use KERN_ALERT, so they should
+be evident.  ;-)
+
+The first and last lines show the rcutorture module parameters, and the
+last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
+automatic determination as to whether RCU operated correctly.
+
+The entries are as follows:
+
+*	"rtc": The hexadecimal address of the structure currently visible
+	to readers.
+
+*	"ver": The number of times since boot that the RCU writer task
+	has changed the structure visible to readers.
+
+*	"tfle": If non-zero, indicates that the "torture freelist"
+	containing structures to be placed into the "rtc" area is empty.
+	This condition is important, since it can fool you into thinking
+	that RCU is working when it is not.  :-/
+
+*	"rta": Number of structures allocated from the torture freelist.
+
+*	"rtaf": Number of allocations from the torture freelist that have
+	failed due to the list being empty.  It is not unusual for this
+	to be non-zero, but it is bad for it to be a large fraction of
+	the value indicated by "rta".
+
+*	"rtf": Number of frees into the torture freelist.
+
+*	"rtmbe": A non-zero value indicates that rcutorture believes that
+	rcu_assign_pointer() and rcu_dereference() are not working
+	correctly.  This value should be zero.
+
+*	"rtbe": A non-zero value indicates that one of the rcu_barrier()
+	family of functions is not working correctly.
+
+*	"rtbke": rcutorture was unable to create the real-time kthreads
+	used to force RCU priority inversion.  This value should be zero.
+
+*	"rtbre": Although rcutorture successfully created the kthreads
+	used to force RCU priority inversion, it was unable to set them
+	to the real-time priority level of 1.  This value should be zero.
+
+*	"rtbf": The number of times that RCU priority boosting failed
+	to resolve RCU priority inversion.
+
+*	"rtb": The number of times that rcutorture attempted to force
+	an RCU priority inversion condition.  If you are testing RCU
+	priority boosting via the "test_boost" module parameter, this
+	value should be non-zero.
+
+*	"nt": The number of times rcutorture ran RCU read-side code from
+	within a timer handler.  This value should be non-zero only
+	if you specified the "irqreader" module parameter.
+
+*	"Reader Pipe": Histogram of "ages" of structures seen by readers.
+	If any entries past the first two are non-zero, RCU is broken.
+	And rcutorture prints the error flag string "!!!" to make sure
+	you notice.  The age of a newly allocated structure is zero;
+	it becomes one when removed from reader visibility, and is
+	incremented once per grace period subsequently -- and is freed
+	after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.
+
+	The output displayed above was taken from a correctly working
+	RCU.  If you want to see what it looks like when broken, break
+	it yourself.  ;-)
+
+*	"Reader Batch": Another histogram of "ages" of structures seen
+	by readers, but in terms of counter flips (or batches) rather
+	than in terms of grace periods.  The legal number of non-zero
+	entries is again two.  The reason for this separate view is that
+	it is sometimes easier to get the third entry to show up in the
+	"Reader Batch" list than in the "Reader Pipe" list.
+
+*	"Free-Block Circulation": Shows the number of torture structures
+	that have reached a given point in the pipeline.  The first element
+	should closely correspond to the number of structures allocated,
+	the second to the number that have been removed from reader view,
+	and all but the last remaining to the corresponding number of
+	passes through a grace period.  The last entry should be zero,
+	as it is only incremented if a torture structure's counter
+	somehow gets incremented farther than it should.
+
+Different implementations of RCU can provide implementation-specific
+additional information.  For example, Tree SRCU provides the following
+additional line::
+
+	srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)
+
+This line shows the per-CPU counter state, in this case for Tree SRCU
+using a dynamically allocated srcu_struct (hence "srcud-" rather than
+"srcu-").  The numbers in parentheses are the values of the "old" and
+"current" counters for the corresponding CPU.  The "idx" value maps the
+"old" and "current" values to the underlying array, and is useful for
+debugging.  The final "T" entry contains the totals of the counters.
+
+Usage on Specific Kernel Builds
+===============================
+
+It is sometimes desirable to torture RCU on a specific kernel build,
+for example, when preparing to put that kernel build into production.
+In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
+so that the test can be started using modprobe and terminated using rmmod.
+
+For example, the following script may be used to torture RCU::
+
+	#!/bin/sh
+
+	modprobe rcutorture
+	sleep 3600
+	rmmod rcutorture
+	dmesg | grep torture:
+
+The output can be manually inspected for the error flag of "!!!".
+One could of course create a more elaborate script that automatically
+checked for such errors.  The "rmmod" command forces a "SUCCESS",
+"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
+two are self-explanatory, while the last indicates that while there
+were no RCU failures, CPU-hotplug problems were detected.
+
+
+Usage on Mainline Kernels
+=========================
+
+When using rcutorture to test changes to RCU itself, it is often
+necessary to build a number of kernels in order to test that change
+across a broad range of combinations of the relevant Kconfig options
+and of the relevant kernel boot parameters.  In this situation, use
+of modprobe and rmmod can be quite time-consuming and error-prone.
+
+Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
+script is available for mainline testing for x86, arm64, and
+powerpc.  By default, it will run the series of tests specified by
+tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
+running for 30 minutes within a guest OS using a minimal userspace
+supplied by an automatically generated initrd.  After the tests are
+complete, the resulting build products and console output are analyzed
+for errors and the results of the runs are summarized.
+
+On larger systems, rcutorture testing can be accelerated by passing the
+--cpus argument to kvm.sh.  For example, on a 64-CPU system, "--cpus 43"
+would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
+complete all the scenarios in two batches, reducing the time to complete
+from about eight hours to about one hour (not counting the time to build
+the sixteen kernels).  The "--dryrun sched" argument will not run tests,
+but rather tell you how the tests would be scheduled into batches.  This
+can be useful when working out how many CPUs to specify in the --cpus
+argument.
+
+Not all changes require that all scenarios be run.  For example, a change
+to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
+--configs argument to kvm.sh as follows:  "--configs 'SRCU-N SRCU-P'".
+Large systems can run multiple copies of the full set of scenarios,
+for example, a system with 448 hardware threads can run five instances
+of the full set concurrently.  To make this happen::
+
+	kvm.sh --cpus 448 --configs '5*CFLIST'
+
+Alternatively, such a system can run 56 concurrent instances of a single
+eight-CPU scenario::
+
+	kvm.sh --cpus 448 --configs '56*TREE04'
+
+Or 28 concurrent instances of each of two eight-CPU scenarios::
+
+	kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'
+
+Of course, each concurrent instance will use memory, which can be
+limited using the --memory argument, which defaults to 512M.  Small
+values for memory may require disabling the callback-flooding tests
+using the --bootargs parameter discussed below.
+
+Sometimes additional debugging is useful, and in such cases the --kconfig
+parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_KASAN=y'``.
+
+Kernel boot arguments can also be supplied, for example, to control
+rcutorture's module parameters.  For example, to test a change to RCU's
+CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
+This will of course result in the scripting reporting a failure, namely
+the resulting RCU CPU stall warning.  As noted above, reducing memory may
+require disabling rcutorture's callback-flooding tests::
+
+	kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
+		--bootargs 'rcutorture.fwd_progress=0'
+
+Sometimes all that is needed is a full set of kernel builds.  This is
+what the --buildonly argument does.
+
+Finally, the --trust-make argument allows each kernel build to reuse what
+it can from the previous kernel build.
+
+There are additional, more arcane arguments that are documented in the
+source code of the kvm.sh script.
+
+If a run contains failures, the number of buildtime and runtime failures
+is listed at the end of the kvm.sh output, which you really should redirect
+to a file.  The build products and console output of each run are kept in
+tools/testing/selftests/rcutorture/res in timestamped directories.  A
+given directory can be supplied to kvm-find-errors.sh in order to have
+it cycle you through summaries of errors and full error logs.  For example::
+
+	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
+		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
+
+However, it is often more convenient to access the files directly.
+Files pertaining to all scenarios in a run reside in the top-level
+directory (2020.01.20-15.54.23 in the example above), while per-scenario
+files reside in a subdirectory named after the scenario (for example,
+"TREE04").  If a given scenario ran more than once (as in "--configs
+'56*TREE04'" above), the directories corresponding to the second and
+subsequent runs of that scenario include a sequence number, for example,
+"TREE04.2", "TREE04.3", and so on.
+
+The most frequently used file in the top-level directory is testid.txt.
+If the test ran in a git repository, then this file contains the commit
+that was tested and any uncommitted changes in diff format.
+
+The most frequently used files in each per-scenario-run directory are:
+
+.config:
+	This file contains the Kconfig options.
+
+Make.out:
+	This contains build output for a specific scenario.
+
+console.log:
+	This contains the console output for a specific scenario.
+	This file may be examined once the kernel has booted, but
+	it might not exist if the build failed.
+
+vmlinux:
+	This contains the kernel, which can be useful with tools like
+	objdump and gdb.
+
+A number of additional files are available, but are less frequently used.
+Many are intended for debugging of rcutorture itself or of its scripting.
+
+As of v5.4, a successful run with the default set of scenarios produces
+the following summary at the end of the run on a 12-CPU system::
+
+    SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
+    SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
+    SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
+    SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
+    TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
+    TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
+    TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
+    TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
+    TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
+    TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
+    TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
+    TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
+    CPU count limited from 16 to 12
+    TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
+    TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
+    TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
+    CPU count limited from 16 to 12
+    TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
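
For example, these options compose.  A combined invocation might look as
follows (a sketch only: the CPU count, scenario list, kconfig fragment, and
result timestamp are illustrative, and the commands are assumed to be run
from the top of a kernel source tree)::

	# Debug-enabled run of two eight-CPU scenarios, 28 instances each,
	# with the summary output captured for later inspection.
	tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 448 \
		--configs '28*TREE03 28*TREE04' \
		--kconfig 'CONFIG_KASAN=y' > kvm.out 2>&1

	# Cycle through error summaries and full logs for that run,
	# substituting the timestamped directory kvm.sh just created.
	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
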
diff --git a/Documentation/RCU/torture.txt b/Documentation/RCU/torture.txt
deleted file mode 100644
index af712a3..0000000
--- a/Documentation/RCU/torture.txt
+++ /dev/null
@@ -1,282 +0,0 @@
-RCU Torture Test Operation
-
-
-CONFIG_RCU_TORTURE_TEST
-
-The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
-implementations.  It creates an rcutorture kernel module that can
-be loaded to run a torture test.  The test periodically outputs
-status messages via printk(), which can be examined via the dmesg
-command (perhaps grepping for "torture").  The test is started
-when the module is loaded, and stops when the module is unloaded.
-
-Module parameters are prefixed by "rcutorture." in
-Documentation/admin-guide/kernel-parameters.txt.
-
-OUTPUT
-
-The statistics output is as follows:
-
-	rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
-	rcu-torture: rtc:           (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
-	rcu-torture: Reader Pipe:  727860534 34213 0 0 0 0 0 0 0 0 0
-	rcu-torture: Reader Batch:  727877838 17003 0 0 0 0 0 0 0 0 0
-	rcu-torture: Free-Block Circulation:  155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
-	rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
-
-The command "dmesg | grep torture:" will extract this information on
-most systems.  On more esoteric configurations, it may be necessary to
-use other commands to access the output of the printk()s used by
-the RCU torture test.  The printk()s use KERN_ALERT, so they should
-be evident.  ;-)
-
-The first and last lines show the rcutorture module parameters, and the
-last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
-automatic determination as to whether RCU operated correctly.
-
-The entries are as follows:
-
-o	"rtc": The hexadecimal address of the structure currently visible
-	to readers.
-
-o	"ver": The number of times since boot that the RCU writer task
-	has changed the structure visible to readers.
-
-o	"tfle": If non-zero, indicates that the "torture freelist"
-	containing structures to be placed into the "rtc" area is empty.
-	This condition is important, since it can fool you into thinking
-	that RCU is working when it is not.  :-/
-
-o	"rta": Number of structures allocated from the torture freelist.
-
-o	"rtaf": Number of allocations from the torture freelist that have
-	failed due to the list being empty.  It is not unusual for this
-	to be non-zero, but it is bad for it to be a large fraction of
-	the value indicated by "rta".
-
-o	"rtf": Number of frees into the torture freelist.
-
-o	"rtmbe": A non-zero value indicates that rcutorture believes that
-	rcu_assign_pointer() and rcu_dereference() are not working
-	correctly.  This value should be zero.
-
-o	"rtbe": A non-zero value indicates that one of the rcu_barrier()
-	family of functions is not working correctly.
-
-o	"rtbke": rcutorture was unable to create the real-time kthreads
-	used to force RCU priority inversion.  This value should be zero.
-
-o	"rtbre": Although rcutorture successfully created the kthreads
-	used to force RCU priority inversion, it was unable to set them
-	to the real-time priority level of 1.  This value should be zero.
-
-o	"rtbf": The number of times that RCU priority boosting failed
-	to resolve RCU priority inversion.
-
-o	"rtb": The number of times that rcutorture attempted to force
-	an RCU priority inversion condition.  If you are testing RCU
-	priority boosting via the "test_boost" module parameter, this
-	value should be non-zero.
-
-o	"nt": The number of times rcutorture ran RCU read-side code from
-	within a timer handler.  This value should be non-zero only
-	if you specified the "irqreader" module parameter.
-
-o	"Reader Pipe": Histogram of "ages" of structures seen by readers.
-	If any entries past the first two are non-zero, RCU is broken.
-	And rcutorture prints the error flag string "!!!" to make sure
-	you notice.  The age of a newly allocated structure is zero,
-	it becomes one when removed from reader visibility, and is
-	incremented once per grace period subsequently -- and is freed
-	after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.
-
-	The output displayed above was taken from a correctly working
-	RCU.  If you want to see what it looks like when broken, break
-	it yourself.  ;-)
-
-o	"Reader Batch": Another histogram of "ages" of structures seen
-	by readers, but in terms of counter flips (or batches) rather
-	than in terms of grace periods.  The legal number of non-zero
-	entries is again two.  The reason for this separate view is that
-	it is sometimes easier to get the third entry to show up in the
-	"Reader Batch" list than in the "Reader Pipe" list.
-
-o	"Free-Block Circulation": Shows the number of torture structures
-	that have reached a given point in the pipeline.  The first element
-	should closely correspond to the number of structures allocated,
-	the second to the number that have been removed from reader view,
-	and all but the last remaining to the corresponding number of
-	passes through a grace period.  The last entry should be zero,
-	as it is only incremented if a torture structure's counter
-	somehow gets incremented farther than it should.
-
-Different implementations of RCU can provide implementation-specific
-additional information.  For example, Tree SRCU provides the following
-additional line:
-
-	srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)
-
-This line shows the per-CPU counter state, in this case for Tree SRCU
-using a dynamically allocated srcu_struct (hence "srcud-" rather than
-"srcu-").  The numbers in parentheses are the values of the "old" and
-"current" counters for the corresponding CPU.  The "idx" value maps the
-"old" and "current" values to the underlying array, and is useful for
-debugging.  The final "T" entry contains the totals of the counters.
-
-
-USAGE ON SPECIFIC KERNEL BUILDS
-
-It is sometimes desirable to torture RCU on a specific kernel build,
-for example, when preparing to put that kernel build into production.
-In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
-so that the test can be started using modprobe and terminated using rmmod.
-
-For example, the following script may be used to torture RCU:
-
-	#!/bin/sh
-
-	modprobe rcutorture
-	sleep 3600
-	rmmod rcutorture
-	dmesg | grep torture:
-
-The output can be manually inspected for the error flag of "!!!".
-One could of course create a more elaborate script that automatically
-checked for such errors.  The "rmmod" command forces a "SUCCESS",
-"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed.  The first
-two are self-explanatory, while the last indicates that while there
-were no RCU failures, CPU-hotplug problems were detected.
-
-
-USAGE ON MAINLINE KERNELS
-
-When using rcutorture to test changes to RCU itself, it is often
-necessary to build a number of kernels in order to test that change
-across a broad range of combinations of the relevant Kconfig options
-and of the relevant kernel boot parameters.  In this situation, use
-of modprobe and rmmod can be quite time-consuming and error-prone.
-
-Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
-script is available for mainline testing for x86, arm64, and
-powerpc.  By default, it will run the series of tests specified by
-tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
-running for 30 minutes within a guest OS using a minimal userspace
-supplied by an automatically generated initrd.  After the tests are
-complete, the resulting build products and console output are analyzed
-for errors and the results of the runs are summarized.
-
-On larger systems, rcutorture testing can be accelerated by passing the
---cpus argument to kvm.sh.  For example, on a 64-CPU system, "--cpus 43"
-would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
-complete all the scenarios in two batches, reducing the time to complete
-from about eight hours to about one hour (not counting the time to build
-the sixteen kernels).  The "--dryrun sched" argument will not run tests,
-but rather tell you how the tests would be scheduled into batches.  This
-can be useful when working out how many CPUs to specify in the --cpus
-argument.
-
-Not all changes require that all scenarios be run.  For example, a change
-to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
---configs argument to kvm.sh as follows:  "--configs 'SRCU-N SRCU-P'".
-Large systems can run multiple copies of of the full set of scenarios,
-for example, a system with 448 hardware threads can run five instances
-of the full set concurrently.  To make this happen:
-
-	kvm.sh --cpus 448 --configs '5*CFLIST'
-
-Alternatively, such a system can run 56 concurrent instances of a single
-eight-CPU scenario:
-
-	kvm.sh --cpus 448 --configs '56*TREE04'
-
-Or 28 concurrent instances of each of two eight-CPU scenarios:
-
-	kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'
-
-Of course, each concurrent instance will use memory, which can be
-limited using the --memory argument, which defaults to 512M.  Small
-values for memory may require disabling the callback-flooding tests
-using the --bootargs parameter discussed below.
-
-Sometimes additional debugging is useful, and in such cases the --kconfig
-parameter to kvm.sh may be used, for example, "--kconfig 'CONFIG_KASAN=y'".
-
-Kernel boot arguments can also be supplied, for example, to control
-rcutorture's module parameters.  For example, to test a change to RCU's
-CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
-This will of course result in the scripting reporting a failure, namely
-the resuling RCU CPU stall warning.  As noted above, reducing memory may
-require disabling rcutorture's callback-flooding tests:
-
-	kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
-		--bootargs 'rcutorture.fwd_progress=0'
-
-Sometimes all that is needed is a full set of kernel builds.  This is
-what the --buildonly argument does.
-
-Finally, the --trust-make argument allows each kernel build to reuse what
-it can from the previous kernel build.
-
-There are additional more arcane arguments that are documented in the
-source code of the kvm.sh script.
-
-If a run contains failures, the number of buildtime and runtime failures
-is listed at the end of the kvm.sh output, which you really should redirect
-to a file.  The build products and console output of each run is kept in
-tools/testing/selftests/rcutorture/res in timestamped directories.  A
-given directory can be supplied to kvm-find-errors.sh in order to have
-it cycle you through summaries of errors and full error logs.  For example:
-
-	tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
-		tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23
-
-However, it is often more convenient to access the files directly.
-Files pertaining to all scenarios in a run reside in the top-level
-directory (2020.01.20-15.54.23 in the example above), while per-scenario
-files reside in a subdirectory named after the scenario (for example,
-"TREE04").  If a given scenario ran more than once (as in "--configs
-'56*TREE04'" above), the directories corresponding to the second and
-subsequent runs of that scenario include a sequence number, for example,
-"TREE04.2", "TREE04.3", and so on.
-
-The most frequently used file in the top-level directory is testid.txt.
-If the test ran in a git repository, then this file contains the commit
-that was tested and any uncommitted changes in diff format.
-
-The most frequently used files in each per-scenario-run directory are:
-
-.config: This file contains the Kconfig options.
-
-Make.out: This contains build output for a specific scenario.
-
-console.log: This contains the console output for a specific scenario.
-	This file may be examined once the kernel has booted, but
-	it might not exist if the build failed.
-
-vmlinux: This contains the kernel, which can be useful with tools like
-	objdump and gdb.
-
-A number of additional files are available, but are less frequently used.
-Many are intended for debugging of rcutorture itself or of its scripting.
-
-As of v5.4, a successful run with the default set of scenarios produces
-the following summary at the end of the run on a 12-CPU system:
-
-SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
-SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
-SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
-SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
-TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
-TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
-TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
-TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
-TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
-TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
-TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
-TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
-CPU count limited from 16 to 12
-TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
-TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
-TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
-CPU count limited from 16 to 12
-TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011
diff --git a/Documentation/admin-guide/LSM/Yama.rst b/Documentation/admin-guide/LSM/Yama.rst
index d0a060d..d9cd937 100644
--- a/Documentation/admin-guide/LSM/Yama.rst
+++ b/Documentation/admin-guide/LSM/Yama.rst
@@ -19,9 +19,10 @@
 etc) to extract additional credentials and continue to expand the scope
 of their attack without resorting to user-assisted phishing.
 
-This is not a theoretical problem. SSH session hijacking
-(http://www.storm.net.nz/projects/7) and arbitrary code injection
-(http://c-skills.blogspot.com/2007/05/injectso.html) attacks already
+This is not a theoretical problem. `SSH session hijacking
+<https://www.blackhat.com/presentations/bh-usa-05/bh-us-05-boileau.pdf>`_
+and `arbitrary code injection
+<https://c-skills.blogspot.com/2007/05/injectso.html>`_ attacks already
 exist and remain possible if ptrace is allowed to operate as before.
 Since ptrace is not commonly used by non-developers and non-admins, system
 builders should be allowed the option to disable this debugging system.
diff --git a/Documentation/admin-guide/blockdev/drbd/index.rst b/Documentation/admin-guide/blockdev/drbd/index.rst
index 68ecd5c..561fd1e 100644
--- a/Documentation/admin-guide/blockdev/drbd/index.rst
+++ b/Documentation/admin-guide/blockdev/drbd/index.rst
@@ -10,7 +10,7 @@
   clusters and in this context, is a "drop-in" replacement for shared
   storage. Simplistically, you could see it as a network RAID 1.
 
-  Please visit http://www.drbd.org to find out more.
+  Please visit https://www.drbd.org to find out more.
 
 .. toctree::
    :maxdepth: 1
diff --git a/Documentation/admin-guide/blockdev/floppy.rst b/Documentation/admin-guide/blockdev/floppy.rst
index 4a8f31c..0328438 100644
--- a/Documentation/admin-guide/blockdev/floppy.rst
+++ b/Documentation/admin-guide/blockdev/floppy.rst
@@ -6,7 +6,7 @@
 =========
 
 A FAQ list may be found in the fdutils package (see below), and also
-at <http://fdutils.linux.lu/faq.html>.
+at <https://fdutils.linux.lu/faq.html>.
 
 
 LILO configuration options (Thinkpad users, read this)
@@ -220,11 +220,11 @@
 
 The latest version can be found at fdutils homepage:
 
- http://fdutils.linux.lu
+ https://fdutils.linux.lu
 
 The fdutils releases can be found at:
 
- http://fdutils.linux.lu/download.html
+ https://fdutils.linux.lu/download.html
 
  http://www.tux.org/pub/knaff/fdutils/
 
diff --git a/Documentation/admin-guide/bootconfig.rst b/Documentation/admin-guide/bootconfig.rst
index d6b3b77..a22024f 100644
--- a/Documentation/admin-guide/bootconfig.rst
+++ b/Documentation/admin-guide/bootconfig.rst
@@ -71,6 +71,16 @@
  foo = bar, baz
  foo = qux  # !ERROR! we can not re-define same key
 
+If you want to update the value, you must use the override operator
+``:=`` explicitly. For example::
+
+ foo = bar, baz
+ foo := qux
+
+then ``qux`` is assigned to the ``foo`` key. This is useful for
+overriding the default value by adding (partial) custom bootconfigs
+without parsing the default bootconfig.
+
 If you want to append the value to existing key as an array member,
 you can use ``+=`` operator. For example::
 
@@ -84,6 +94,7 @@
 
  foo = value1
  foo.bar = value2 # !ERROR! subkey "bar" and value "value1" can NOT co-exist
+ foo.bar := value2 # !ERROR! even with the override operator, this is NOT allowed.
 
 
 Comments
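
As a worked example of the override operator (a sketch: the tool path, initrd
location, and key name are assumptions, and the default bootconfig is assumed
to already define ``foo``)::

	# Build the userspace helper shipped with the kernel sources.
	make -C tools/bootconfig

	# Write a fragment that overrides the existing key.
	printf 'foo := qux\n' > override.bconf

	# Attach the fragment to the initrd; the kernel parses it when
	# "bootconfig" is present on the kernel command line.
	tools/bootconfig/bootconfig -a override.bconf /boot/initrd.img
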
diff --git a/Documentation/admin-guide/cgroup-v1/rdma.rst b/Documentation/admin-guide/cgroup-v1/rdma.rst
index 2fcb0a9b..e69369b 100644
--- a/Documentation/admin-guide/cgroup-v1/rdma.rst
+++ b/Documentation/admin-guide/cgroup-v1/rdma.rst
@@ -114,4 +114,4 @@
 
 (d) Delete resource limit::
 
-	echo echo mlx4_0 hca_handle=max hca_object=max > /sys/fs/cgroup/rdma/1/rdma.max
+	echo mlx4_0 hca_handle=max hca_object=max > /sys/fs/cgroup/rdma/1/rdma.max
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index d09471a..6be4378 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1274,6 +1274,10 @@
 		Amount of memory used for storing in-kernel data
 		structures.
 
+	  percpu
+		Amount of memory used for storing per-cpu kernel
+		data structures.
+
 	  sock
 		Amount of memory used in network transmission buffers
 
@@ -1483,8 +1487,7 @@
 ~~~~~~~~~~~~~~~~~~
 
   io.stat
-	A read-only nested-keyed file which exists on non-root
-	cgroups.
+	A read-only nested-keyed file.
 
 	Lines are keyed by $MAJ:$MIN device numbers and not ordered.
 	The following nested keys are defined.
@@ -1684,9 +1687,9 @@
 of the two is enforced.
 
 cgroup writeback requires explicit support from the underlying
-filesystem.  Currently, cgroup writeback is implemented on ext2, ext4
-and btrfs.  On other filesystems, all writeback IOs are attributed to
-the root cgroup.
+filesystem.  Currently, cgroup writeback is implemented on ext2, ext4,
+btrfs, f2fs, and xfs.  On other filesystems, all writeback IOs are 
+attributed to the root cgroup.
 
 There are inherent differences in memory and writeback management
 which affects how cgroup ownership is tracked.  Memory is tracked per
@@ -2043,7 +2046,7 @@
 ----
 
 The "rdma" controller regulates the distribution and accounting of
-of RDMA resources.
+RDMA resources.
 
 RDMA Interface Files
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/Documentation/admin-guide/cifs/todo.rst b/Documentation/admin-guide/cifs/todo.rst
index 084c25f..25f1157 100644
--- a/Documentation/admin-guide/cifs/todo.rst
+++ b/Documentation/admin-guide/cifs/todo.rst
@@ -98,7 +98,7 @@
 Known Bugs
 ==========
 
-See http://bugzilla.samba.org - search on product "CifsVFS" for
+See https://bugzilla.samba.org - search on product "CifsVFS" for
 current bug list.  Also check http://bugzilla.kernel.org (Product = File System, Component = CIFS)
 
 1) existing symbolic links (Windows reparse points) are recognized but
diff --git a/Documentation/admin-guide/cifs/usage.rst b/Documentation/admin-guide/cifs/usage.rst
index d3fb67b..7b32d50 100644
--- a/Documentation/admin-guide/cifs/usage.rst
+++ b/Documentation/admin-guide/cifs/usage.rst
@@ -16,8 +16,7 @@
 
 Please see
 MS-SMB2 (for detailed SMB2/SMB3/SMB3.1.1 protocol specification)
-http://protocolfreedom.org/ and
-http://samba.org/samba/PFIF/
+or https://samba.org/samba/PFIF/
 for more details.
 
 
@@ -32,7 +31,7 @@
 
 For Linux:
 
-1) Download the kernel (e.g. from http://www.kernel.org)
+1) Download the kernel (e.g. from https://www.kernel.org)
    and change directory into the top of the kernel directory tree
    (e.g. /usr/src/linux-2.5.73)
 2) make menuconfig (or make xconfig)
@@ -831,7 +830,7 @@
 Enabling Kerberos (extended security) works but requires version 1.2 or later
 of the helper program cifs.upcall to be present and to be configured in the
 /etc/request-key.conf file.  The cifs.upcall helper program is from the Samba
-project(http://www.samba.org). NTLM and NTLMv2 and LANMAN support do not
+project(https://www.samba.org). NTLM and NTLMv2 and LANMAN support do not
 require this helper. Note that NTLMv2 security (which does not require the
 cifs.upcall helper program), instead of using Kerberos, is sufficient for
 some use cases.
diff --git a/Documentation/admin-guide/cifs/winucase_convert.pl b/Documentation/admin-guide/cifs/winucase_convert.pl
index 322a9c8..993186b 100755
--- a/Documentation/admin-guide/cifs/winucase_convert.pl
+++ b/Documentation/admin-guide/cifs/winucase_convert.pl
@@ -16,7 +16,7 @@
 #   GNU General Public License for more details.
 #
 #   You should have received a copy of the GNU General Public License
-#   along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#   along with this program.  If not, see <https://www.gnu.org/licenses/>.
 #
 
 while(<>) {
diff --git a/Documentation/admin-guide/dell_rbu.rst b/Documentation/admin-guide/dell_rbu.rst
index 8d70e1f..2196caf 100644
--- a/Documentation/admin-guide/dell_rbu.rst
+++ b/Documentation/admin-guide/dell_rbu.rst
@@ -26,7 +26,7 @@
 OpenManage and Dell Update packages (DUP).
 
 Libsmbios can also be used to update BIOS on Dell systems go to
-http://linux.dell.com/libsmbios/ for details.
+https://linux.dell.com/libsmbios/ for details.
 
 Dell_RBU driver supports BIOS update using the monolithic image and packetized
 image methods. In case of monolithic the driver allocates a contiguous chunk
diff --git a/Documentation/admin-guide/device-mapper/dm-dust.rst b/Documentation/admin-guide/device-mapper/dm-dust.rst
index b6e7e7e..e35ec8c 100644
--- a/Documentation/admin-guide/device-mapper/dm-dust.rst
+++ b/Documentation/admin-guide/device-mapper/dm-dust.rst
@@ -69,10 +69,11 @@
         $ sudo dmsetup create dust1 --table '0 33552384 dust /dev/vdb1 0 4096'
 
 Check the status of the read behavior ("bypass" indicates that all I/O
-will be passed through to the underlying device)::
+will be passed through to the underlying device; "verbose" indicates that
+bad block additions, removals, and remaps will be verbosely logged)::
 
         $ sudo dmsetup status dust1
-        0 33552384 dust 252:17 bypass
+        0 33552384 dust 252:17 bypass verbose
 
         $ sudo dd if=/dev/mapper/dust1 of=/dev/null bs=512 count=128 iflag=direct
         128+0 records in
@@ -164,7 +165,7 @@
 A message will print with the number of bad blocks currently
 configured on the device::
 
-        kernel: device-mapper: dust: countbadblocks: 895 badblock(s) found
+        countbadblocks: 895 badblock(s) found
 
 Querying for specific bad blocks
 --------------------------------
@@ -176,11 +177,11 @@
 
 The following message will print if the block is in the list::
 
-        device-mapper: dust: queryblock: block 72 found in badblocklist
+        dust_query_block: block 72 found in badblocklist
 
 The following message will print if the block is not in the list::
 
-        device-mapper: dust: queryblock: block 72 not found in badblocklist
+        dust_query_block: block 72 not found in badblocklist
 
 The "queryblock" message command will work in both the "enabled"
 and "disabled" modes, allowing the verification of whether a block
@@ -198,12 +199,28 @@
 
 After clearing the bad block list, the following message will appear::
 
-        kernel: device-mapper: dust: clearbadblocks: badblocks cleared
+        dust_clear_badblocks: badblocks cleared
 
 If there were no bad blocks to clear, the following message will
 appear::
 
-        kernel: device-mapper: dust: clearbadblocks: no badblocks found
+        dust_clear_badblocks: no badblocks found
+
+Listing the bad block list
+--------------------------
+
+To list all bad blocks in the bad block list (using an example device
+with blocks 1 and 2 in the bad block list), run the following message
+command::
+
+        $ sudo dmsetup message dust1 0 listbadblocks
+        1
+        2
+
+If there are no bad blocks in the bad block list, the command will
+execute with no output::
+
+        $ sudo dmsetup message dust1 0 listbadblocks
 
 Message commands list
 ---------------------
@@ -223,6 +240,7 @@
 
         countbadblocks
         clearbadblocks
+        listbadblocks
         disable
         enable
         quiet
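
Taken together, these message commands support a simple triage loop; a sketch
(the device name "dust1" and the block number are taken from the examples
above and are otherwise assumptions)::

        # Count, list, and then clear the configured bad blocks.
        sudo dmsetup message dust1 0 countbadblocks
        sudo dmsetup message dust1 0 listbadblocks
        sudo dmsetup message dust1 0 clearbadblocks

        # Check whether a specific block is still in the list.
        sudo dmsetup message dust1 0 queryblock 72
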
diff --git a/Documentation/admin-guide/device-mapper/dm-integrity.rst b/Documentation/admin-guide/device-mapper/dm-integrity.rst
index 9edd455..3ab4f77 100644
--- a/Documentation/admin-guide/device-mapper/dm-integrity.rst
+++ b/Documentation/admin-guide/device-mapper/dm-integrity.rst
@@ -45,7 +45,7 @@
    will format the device
 3. unload the dm-integrity target
 4. read the "provided_data_sectors" value from the superblock
-5. load the dm-integrity target with the the target size
+5. load the dm-integrity target with the target size
    "provided_data_sectors"
 6. if you want to use dm-integrity with dm-crypt, load the dm-crypt target
    with the size "provided_data_sectors"
@@ -99,7 +99,7 @@
 	the superblock is used.
 
 meta_device:device
-	Don't interleave the data and metadata on on device. Use a
+	Don't interleave the data and metadata on the device. Use a
 	separate device for metadata.
 
 buffer_sectors:number
diff --git a/Documentation/admin-guide/device-mapper/dm-raid.rst b/Documentation/admin-guide/device-mapper/dm-raid.rst
index 695a2ea..7ef9fe6 100644
--- a/Documentation/admin-guide/device-mapper/dm-raid.rst
+++ b/Documentation/admin-guide/device-mapper/dm-raid.rst
@@ -71,7 +71,7 @@
   ============= ===============================================================
 
   Reference: Chapter 4 of
-  http://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
+  https://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf
 
 <#raid_params>: The number of parameters that follow.
 
diff --git a/Documentation/admin-guide/device-mapper/dm-zoned.rst b/Documentation/admin-guide/device-mapper/dm-zoned.rst
index 553752e..e6350413 100644
--- a/Documentation/admin-guide/device-mapper/dm-zoned.rst
+++ b/Documentation/admin-guide/device-mapper/dm-zoned.rst
@@ -14,7 +14,7 @@
 For a more detailed description of the zoned block device models and
 their constraints see (for SCSI devices):
 
-http://www.t10.org/drafts.htm#ZBC_Family
+https://www.t10.org/drafts.htm#ZBC_Family
 
 and (for ATA devices):
 
diff --git a/Documentation/admin-guide/device-mapper/verity.rst b/Documentation/admin-guide/device-mapper/verity.rst
index bb02caa..66f71f0 100644
--- a/Documentation/admin-guide/device-mapper/verity.rst
+++ b/Documentation/admin-guide/device-mapper/verity.rst
@@ -83,6 +83,10 @@
     not compatible with ignore_corruption and requires user space support to
     avoid restart loops.
 
+panic_on_corruption
+    Panic the device when a corrupted block is discovered. This option is
+    not compatible with ignore_corruption and restart_on_corruption.
+
 ignore_zero_blocks
     Do not verify blocks that are expected to contain zeroes and always return
     zeroes instead. This may be useful if the partition contains unused blocks
diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt
index 2a97aae..63fd4e6 100644
--- a/Documentation/admin-guide/devices.txt
+++ b/Documentation/admin-guide/devices.txt
@@ -375,8 +375,9 @@
 		239 = /dev/uhid		User-space I/O driver support for HID subsystem
 		240 = /dev/userio	Serio driver testing device
 		241 = /dev/vhost-vsock	Host kernel driver for virtio vsock
+		242 = /dev/rfkill	Turning off radio transmissions (rfkill)
 
-		242-254			Reserved for local use
+		243-254			Reserved for local use
 		255			Reserved for MISC_DYNAMIC_MINOR
 
   11 char	Raw keyboard device	(Linux/SPARC only)
@@ -1442,7 +1443,7 @@
 		    ...
 
 		The driver and documentation may be obtained from
-		http://www.winradio.com/
+		https://www.winradio.com/
 
   82 block	I2O hard disk
 		  0 = /dev/i2o/hdag	33rd I2O hard disk, whole disk
@@ -1656,12 +1657,12 @@
 		dynamically, so there is no fixed mapping from subdevice
 		pathnames to minor numbers.
 
-		See http://www.comedi.org/ for information about the Comedi
+		See https://www.comedi.org/ for information about the Comedi
 		project.
 
   98 block	User-mode virtual block device
 		  0 = /dev/ubda		First user-mode block device
-		 16 = /dev/udbb		Second user-mode block device
+		 16 = /dev/ubdb		Second user-mode block device
 		    ...
 
 		Partitions are handled in the same way as for IDE
@@ -1723,7 +1724,7 @@
 		implementations a kernel presence for caching and easy
 		mounting.  For more information about the project,
 		write to <arla-drinkers@stacken.kth.se> or see
-		http://www.stacken.kth.se/project/arla/
+		https://www.stacken.kth.se/project/arla/
 
  103 block	Audit device
 		  0 = /dev/audit	Audit device
diff --git a/Documentation/admin-guide/dynamic-debug-howto.rst b/Documentation/admin-guide/dynamic-debug-howto.rst
index 1012bd9..e5a8def 100644
--- a/Documentation/admin-guide/dynamic-debug-howto.rst
+++ b/Documentation/admin-guide/dynamic-debug-howto.rst
@@ -70,10 +70,10 @@
 
   nullarbor:~ # cat <debugfs>/dynamic_debug/control
   # filename:lineno [module]function flags format
-  /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:323 [svcxprt_rdma]svc_rdma_cleanup =_ "SVCRDMA Module Removed, deregister RPC RDMA transport\012"
-  /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:341 [svcxprt_rdma]svc_rdma_init =_ "\011max_inline       : %d\012"
-  /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:340 [svcxprt_rdma]svc_rdma_init =_ "\011sq_depth         : %d\012"
-  /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svc_rdma.c:338 [svcxprt_rdma]svc_rdma_init =_ "\011max_requests     : %d\012"
+  net/sunrpc/svc_rdma.c:323 [svcxprt_rdma]svc_rdma_cleanup =_ "SVCRDMA Module Removed, deregister RPC RDMA transport\012"
+  net/sunrpc/svc_rdma.c:341 [svcxprt_rdma]svc_rdma_init =_ "\011max_inline       : %d\012"
+  net/sunrpc/svc_rdma.c:340 [svcxprt_rdma]svc_rdma_init =_ "\011sq_depth         : %d\012"
+  net/sunrpc/svc_rdma.c:338 [svcxprt_rdma]svc_rdma_init =_ "\011max_requests     : %d\012"
   ...
 
 
@@ -93,7 +93,7 @@
 
   nullarbor:~ # awk '$3 != "=_"' <debugfs>/dynamic_debug/control
   # filename:lineno [module]function flags format
-  /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c:1603 [sunrpc]svc_send p "svc_process: st_sendto returned %d\012"
+  net/sunrpc/svcsock.c:1603 [sunrpc]svc_send p "svc_process: st_sendto returned %d\012"
 
 Command Language Reference
 ==========================
@@ -156,6 +156,7 @@
   ``line-range`` cannot contain space, e.g.
   "1-30" is valid range but "1 - 30" is not.
 
+  ``module=foo`` combined keyword=value form is interchangeably accepted
 
 The meanings of each keyword are:
 
@@ -164,15 +165,18 @@
     of each callsite.  Example::
 
 	func svc_tcp_accept
+	func *recv*		# in rfcomm, bluetooth, ping, tcp
 
 file
-    The given string is compared against either the full pathname, the
-    src-root relative pathname, or the basename of the source file of
-    each callsite.  Examples::
+    The given string is compared against either the src-root relative
+    pathname, or the basename of the source file of each callsite.
+    Examples::
 
 	file svcsock.c
-	file kernel/freezer.c
-	file /usr/src/packages/BUILD/sgi-enhancednfs-1.4/default/net/sunrpc/svcsock.c
+	file kernel/freezer.c	# i.e. column 1 of control file
+	file drivers/usb/*	# all callsites under it
+	file inode.c:start_*	# parse :tail as a func (above)
+	file inode.c:1-100	# parse :tail as a line-range (above)
 
 module
     The given string is compared against the module name
@@ -182,6 +186,7 @@
 
 	module sunrpc
 	module nfsd
+	module drm*	# both drm, drm_kms_helper
 
 format
     The given string is searched for in the dynamic debug format
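
For instance (a sketch; the debugfs mount point is an assumption), the
wildcard forms above are written to the control file like any other query::

	# Enable pr_debug() callsites in both drm and drm_kms_helper.
	echo 'module drm* +p' > /sys/kernel/debug/dynamic_debug/control

	# Enable callsites in one source file, restricted to a line range.
	echo 'file inode.c:1-100 +p' > /sys/kernel/debug/dynamic_debug/control
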
@@ -251,8 +256,8 @@
 bootloader may impose lower limits.
 
 These ``dyndbg`` params are processed just after the ddebug tables are
-processed, as part of the arch_initcall.  Thus you can enable debug
-messages in all code run after this arch_initcall via this boot
+processed, as part of the early_initcall.  Thus you can enable debug
+messages in all code run after this early_initcall via this boot
 parameter.
 
 On an x86 system for example ACPI enablement is a subsys_initcall and::
diff --git a/Documentation/admin-guide/ext4.rst b/Documentation/admin-guide/ext4.rst
index 9443fce..d2795ca 100644
--- a/Documentation/admin-guide/ext4.rst
+++ b/Documentation/admin-guide/ext4.rst
@@ -395,6 +395,13 @@
         Documentation/filesystems/dax.txt.  Note that this option is
         incompatible with data=journal.
 
+  inlinecrypt
+        When possible, encrypt/decrypt the contents of encrypted files using the
+        blk-crypto framework rather than filesystem-layer encryption. This
+        allows the use of inline encryption hardware. The on-disk format is
+        unaffected. For more details, see
+        Documentation/block/inline-encryption.rst.
+
 Data Mode
 =========
 There are 3 different data modes:
@@ -482,6 +489,9 @@
         multiple of this tuning parameter if the stripe size is not set in the
         ext4 superblock
 
+  mb_max_inode_prealloc
+        The maximum length of per-inode ext4_prealloc_space list.
+
   mb_max_to_scan
         The maximum number of extents the multiblock allocator will search to
         find the best extent.
@@ -522,21 +532,21 @@
 Ioctls
 ======
 
-There is some Ext4 specific functionality which can be accessed by applications
-through the system call interfaces. The list of all Ext4 specific ioctls are
-shown in the table below.
+Ext4 implements various ioctls which can be used by applications to access
+ext4-specific functionality. An incomplete list of these ioctls is shown in the
+table below. This list includes truly ext4-specific ioctls (``EXT4_IOC_*``) as
+well as ioctls that may have been ext4-specific originally but are now supported
+by some other filesystem(s) too (``FS_IOC_*``).
 
-Table of Ext4 specific ioctls
+Table of Ext4 ioctls
 
-  EXT4_IOC_GETFLAGS
+  FS_IOC_GETFLAGS
         Get additional attributes associated with inode.  The ioctl argument is
-        an integer bitfield, with bit values described in ext4.h. This ioctl is
-        an alias for FS_IOC_GETFLAGS.
+        an integer bitfield, with bit values described in ext4.h.
 
-  EXT4_IOC_SETFLAGS
+  FS_IOC_SETFLAGS
         Set additional attributes associated with inode.  The ioctl argument is
-        an integer bitfield, with bit values described in ext4.h. This ioctl is
-        an alias for FS_IOC_SETFLAGS.
+        an integer bitfield, with bit values described in ext4.h.
 
   EXT4_IOC_GETVERSION, EXT4_IOC_GETVERSION_OLD
         Get the inode i_generation number stored for each inode. The
@@ -611,7 +621,7 @@
 
 programs:	http://e2fsprogs.sourceforge.net/
 
-useful links:	http://fedoraproject.org/wiki/ext3-devel
+useful links:	https://fedoraproject.org/wiki/ext3-devel
 		http://www.bullopensource.org/ext4/
 		http://ext4.wiki.kernel.org/index.php/Main_Page
-		http://fedoraproject.org/wiki/Features/Ext4
+		https://fedoraproject.org/wiki/Features/Ext4
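
As an illustration of the items above (a sketch; the device, mount point, and
file name are assumptions), the inlinecrypt mount option and the flag ioctls
are normally exercised from userspace like this::

	# Mount with inline encryption for fscrypt-encrypted files,
	# when supported by the block device and kernel configuration.
	mount -o inlinecrypt /dev/vdb1 /mnt

	# lsattr and chattr are thin wrappers around FS_IOC_GETFLAGS
	# and FS_IOC_SETFLAGS respectively.
	lsattr /mnt/somefile
	chattr +a /mnt/somefile
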
diff --git a/Documentation/admin-guide/hw-vuln/multihit.rst b/Documentation/admin-guide/hw-vuln/multihit.rst
index ba9988d..140e4ce 100644
--- a/Documentation/admin-guide/hw-vuln/multihit.rst
+++ b/Documentation/admin-guide/hw-vuln/multihit.rst
@@ -80,6 +80,10 @@
        - The processor is not vulnerable.
      * - KVM: Mitigation: Split huge pages
        - Software changes mitigate this issue.
+     * - KVM: Mitigation: VMX unsupported
+       - KVM is not vulnerable because Virtual Machine Extensions (VMX) is not supported.
+     * - KVM: Mitigation: VMX disabled
+       - KVM is not vulnerable because Virtual Machine Extensions (VMX) is disabled.
      * - KVM: Vulnerable
        - The processor is vulnerable, but no mitigation enabled
 
diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
index 47b1b3a..3b1ce68 100644
--- a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
@@ -14,7 +14,7 @@
 to MDS attacks.
 
 Affected processors
---------------------
+-------------------
 Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
 be affected.
 
@@ -59,7 +59,7 @@
 
 
 Mitigation mechanism
--------------------
+--------------------
 Intel will release microcode updates that modify the RDRAND, RDSEED, and
 EGETKEY instructions to overwrite secret special register data in the shared
 staging buffer before the secret data can be accessed by another logical
@@ -118,7 +118,7 @@
   ============= =============================================================
 
 SRBDS System Information
------------------------
+------------------------
 The Linux kernel provides vulnerability status information through sysfs.  For
 SRBDS this can be accessed by the following sysfs file:
 /sys/devices/system/cpu/vulnerabilities/srbds
diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
index 58c7f9f..ed1cf94 100644
--- a/Documentation/admin-guide/index.rst
+++ b/Documentation/admin-guide/index.rst
@@ -41,6 +41,7 @@
    init
    kdump/index
    perf/index
+   pstore-blk
 
 This is the beginning of a section with information of interest to
 application developers.  Documents covering various aspects of the kernel
diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index e4ee8b2..2baad0b 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -93,6 +93,11 @@
 similar to the mem_map variable, both of them are used to translate an
 address.
 
+MAX_PHYSMEM_BITS
+----------------
+
+Defines the maximum supported size of the physical memory address space.
+
 page
 ----
 
@@ -399,6 +404,17 @@
 The mask to extract the Pointer Authentication Code from a kernel virtual
 address.
 
+TCR_EL1.T1SZ
+------------
+
+Indicates the size offset of the memory region addressed by TTBR1_EL1.
+The region size is 2^(64-T1SZ) bytes.
+
+TTBR1_EL1 is the table base address register, specified by the ARMv8-A
+architecture, that is used to look up the page tables for virtual
+addresses in the higher VA range (refer to the ARMv8 ARM document for
+more details).
+
 arm
 ===
 
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index fb95fad..a106874 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -703,6 +703,11 @@
 	cpufreq.off=1	[CPU_FREQ]
 			disable the cpufreq sub-system
 
+	cpufreq.default_governor=
+			[CPU_FREQ] Name of the default cpufreq governor or
+			policy to use. This governor must be registered in the
+			kernel before the cpufreq driver probes.
+
 	cpu_init_udelay=N
 			[X86] Delay for N microsec between assert and de-assert
 			of APIC INIT to start processors.  This delay occurs
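
	For example (an illustrative sketch; the "schedutil" governor is
	assumed to be built into the kernel, and the policy number is an
	assumption), boot with::

		cpufreq.default_governor=schedutil

	and verify after boot with::

		cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
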
@@ -719,7 +724,7 @@
 			memory region [offset, offset + size] for that kernel
 			image. If '@offset' is omitted, then a suitable offset
 			is selected automatically.
-			[KNL, x86_64] select a region under 4G first, and
+			[KNL, X86-64] Select a region under 4G first, and
 			fall back to reserve region above 4G when '@offset'
 			hasn't been specified.
 			See Documentation/admin-guide/kdump/kdump.rst for further details.
@@ -732,14 +737,14 @@
 			Documentation/admin-guide/kdump/kdump.rst for an example.
 
 	crashkernel=size[KMG],high
-			[KNL, x86_64] range could be above 4G. Allow kernel
+			[KNL, X86-64] range could be above 4G. Allow kernel
 			to allocate physical memory region from top, so could
 			be above 4G if system have more than 4G ram installed.
 			Otherwise memory region will be allocated below 4G, if
 			available.
 			It will be ignored if crashkernel=X is specified.
 	crashkernel=size[KMG],low
-			[KNL, x86_64] range under 4G. When crashkernel=X,high
+			[KNL, X86-64] range under 4G. When crashkernel=X,high
 			is passed, kernel could allocate physical memory region
 			above 4G, that cause second kernel crash on system
 			that require some amount of low memory, e.g. swiotlb
@@ -827,6 +832,21 @@
 			useful to also enable the page_owner functionality.
 			on: enable the feature
 
+	debugfs=    	[KNL] This parameter controls what is exposed to
+			userspace and to debugfs-internal kernel clients.
+			Format: { on, no-mount, off }
+			on:	All functions are enabled.
+			no-mount:
+				Filesystem is not registered, but kernel clients
+				can access APIs and a crash kernel can be used
+				to read its content. There is nothing to mount.
+			off:	Filesystem is not registered and clients get
+				-EPERM as a result when trying to register files
+				or directories within debugfs.
+				This is equivalent to the runtime behavior when
+				debugfs is not enabled in the kernel at all.
+			Default value is set at build time by the kernel configuration.
+
 	debugpat	[X86] Enable PAT debugging
 
 	decnet.addr=	[HW,NET]
@@ -896,6 +916,10 @@
 	disable_radix	[PPC]
 			Disable RADIX MMU mode on POWER9
 
+	radix_hcall_invalidate=on  [PPC/PSERIES]
+			Disable RADIX GTSE feature and use hcall for TLB
+			invalidate.
+
 	disable_tlbie	[PPC]
 			Disable TLBIE instruction. Currently does not work
 			with KVM, with HASH MMU, or with coherent accelerators.
@@ -1207,24 +1231,23 @@
 			Format: {"off" | "on" | "skip[mbr]"}
 
 	efi=		[EFI]
-			Format: { "old_map", "nochunk", "noruntime", "debug",
-				  "nosoftreserve", "disable_early_pci_dma",
-				  "no_disable_early_pci_dma" }
-			old_map [X86-64]: switch to the old ioremap-based EFI
-			runtime services mapping. [Needs CONFIG_X86_UV=y]
+			Format: { "debug", "disable_early_pci_dma",
+				  "nochunk", "noruntime", "nosoftreserve",
+				  "novamap", "no_disable_early_pci_dma" }
+			debug: enable misc debug output.
+			disable_early_pci_dma: disable the busmaster bit on all
+			PCI bridges while in the EFI boot stub.
 			nochunk: disable reading files in "chunks" in the EFI
 			boot stub, as chunking can cause problems with some
 			firmware implementations.
 			noruntime : disable EFI runtime services support
-			debug: enable misc debug output
 			nosoftreserve: The EFI_MEMORY_SP (Specific Purpose)
 			attribute may cause the kernel to reserve the
 			memory range for a memory mapping driver to
 			claim. Specify efi=nosoftreserve to disable this
 			reservation and treat the memory by its base type
 			(i.e. EFI_CONVENTIONAL_MEMORY / "System RAM").
-			disable_early_pci_dma: Disable the busmaster bit on all
-			PCI bridges while in the EFI boot stub
+			novamap: do not call SetVirtualAddressMap().
 			no_disable_early_pci_dma: Leave the busmaster bit set
 			on all PCI bridges while in the EFI boot stub
 
@@ -1401,7 +1424,7 @@
 
 	gamma=		[HW,DRM]
 
-	gart_fix_e820=	[X86_64] disable the fix e820 for K8 GART
+	gart_fix_e820=	[X86-64] disable the fix e820 for K8 GART
 			Format: off | on
 			default: on
 
@@ -1788,7 +1811,7 @@
 			Format: 0 | 1
 			Default set by CONFIG_INIT_ON_FREE_DEFAULT_ON.
 
-	init_pkru=	[x86] Specify the default memory protection keys rights
+	init_pkru=	[X86] Specify the default memory protection keys rights
 			register contents for all processes.  0x55555554 by
 			default (disallow access to all but pkey 0).  Can
 			override in debugfs after boot.
@@ -1796,7 +1819,7 @@
 	inport.irq=	[HW] Inport (ATI XL and Microsoft) busmouse driver
 			Format: <irq>
 
-	int_pln_enable	[x86] Enable power limit notification interrupt
+	int_pln_enable	[X86] Enable power limit notification interrupt
 
 	integrity_audit=[IMA]
 			Format: { "0" | "1" }
@@ -1814,7 +1837,7 @@
 			bypassed by not enabling DMAR with this option. In
 			this case, gfx device will use physical address for
 			DMA.
-		forcedac [x86_64]
+		forcedac [X86-64]
 			With this option iommu will not optimize to look
 			for io virtual address below 32-bit forcing dual
 			address cycle on pci bus for cards supporting greater
@@ -1899,7 +1922,7 @@
 		strict	regions from userspace.
 		relaxed
 
-	iommu=		[x86]
+	iommu=		[X86]
 		off
 		force
 		noforce
@@ -1909,8 +1932,8 @@
 		merge
 		nomerge
 		soft
-		pt		[x86]
-		nopt		[x86]
+		pt		[X86]
+		nopt		[X86]
 		nobypass	[PPC/POWERNV]
 			Disable IOMMU bypass, using IOMMU for PCI devices.
 
@@ -2053,21 +2076,21 @@
 
 	iucv=		[HW,NET]
 
-	ivrs_ioapic	[HW,X86_64]
+	ivrs_ioapic	[HW,X86-64]
 			Provide an override to the IOAPIC-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table. For
 			example, to map IOAPIC-ID decimal 10 to
 			PCI device 00:14.0 write the parameter as:
 				ivrs_ioapic[10]=00:14.0
 
-	ivrs_hpet	[HW,X86_64]
+	ivrs_hpet	[HW,X86-64]
 			Provide an override to the HPET-ID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table. For
 			example, to map HPET-ID decimal 0 to
 			PCI device 00:14.0 write the parameter as:
 				ivrs_hpet[0]=00:14.0
 
-	ivrs_acpihid	[HW,X86_64]
+	ivrs_acpihid	[HW,X86-64]
 			Provide an override to the ACPI-HID:UID<->DEVICE-ID
 			mapping provided in the IVRS ACPI table. For
 			example, to map UART-HID:UID AMD0020:0 to
@@ -2344,7 +2367,7 @@
 	lapic		[X86-32,APIC] Enable the local APIC even if BIOS
 			disabled it.
 
-	lapic=		[x86,APIC] "notscdeadline" Do not use TSC deadline
+	lapic=		[X86,APIC] "notscdeadline" Do not use TSC deadline
 			value for LAPIC timer one-shot implementation. Default
 			back to the programmable timer unit in the LAPIC.
 
@@ -2786,7 +2809,7 @@
 			touchscreen support is not enabled in the mainstream
 			kernel as of 2.6.30, a preliminary port can be found
 			in the "bleeding edge" mini2440 support kernel at
-			http://repo.or.cz/w/linux-2.6/mini2440.git
+			https://repo.or.cz/w/linux-2.6/mini2440.git
 
 	mitigations=
 			[X86,PPC,S390,ARM64] Control optional mitigations for
@@ -3079,6 +3102,8 @@
 	no5lvl		[X86-64] Disable 5-level paging mode. Forces
 			kernel to use 4-level paging instead.
 
+	nofsgsbase	[X86] Disables FSGSBASE instructions.
+
 	no_console_suspend
 			[HW] Never suspend the console
 			Disable suspending of consoles during suspend and
@@ -3160,12 +3185,12 @@
 			register save and restore. The kernel will only save
 			legacy floating-point registers on task switch.
 
-	nohugeiomap	[KNL,x86,PPC] Disable kernel huge I/O mappings.
+	nohugeiomap	[KNL,X86,PPC] Disable kernel huge I/O mappings.
 
 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.
 
-			[KNL,x86] Disable symmetric multithreading (SMT).
+			[KNL,X86] Disable symmetric multithreading (SMT).
 			nosmt=force: Force disable SMT, cannot be undone
 				     via the sysfs control file.
 
@@ -3927,7 +3952,7 @@
 	pt.		[PARIDE]
 			See Documentation/admin-guide/blockdev/paride.rst.
 
-	pti=		[X86_64] Control Page Table Isolation of user and
+	pti=		[X86-64] Control Page Table Isolation of user and
 			kernel address spaces.  Disabling this feature
 			removes hardening, but improves performance of
 			system calls and interrupts.
@@ -3939,7 +3964,7 @@
 
 			Not specifying this option is equivalent to pti=auto.
 
-	nopti		[X86_64]
+	nopti		[X86-64]
 			Equivalent to pti=off
 
 	pty.legacy_count=
@@ -4038,6 +4063,14 @@
 			latencies, which will choose a value aligned
 			with the appropriate hardware boundaries.
 
+	rcutree.rcu_min_cached_objs= [KNL]
+			Minimum number of objects which are cached and
+			maintained per CPU. Object size is equal to
+			PAGE_SIZE. The cache reduces the pressure on
+			the page allocator and also makes the whole
+			algorithm behave better under low-memory
+			conditions.
+
 	rcutree.jiffies_till_first_fqs= [KNL]
 			Set delay from grace-period initialization to
 			first attempt to force quiescent states.
@@ -4258,6 +4291,20 @@
 			Set time (jiffies) between CPU-hotplug operations,
 			or zero to disable CPU-hotplug testing.
 
+	rcutorture.read_exit= [KNL]
+			Set the number of read-then-exit kthreads used
+			to test the interaction of RCU updaters and
+			task-exit processing.
+
+	rcutorture.read_exit_burst= [KNL]
+			The number of times in a given read-then-exit
+			episode that a set of read-then-exit kthreads
+			is spawned.
+
+	rcutorture.read_exit_delay= [KNL]
+			The delay, in seconds, between successive
+			read-then-exit testing episodes.
+
 	rcutorture.shuffle_interval= [KNL]
 			Set task-shuffle interval (s).  Shuffling tasks
 			allows some CPUs to go into dyntick-idle mode
@@ -4407,6 +4454,45 @@
 			      reboot_cpu is s[mp]#### with #### being the processor
 					to be used for rebooting.
 
+	refscale.holdoff= [KNL]
+			Set test-start holdoff period.  The purpose of
+			this parameter is to delay the start of the
+			test until boot completes in order to avoid
+			interference.
+
+	refscale.loops= [KNL]
+			Set the number of loops over the synchronization
+			primitive under test.  Increasing this number
+			reduces noise due to loop start/end overhead,
+			but the default has already reduced the per-pass
+			noise to a handful of picoseconds on ca. 2020
+			x86 laptops.
+
+	refscale.nreaders= [KNL]
+			Set number of readers.  The default value of -1
+			selects N, where N is roughly 75% of the number
+			of CPUs.  A value of zero is an interesting choice.
+
+	refscale.nruns= [KNL]
+			Set number of runs, each of which is dumped onto
+			the console log.
+
+	refscale.readdelay= [KNL]
+			Set the read-side critical-section duration,
+			measured in microseconds.
+
+	refscale.scale_type= [KNL]
+			Specify the read-protection implementation to test.
+
+	refscale.shutdown= [KNL]
+			Shut down the system at the end of the performance
+			test.  This defaults to 1 (shut it down) when
+			refscale is built into the kernel and to 0 (leave
+			it running) when refscale is built as a module.
+
+	refscale.verbose= [KNL]
+			Enable additional printk() statements.
+
 	relax_domain_level=
 			[KNL, SMP] Set scheduler's default relax_domain_level.
 			See Documentation/admin-guide/cgroup-v1/cpusets.rst.
@@ -4604,7 +4690,7 @@
 			fragmentation.  Defaults to 1 for systems with
 			more than 32MB of RAM, 0 otherwise.
 
-	slub_debug[=options[,slabs]]	[MM, SLUB]
+	slub_debug[=options[,slabs][;[options[,slabs]]...]]	[MM, SLUB]
 			Enabling slub_debug allows one to determine the
 			culprit if slab objects become corrupted. Enabling
 			slub_debug can create guard zones around objects and
@@ -5082,6 +5168,13 @@
 			Prevent the CPU-hotplug component of torturing
 			until after init has spawned.
 
+	torture.ftrace_dump_at_shutdown= [KNL]
+			Dump the ftrace buffer at torture-test shutdown,
+			even if there were no errors.  This can be a
+			very costly operation when many torture tests
+			are running concurrently, especially on systems
+			with rotating-rust storage.
+
 	tp720=		[HW,PS2]
 
 	tpm_suspend_pcr=[HW,TPM]
@@ -5712,8 +5805,9 @@
 			panic() code such as dumping handler.
 
 	xen_nopvspin	[X86,XEN]
-			Disables the ticketlock slowpath using Xen PV
-			optimizations.
+			Disables the qspinlock slowpath using Xen PV optimizations.
+			This parameter is obsoleted by the "nopvspin" parameter,
+			which has an equivalent effect on the Xen platform.
 
 	xen_nopv	[X86]
 			Disables the PV optimizations forcing the HVM guest to
@@ -5739,6 +5833,11 @@
 			as generic guest with no PV drivers. Currently support
 			XEN HVM, KVM, HYPER_V and VMWARE guest.
 
+	nopvspin	[X86,XEN,KVM]
+			Disables the qspinlock slow path using PV optimizations
+			which allow the hypervisor to 'idle' the guest on lock
+			contention.
+
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
diff --git a/Documentation/admin-guide/laptops/disk-shock-protection.rst b/Documentation/admin-guide/laptops/disk-shock-protection.rst
index e97c5f7..22c7ec3 100644
--- a/Documentation/admin-guide/laptops/disk-shock-protection.rst
+++ b/Documentation/admin-guide/laptops/disk-shock-protection.rst
@@ -135,7 +135,7 @@
 for use. Please feel free to add projects that have been the victims
 of my ignorance.
 
-- http://www.thinkwiki.org/wiki/HDAPS
+- https://www.thinkwiki.org/wiki/HDAPS
 
   See this page for information about Linux support of the hard disk
   active protection system as implemented in IBM/Lenovo Thinkpads.
diff --git a/Documentation/admin-guide/laptops/sonypi.rst b/Documentation/admin-guide/laptops/sonypi.rst
index c6eaaf4..190da12 100644
--- a/Documentation/admin-guide/laptops/sonypi.rst
+++ b/Documentation/admin-guide/laptops/sonypi.rst
@@ -151,7 +151,7 @@
 	  different way to adjust the backlighting of the screen. There
 	  is a userspace utility to adjust the brightness on those models,
 	  which can be downloaded from
-	  http://www.acc.umu.se/~erikw/program/smartdimmer-0.1.tar.bz2
+	  https://www.acc.umu.se/~erikw/program/smartdimmer-0.1.tar.bz2
 
 	- since all development was done by reverse engineering, there is
 	  *absolutely no guarantee* that this driver will not crash your
diff --git a/Documentation/admin-guide/laptops/thinkpad-acpi.rst b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
index 822907d..5fe1ade 100644
--- a/Documentation/admin-guide/laptops/thinkpad-acpi.rst
+++ b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
@@ -50,6 +50,7 @@
 	- WAN enable and disable
 	- UWB enable and disable
 	- LCD Shadow (PrivacyGuard) enable and disable
+	- Lap mode sensor
 
 A compatibility table by model and feature is maintained on the web
 site, http://ibm-acpi.sf.net/. I appreciate any success or failure
@@ -904,7 +905,7 @@
 The mapping of thermal sensors to physical locations varies depending on
 system-board model (and thus, on ThinkPad model).
 
-http://thinkwiki.org/wiki/Thermal_Sensors is a public wiki page that
+https://thinkwiki.org/wiki/Thermal_Sensors is a public wiki page that
 tries to track down these locations for various models.
 
 Most (newer?) models seem to follow this pattern:
@@ -925,7 +926,7 @@
 - 3:  Internal HDD
 
 For the T43, T43/p (source: Shmidoax/Thinkwiki.org)
-http://thinkwiki.org/wiki/Thermal_Sensors#ThinkPad_T43.2C_T43p
+https://thinkwiki.org/wiki/Thermal_Sensors#ThinkPad_T43.2C_T43p
 
 - 2:  System board, left side (near PCMCIA slot), reported as HDAPS temp
 - 3:  PCMCIA slot
@@ -935,7 +936,7 @@
 - 11: Power regulator, underside of system board, below F2 key
 
 The A31 has a very atypical layout for the thermal sensors
-(source: Milos Popovic, http://thinkwiki.org/wiki/Thermal_Sensors#ThinkPad_A31)
+(source: Milos Popovic, https://thinkwiki.org/wiki/Thermal_Sensors#ThinkPad_A31)
 
 - 1:  CPU
 - 2:  Main Battery: main sensor
@@ -1432,6 +1433,20 @@
 on the feature, restricting the viewing angles.
 
 
+DYTC Lapmode sensor
+-------------------
+
+sysfs: dytc_lapmode
+
+Newer thinkpads and mobile workstations have the ability to determine if
+the device is in deskmode or lapmode. This feature is used by user space
+to decide if WWAN transmission can be increased to maximum power and is
+also useful for understanding the different thermal modes available as
+they differ between desk and lap mode.
+
+The property is read-only. If the platform doesn't have support the sysfs
+class is not created.
+
 EXPERIMENTAL: UWB
 -----------------
 
@@ -1470,6 +1485,23 @@
 review the laptop's user guide:
 http://www.lenovo.com/shop/americas/content/user_guides/x1carbon_2_ug_en.pdf
 
+Battery charge control
+----------------------
+
+sysfs attributes:
+/sys/class/power_supply/BAT*/charge_control_{start,end}_threshold
+
+These two attributes are created for those batteries that are supported by the
+driver. They enable the user to control the battery charge thresholds of the
+given battery. Both values may be read and set. `charge_control_start_threshold`
+accepts an integer between 0 and 99 (inclusive); this value represents a battery
+percentage level, below which charging will begin. `charge_control_end_threshold`
+accepts an integer between 1 and 100 (inclusive); this value represents a battery
+percentage level, above which charging will stop.
+
+The exact semantics of the attributes may be found in
+Documentation/ABI/testing/sysfs-class-power.
+
 Multiple Commands, Module Parameters
 ------------------------------------
 
diff --git a/Documentation/admin-guide/md.rst b/Documentation/admin-guide/md.rst
index d973d46..cc8781b 100644
--- a/Documentation/admin-guide/md.rst
+++ b/Documentation/admin-guide/md.rst
@@ -426,6 +426,10 @@
      The accepted values when writing to this file are ``ppl`` and ``resync``,
      used to enable and disable PPL.
 
+  uuid
+     This indicates the UUID of the array in the following format:
+     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+
 
 As component devices are added to an md array, they appear in the ``md``
 directory as new directories named::
diff --git a/Documentation/admin-guide/media/building.rst b/Documentation/admin-guide/media/building.rst
index c898e3a..2d660b7 100644
--- a/Documentation/admin-guide/media/building.rst
+++ b/Documentation/admin-guide/media/building.rst
@@ -90,7 +90,7 @@
        Those GPU-specific drivers are selected via the ``Graphics support``
        menu, under ``Device Drivers``.
 
-       When a GPU driver supports supports HDMI CEC, it will automatically
+       When a GPU driver supports HDMI CEC, it will automatically
        enable the CEC core support at the media subsystem.
 
 Media dependencies
@@ -244,7 +244,7 @@
    If you have an hybrid card, you may need to enable both ``Analog TV``
    and ``Digital TV`` at the menu.
 
-When using this option, the defaults for the the media support core
+When using this option, the defaults for the media support core
 functionality are usually good enough to provide the basic functionality
 for the driver. Yet, you could manually enable some desired extra (optional)
 functionality using the settings under each of the following
diff --git a/Documentation/admin-guide/media/fimc.rst b/Documentation/admin-guide/media/fimc.rst
index 0b8ddc4..56b149d 100644
--- a/Documentation/admin-guide/media/fimc.rst
+++ b/Documentation/admin-guide/media/fimc.rst
@@ -2,7 +2,7 @@
 
 .. include:: <isonum.txt>
 
-The Samsung S5P/EXYNOS4 FIMC driver
+The Samsung S5P/Exynos4 FIMC driver
 ===================================
 
 Copyright |copy| 2012 - 2013 Samsung Electronics Co., Ltd.
@@ -19,7 +19,7 @@
 Supported SoCs
 --------------
 
-S5PC100 (mem-to-mem only), S5PV210, EXYNOS4210
+S5PC100 (mem-to-mem only), S5PV210, Exynos4210
 
 Supported features
 ------------------
@@ -45,7 +45,7 @@
 ~~~~~~~~~~~~~~~~~~~~~~
 
 The driver supports Media Controller API as defined at :ref:`media_controller`.
-The media device driver name is "SAMSUNG S5P FIMC".
+The media device driver name is "Samsung S5P FIMC".
 
 The purpose of this interface is to allow changing assignment of FIMC instances
 to the SoC peripheral camera input at runtime and optionally to control internal
diff --git a/Documentation/admin-guide/media/vivid.rst b/Documentation/admin-guide/media/vivid.rst
index 52e57b7..6d7175f 100644
--- a/Documentation/admin-guide/media/vivid.rst
+++ b/Documentation/admin-guide/media/vivid.rst
@@ -293,6 +293,15 @@
 		- 0: vmalloc
 		- 1: dma-contig
 
+- cache_hints:
+
+	specifies if the device should set queues' user-space cache and memory
+	consistency hint capability (V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS).
+	The hints are valid only when using MMAP streaming I/O. Default is 0.
+
+		- 0: forbid hints
+		- 1: allow hints
+
 Taken together, all these module options allow you to precisely customize
 the driver behavior and test your application with all sorts of permutations.
 It is also very suitable to emulate hardware that is not yet available, e.g.
diff --git a/Documentation/admin-guide/mm/concepts.rst b/Documentation/admin-guide/mm/concepts.rst
index c2531b1..fa0974f 100644
--- a/Documentation/admin-guide/mm/concepts.rst
+++ b/Documentation/admin-guide/mm/concepts.rst
@@ -35,7 +35,7 @@
 protection and controlled sharing of data between processes.
 
 With virtual memory, each and every memory access uses a virtual
-address. When the CPU decodes the an instruction that reads (or
+address. When the CPU decodes an instruction that reads (or
 writes) from (or to) the system memory, it translates the `virtual`
 address encoded in that instruction to a `physical` address that the
 memory controller can understand.
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 5026e58..015a5f7 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -101,37 +101,48 @@
 page size may be selected with the "default_hugepagesz=<size>" boot parameter.
 
 Hugetlb boot command line parameter semantics
-hugepagesz - Specify a huge page size.  Used in conjunction with hugepages
+
+hugepagesz
+	Specify a huge page size.  Used in conjunction with hugepages
 	parameter to preallocate a number of huge pages of the specified
 	size.  Hence, hugepagesz and hugepages are typically specified in
-	pairs such as:
+	pairs such as::
+
 		hugepagesz=2M hugepages=512
+
 	hugepagesz can only be specified once on the command line for a
 	specific huge page size.  Valid huge page sizes are architecture
 	dependent.
-hugepages - Specify the number of huge pages to preallocate.  This typically
+hugepages
+	Specify the number of huge pages to preallocate.  This typically
 	follows a valid hugepagesz or default_hugepagesz parameter.  However,
 	if hugepages is the first or only hugetlb command line parameter it
 	implicitly specifies the number of huge pages of default size to
 	allocate.  If the number of huge pages of default size is implicitly
 	specified, it can not be overwritten by a hugepagesz,hugepages
 	parameter pair for the default size.
-	For example, on an architecture with 2M default huge page size:
+
+	For example, on an architecture with 2M default huge page size::
+
 		hugepages=256 hugepagesz=2M hugepages=512
+
 	will result in 256 2M huge pages being allocated and a warning message
 	indicating that the hugepages=512 parameter is ignored.  If a hugepages
 	parameter is preceded by an invalid hugepagesz parameter, it will
 	be ignored.
-default_hugepagesz - Specify the default huge page size.  This parameter can
+default_hugepagesz
+	Specify the default huge page size.  This parameter can
 	only be specified once on the command line.  default_hugepagesz can
 	optionally be followed by the hugepages parameter to preallocate a
 	specific number of huge pages of default size.  The number of default
 	sized huge pages to preallocate can also be implicitly specified as
 	mentioned in the hugepages section above.  Therefore, on an
-	architecture with 2M default huge page size:
+	architecture with 2M default huge page size::
+
 		hugepages=256
 		default_hugepagesz=2M hugepages=256
 		hugepages=256 default_hugepagesz=2M
+
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
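
A quick way to confirm that the boot-time pools described above were actually
allocated is to inspect them after boot; a minimal sketch (the 2M size is only
an example, and the sysfs path assumes a 2048kB default huge page size):

  grep Huge /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
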
 
diff --git a/Documentation/admin-guide/mm/index.rst b/Documentation/admin-guide/mm/index.rst
index 11db464..cd727cf 100644
--- a/Documentation/admin-guide/mm/index.rst
+++ b/Documentation/admin-guide/mm/index.rst
@@ -31,6 +31,7 @@
    idle_page_tracking
    ksm
    memory-hotplug
+   nommu-mmap
    numa_memory_policy
    numaperf
    pagemap
diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 874eb0c..97d8167 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -9,7 +9,7 @@
 
 KSM is a memory-saving de-duplication feature, enabled by CONFIG_KSM=y,
 added to the Linux kernel in 2.6.32.  See ``mm/ksm.c`` for its implementation,
-and http://lwn.net/Articles/306704/ and http://lwn.net/Articles/330589/
+and http://lwn.net/Articles/306704/ and https://lwn.net/Articles/330589/
 
 KSM was originally developed for use with KVM (where it was known as
 Kernel Shared Memory), to fit more virtual machines into physical memory,
@@ -52,7 +52,7 @@
 If KSM is not configured into the running kernel, madvise MADV_MERGEABLE
 and MADV_UNMERGEABLE simply fail with EINVAL.  If the running kernel was
 built with CONFIG_KSM=y, those calls will normally succeed: even if the
-the KSM daemon is not currently running, MADV_MERGEABLE still registers
+KSM daemon is not currently running, MADV_MERGEABLE still registers
 the range for whenever the KSM daemon is started; even if the range
 cannot contain any pages which KSM could actually merge; even if
 MADV_UNMERGEABLE is applied to a range which was never MADV_MERGEABLE.
diff --git a/Documentation/nommu-mmap.txt b/Documentation/admin-guide/mm/nommu-mmap.rst
similarity index 100%
rename from Documentation/nommu-mmap.txt
rename to Documentation/admin-guide/mm/nommu-mmap.rst
diff --git a/Documentation/admin-guide/mm/numaperf.rst b/Documentation/admin-guide/mm/numaperf.rst
index a80c3c3..4d69ef1 100644
--- a/Documentation/admin-guide/mm/numaperf.rst
+++ b/Documentation/admin-guide/mm/numaperf.rst
@@ -129,7 +129,7 @@
 
 	/sys/devices/system/node/nodeX/memory_side_cache/
 
-If that directory is not present, the system either does not not provide
+If that directory is not present, the system either does not provide
 a memory-side cache, or that information is not accessible to the kernel.
 
 The attributes for each level of cache is provided under its cache
diff --git a/Documentation/admin-guide/nfs/nfs-client.rst b/Documentation/admin-guide/nfs/nfs-client.rst
index c4b777c..6adb645 100644
--- a/Documentation/admin-guide/nfs/nfs-client.rst
+++ b/Documentation/admin-guide/nfs/nfs-client.rst
@@ -65,8 +65,8 @@
 attribute. See `RFC3530 Section 6: Filesystem Migration and Replication`_ and
 `Implementation Guide for Referrals in NFSv4`_.
 
-.. _RFC3530 Section 6\: Filesystem Migration and Replication: http://tools.ietf.org/html/rfc3530#section-6
-.. _Implementation Guide for Referrals in NFSv4: http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
+.. _RFC3530 Section 6\: Filesystem Migration and Replication: https://tools.ietf.org/html/rfc3530#section-6
+.. _Implementation Guide for Referrals in NFSv4: https://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
 
 The fs_locations information can take the form of either an ip address and
 a path, or a DNS hostname and a path. The latter requires the NFS client to
diff --git a/Documentation/admin-guide/nfs/nfs-rdma.rst b/Documentation/admin-guide/nfs/nfs-rdma.rst
index ef0f367..f137485 100644
--- a/Documentation/admin-guide/nfs/nfs-rdma.rst
+++ b/Documentation/admin-guide/nfs/nfs-rdma.rst
@@ -65,7 +65,7 @@
   If the version is less than 1.1.2 or the command does not exist,
   you should install the latest version of nfs-utils.
 
-  Download the latest package from: http://www.kernel.org/pub/linux/utils/nfs
+  Download the latest package from: https://www.kernel.org/pub/linux/utils/nfs
 
   Uncompress the package and follow the installation instructions.
 
diff --git a/Documentation/admin-guide/nfs/nfsroot.rst b/Documentation/admin-guide/nfs/nfsroot.rst
index c677207..135218f 100644
--- a/Documentation/admin-guide/nfs/nfsroot.rst
+++ b/Documentation/admin-guide/nfs/nfsroot.rst
@@ -264,7 +264,7 @@
      	access to the floppy drive device, /dev/fd0
 
      	For more information on syslinux, including how to create bootdisks
-     	for prebuilt kernels, see http://syslinux.zytor.com/
+     	for prebuilt kernels, see https://syslinux.zytor.com/
 
 	.. note::
 		Previously it was possible to write a kernel directly to
@@ -292,7 +292,7 @@
 	  cdrecord dev=ATAPI:1,0,0 arch/x86/boot/image.iso
 
      	For more information on isolinux, including how to create bootdisks
-     	for prebuilt kernels, see http://syslinux.zytor.com/
+     	for prebuilt kernels, see https://syslinux.zytor.com/
 
 - Using LILO
 
@@ -346,7 +346,7 @@
 	see Documentation/admin-guide/serial-console.rst for more information.
 
 	For more information on isolinux, including how to create bootdisks
-	for prebuilt kernels, see http://syslinux.zytor.com/
+	for prebuilt kernels, see https://syslinux.zytor.com/
 
 
 
diff --git a/Documentation/admin-guide/nfs/pnfs-block-server.rst b/Documentation/admin-guide/nfs/pnfs-block-server.rst
index b00a2e7..20fe9f5 100644
--- a/Documentation/admin-guide/nfs/pnfs-block-server.rst
+++ b/Documentation/admin-guide/nfs/pnfs-block-server.rst
@@ -8,7 +8,7 @@
 to the clients to directly access the underlying block devices that are
 shared with the client.
 
-To use pNFS block layouts with with the Linux NFS server the exported file
+To use pNFS block layouts with the Linux NFS server the exported file
 system needs to support the pNFS block layouts (currently just XFS), and the
 file system must sit on shared storage (typically iSCSI) that is accessible
 to the clients in addition to the MDS.  As of now the file system needs to
diff --git a/Documentation/admin-guide/nfs/pnfs-scsi-server.rst b/Documentation/admin-guide/nfs/pnfs-scsi-server.rst
index d2f6ee5..b2eec22 100644
--- a/Documentation/admin-guide/nfs/pnfs-scsi-server.rst
+++ b/Documentation/admin-guide/nfs/pnfs-scsi-server.rst
@@ -9,7 +9,7 @@
 also hands out layouts to the clients so that they can directly access the
 underlying SCSI LUNs that are shared with the client.
 
-To use pNFS SCSI layouts with with the Linux NFS server, the exported file
+To use pNFS SCSI layouts with the Linux NFS server, the exported file
 system needs to support the pNFS SCSI layouts (currently just XFS), and the
 file system must sit on a SCSI LUN that is accessible to the clients in
 addition to the MDS.  As of now the file system needs to sit directly on the
diff --git a/Documentation/admin-guide/perf/arm-ccn.rst b/Documentation/admin-guide/perf/arm-ccn.rst
index 832b0c6..f62f7fe 100644
--- a/Documentation/admin-guide/perf/arm-ccn.rst
+++ b/Documentation/admin-guide/perf/arm-ccn.rst
@@ -27,7 +27,7 @@
 and "vc" (virtual channel ID).
 
 Crosspoint watchpoint-based events (special "event" value 0xfe)
-require "xp" and "vc" as as above plus "port" (device port index),
+require "xp" and "vc" as above plus "port" (device port index),
 "dir" (transmit/receive direction), comparator values ("cmp_l"
 and "cmp_h") and "mask", being index of the comparator mask.
 
diff --git a/Documentation/admin-guide/pm/cpufreq.rst b/Documentation/admin-guide/pm/cpufreq.rst
index 0c74a77..368e612 100644
--- a/Documentation/admin-guide/pm/cpufreq.rst
+++ b/Documentation/admin-guide/pm/cpufreq.rst
@@ -147,9 +147,9 @@
 
 The next major initialization step for a new policy object is to attach a
 scaling governor to it (to begin with, that is the default scaling governor
-determined by the kernel configuration, but it may be changed later
-via ``sysfs``).  First, a pointer to the new policy object is passed to the
-governor's ``->init()`` callback which is expected to initialize all of the
+determined by the kernel command line or configuration, but it may be changed
+later via ``sysfs``).  First, a pointer to the new policy object is passed to
+the governor's ``->init()`` callback which is expected to initialize all of the
 data structures necessary to handle the given policy and, possibly, to add
 a governor ``sysfs`` interface to it.  Next, the governor is started by
 invoking its ``->start()`` callback.
diff --git a/Documentation/admin-guide/pm/intel-speed-select.rst b/Documentation/admin-guide/pm/intel-speed-select.rst
index b2ca601..219f135 100644
--- a/Documentation/admin-guide/pm/intel-speed-select.rst
+++ b/Documentation/admin-guide/pm/intel-speed-select.rst
@@ -114,7 +114,7 @@
 Lock/Unlock status
 ~~~~~~~~~~~~~~~~~~
 
-Even if there are multiple performance profiles, it is possible that that they
+Even if there are multiple performance profiles, it is possible that they
 are locked. If they are locked, users cannot issue a command to change the
 performance state. It is possible that there is a BIOS setup to unlock or check
 with your system vendor.
@@ -883,7 +883,7 @@
         enable:success
 
 In this case, the option "-a" is optional. If set, it enables Intel(R) SST-TF
-feature and also sets the CPUs to high and and low priority using Intel Speed
+feature and also sets the CPUs to high and low priority using Intel Speed
 Select Technology Core Power (Intel(R) SST-CP) features. The CPU numbers passed
 with "-c" arguments are marked as high priority, including its siblings.
 
diff --git a/Documentation/admin-guide/pm/intel_pstate.rst b/Documentation/admin-guide/pm/intel_pstate.rst
index 39d80bc..5072e70 100644
--- a/Documentation/admin-guide/pm/intel_pstate.rst
+++ b/Documentation/admin-guide/pm/intel_pstate.rst
@@ -54,10 +54,13 @@
 Operation Modes
 ===============
 
-``intel_pstate`` can operate in three different modes: in the active mode with
-or without hardware-managed P-states support and in the passive mode.  Which of
-them will be in effect depends on what kernel command line options are used and
-on the capabilities of the processor.
+``intel_pstate`` can operate in two different modes, active or passive.  In the
+active mode, it uses its own internal performance scaling governor algorithm or
+allows the hardware to do performance scaling by itself, while in the passive
+mode it responds to requests made by a generic ``CPUFreq`` governor implementing
+a certain performance scaling algorithm.  Which of them will be in effect
+depends on what kernel command line options are used and on the capabilities of
+the processor.
 
 Active Mode
 -----------
@@ -120,7 +123,9 @@
 internal P-state selection logic is expected to focus entirely on performance.
 
 This will override the EPP/EPB setting coming from the ``sysfs`` interface
-(see `Energy vs Performance Hints`_ below).
+(see `Energy vs Performance Hints`_ below).  Moreover, any attempts to change
+the EPP/EPB to a value different from 0 ("performance") via ``sysfs`` in this
+configuration will be rejected.
 
 Also, in this configuration the range of P-states available to the processor's
 internal P-state selection logic is always restricted to the upper boundary
@@ -194,10 +199,11 @@
 hardware-managed P-states (HWP) support.  It is always used if the
 ``intel_pstate=passive`` argument is passed to the kernel in the command line
 regardless of whether or not the given processor supports HWP.  [Note that the
-``intel_pstate=no_hwp`` setting implies ``intel_pstate=passive`` if it is used
-without ``intel_pstate=active``.]  Like in the active mode without HWP support,
-in this mode ``intel_pstate`` may refuse to work with processors that are not
-recognized by it.
+``intel_pstate=no_hwp`` setting causes the driver to start in the passive mode
+if it is not combined with ``intel_pstate=active``.]  Like in the active mode
+without HWP support, in this mode ``intel_pstate`` may refuse to work with
+processors that are not recognized by it if HWP is prevented from being enabled
+through the kernel command line.
 
 If the driver works in this mode, the ``scaling_driver`` policy attribute in
 ``sysfs`` for all ``CPUFreq`` policies contains the string "intel_cpufreq".
@@ -318,10 +324,9 @@
 
 For this reason, there is a list of supported processors in ``intel_pstate`` and
 the driver initialization will fail if the detected processor is not in that
-list, unless it supports the `HWP feature <Active Mode_>`_.  [The interface to
-obtain all of the information listed above is the same for all of the processors
-supporting the HWP feature, which is why they all are supported by
-``intel_pstate``.]
+list, unless it supports the HWP feature.  [The interface to obtain all of the
+information listed above is the same for all of the processors supporting the
+HWP feature, which is why ``intel_pstate`` works with all of them.]
 
 
 User Space Interface in ``sysfs``
@@ -425,11 +430,16 @@
 	as well as the per-policy ones) are then reset to their default
 	values, possibly depending on the target operation mode.]
 
-	That only is supported in some configurations, though (for example, if
-	the `HWP feature is enabled in the processor <Active Mode With HWP_>`_,
-	the operation mode of the driver cannot be changed), and if it is not
-	supported in the current configuration, writes to this attribute will
-	fail with an appropriate error.
+``energy_efficiency``
+	This attribute is only present on platforms with CPUs matching the Kaby
+	Lake or Coffee Lake desktop CPU model. By default, energy-efficiency
+	optimizations are disabled on these CPU models if HWP is enabled.
+	Enabling energy-efficiency optimizations may limit maximum operating
+	frequency with or without the HWP feature.  With HWP enabled, the
+	optimizations are done only in the turbo frequency range.  Without it,
+	they are done in the entire available frequency range.  Setting this
+	attribute to "1" enables the energy-efficiency optimizations and setting
+	to "0" disables them.
 
 Interpretation of Policy Attributes
 -----------------------------------
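
A minimal usage sketch for the new ``energy_efficiency`` attribute described
above, assuming it is exposed under the driver's global sysfs directory on an
affected Kaby Lake or Coffee Lake desktop system:

  # enable the energy-efficiency optimizations
  echo 1 > /sys/devices/system/cpu/intel_pstate/energy_efficiency
  # read the current setting back
  cat /sys/devices/system/cpu/intel_pstate/energy_efficiency
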
@@ -473,8 +483,8 @@
 	policy for the time interval between the last two invocations of the
 	driver's utilization update callback by the CPU scheduler for that CPU.
 
-One more policy attribute is present if the `HWP feature is enabled in the
-processor <Active Mode With HWP_>`_:
+One more policy attribute is present if the HWP feature is enabled in the
+processor:
 
 ``base_frequency``
 	Shows the base frequency of the CPU. Any frequency above this will be
@@ -515,11 +525,11 @@
 
  3. The global and per-policy limits can be set independently.
 
-If the `HWP feature is enabled in the processor <Active Mode With HWP_>`_, the
-resulting effective values are written into its registers whenever the limits
-change in order to request its internal P-state selection logic to always set
-P-states within these limits.  Otherwise, the limits are taken into account by
-scaling governors (in the `passive mode <Passive Mode_>`_) and by the driver
+In the `active mode with the HWP feature enabled <Active Mode With HWP_>`_, the
+resulting effective values are written into hardware registers whenever the
+limits change in order to request its internal P-state selection logic to always
+set P-states within these limits.  Otherwise, the limits are taken into account
+by scaling governors (in the `passive mode <Passive Mode_>`_) and by the driver
 every time before setting a new P-state for a CPU.
 
 Additionally, if the ``intel_pstate=per_cpu_perf_limits`` command line argument
@@ -530,12 +540,11 @@
 Energy vs Performance Hints
 ---------------------------
 
-If ``intel_pstate`` works in the `active mode with the HWP feature enabled
-<Active Mode With HWP_>`_ in the processor, additional attributes are present
-in every ``CPUFreq`` policy directory in ``sysfs``.  They are intended to allow
-user space to help ``intel_pstate`` to adjust the processor's internal P-state
-selection logic by focusing it on performance or on energy-efficiency, or
-somewhere between the two extremes:
+If the hardware-managed P-states (HWP) feature is enabled in the processor,
+additional attributes, intended to allow user space to help ``intel_pstate``
+to adjust the processor's internal P-state selection logic by focusing it on
+performance or on energy-efficiency, or somewhere between the two extremes,
+are present in every ``CPUFreq`` policy directory in ``sysfs``.  They are:
 
 ``energy_performance_preference``
 	Current value of the energy vs performance hint for the given policy
@@ -554,7 +563,11 @@
 Strings written to the ``energy_performance_preference`` attribute are
 internally translated to integer values written to the processor's
 Energy-Performance Preference (EPP) knob (if supported) or its
-Energy-Performance Bias (EPB) knob.
+Energy-Performance Bias (EPB) knob. It is also possible to write a positive
+integer value between 0 and 255, if the EPP feature is present. If the EPP
+feature is not present, writing an integer value to this attribute is not
+supported. In this case, the user can use the
+"/sys/devices/system/cpu/cpu*/power/energy_perf_bias" interface.
 
 [Note that tasks may by migrated from one CPU to another by the scheduler's
 load-balancing algorithm and if different energy vs performance hints are
@@ -635,12 +648,14 @@
 	Do not register ``intel_pstate`` as the scaling driver even if the
 	processor is supported by it.
 
+``active``
+	Register ``intel_pstate`` in the `active mode <Active Mode_>`_ to start
+	with.
+
 ``passive``
 	Register ``intel_pstate`` in the `passive mode <Passive Mode_>`_ to
 	start with.
 
-	This option implies the ``no_hwp`` one described below.
-
 ``force``
 	Register ``intel_pstate`` as the scaling driver instead of
 	``acpi-cpufreq`` even if the latter is preferred on the given system.
@@ -655,13 +670,12 @@
 	driver is used instead of ``acpi-cpufreq``.
 
 ``no_hwp``
-	Do not enable the `hardware-managed P-states (HWP) feature
-	<Active Mode With HWP_>`_ even if it is supported by the processor.
+	Do not enable the hardware-managed P-states (HWP) feature even if it is
+	supported by the processor.
 
 ``hwp_only``
 	Register ``intel_pstate`` as the scaling driver only if the
-	`hardware-managed P-states (HWP) feature <Active Mode With HWP_>`_ is
-	supported by the processor.
+	hardware-managed P-states (HWP) feature is supported by the processor.
 
 ``support_acpi_ppc``
 	Take ACPI ``_PPC`` performance limits into account.
@@ -708,7 +722,7 @@
 
 The ``ftrace`` interface can be used for low-level diagnostics of
 ``intel_pstate``.  For example, to check how often the function to set a
-P-state is called, the ``ftrace`` filter can be set to to
+P-state is called, the ``ftrace`` filter can be set to
 :c:func:`intel_pstate_set_pstate`::
 
  # cd /sys/kernel/debug/tracing/
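
Relating to the ``energy_performance_preference`` changes above, a minimal
sketch of inspecting and setting the hint for a single policy (``policy0`` is
only an example, and the attributes exist only when EPP or EPB is available):

  cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
  echo balance_power > /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
  # with EPP present, a raw value between 0 and 255 is also accepted:
  echo 128 > /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
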
diff --git a/Documentation/admin-guide/security-bugs.rst b/Documentation/admin-guide/security-bugs.rst
index dcd6c93..c32eb78 100644
--- a/Documentation/admin-guide/security-bugs.rst
+++ b/Documentation/admin-guide/security-bugs.rst
@@ -21,11 +21,18 @@
 
 As it is with any bug, the more information provided the easier it
 will be to diagnose and fix.  Please review the procedure outlined in
-admin-guide/reporting-bugs.rst if you are unclear about what
+:doc:`reporting-bugs` if you are unclear about what
 information is helpful.  Any exploit code is very helpful and will not
 be released without consent from the reporter unless it has already been
 made public.
 
+Please send plain text emails without attachments where possible.
+It is much harder to have a context-quoted discussion about a complex
+issue if all the details are hidden away in attachments.  Think of it like a
+:doc:`regular patch submission <../process/submitting-patches>`
+(even if you don't have a patch yet): describe the problem and impact, list
+reproduction steps, and follow it with a proposed fix, all in plain text.
+
 Disclosure and embargoed information
 ------------------------------------
 
diff --git a/Documentation/admin-guide/spkguide.txt b/Documentation/admin-guide/spkguide.txt
new file mode 100644
index 0000000..3782f6a
--- /dev/null
+++ b/Documentation/admin-guide/spkguide.txt
@@ -0,0 +1,1575 @@
+
+The Speakup User's Guide
+For Speakup 3.1.2 and Later
+By Gene Collins
+Updated by others
+Last modified on Mon Sep 27 14:26:31 2010
+Document version 1.3
+
+Copyright (c) 2005  Gene Collins
+Copyright (c) 2008  Samuel Thibault
+Copyright (c) 2009, 2010  the Speakup Team
+
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no
+Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
+copy of the license is included in the section entitled "GNU Free
+Documentation License".
+
+Preface
+
+The purpose of this document is to familiarize users with the user
+interface to Speakup, a Linux Screen Reader.  If you need instructions
+for installing or obtaining Speakup, visit the web site at
+http://linux-speakup.org/.  Speakup is a set of patches to the standard
+Linux kernel source tree.  It can be built as a series of modules, or as
+a part of a monolithic kernel.  These details are beyond the scope of
+this manual, but the user may need to be aware of the module
+capabilities, depending on how your system administrator has installed
+Speakup.  If Speakup is built as a part of a monolithic kernel, and the
+user is using a hardware synthesizer, then Speakup will be able to
+provide speech access from the time the kernel is loaded, until the time
+the system is shutdown.  This means that if you have obtained Linux
+installation media for a distribution which includes Speakup as a part
+of its kernel, you will be able, as a blind person, to install Linux
+with speech access unaided by a sighted person.  Again, these details
+are beyond the scope of this manual, but the user should be aware of
+them.  See the web site mentioned above for further details.
+
+1.  Starting Speakup
+
+If your system administrator has installed Speakup to work with your
+specific synthesizer by default, then all you need to do to use Speakup
+is to boot your system, and Speakup should come up talking.  This
+assumes of course  that your synthesizer is a supported hardware
+synthesizer, and that it is either installed in or connected to your
+system, and is if necessary powered on.
+
+It is possible, however, that Speakup may have been compiled into the
+kernel with no default synthesizer.  It is even possible that your
+kernel has been compiled with support for some of the supported
+synthesizers and not others.  If you find that this is the case, and
+your synthesizer is supported but not available, complain to the person
+who compiled and installed your kernel.  Or better yet, go to the web
+site, and learn how to patch Speakup into your own kernel source, and
+build and install your own kernel.
+
+If your kernel has been compiled with Speakup, and has no default
+synthesizer set, or you would like to use a different synthesizer than
+the default one, then you may issue the following command at the boot
+prompt of your boot loader.
+
+linux speakup.synth=ltlk
+
+This command would tell Speakup to look for and use a LiteTalk or
+DoubleTalk LT at boot up.  You may replace the ltlk synthesizer keyword
+with the keyword for whatever synthesizer you wish to use.  The
+speakup.synth parameter will accept the following keywords, provided
+that support for the related synthesizers has been built into the
+kernel.
+
+acntsa -- Accent SA
+acntpc -- Accent PC
+apollo -- Apollo
+audptr -- Audapter
+bns -- Braille 'n Speak
+dectlk -- DecTalk Express (old and new, db9 serial only)
+decext -- DecTalk (old) External
+dtlk -- DoubleTalk PC
+keypc -- Keynote Gold PC
+ltlk -- DoubleTalk LT, LiteTalk, or external Tripletalk (db9 serial only)
+spkout -- Speak Out
+txprt -- Transport
+dummy -- Plain text terminal
+
+Note: Speakup does * NOT * support usb connections!  Speakup also does *
+NOT * support the internal Tripletalk!
+
+Speakup does support two other synthesizers, but because they work in
+conjunction with other software, they must be loaded as modules after
+their related software is loaded, and so are not available at boot up.
+These are as follows:
+
+decpc -- DecTalk PC (not available at boot up)
+soft -- One of several software synthesizers (not available at boot up)
+
+See the sections on loading modules and software synthesizers later in
+this manual for further details.  It should be noted here that the
+speakup.synth boot parameter will have no effect if Speakup has been
+compiled as modules.  In order for Speakup modules to be loaded during
+the boot process, such action must be configured by your system
+administrator.  This will mean that you will hear some, but not all,  of
+the bootup messages.
+
+2.  Basic operation
+
+Once you have booted the system, and if necessary, have supplied the
+proper bootup parameter for your synthesizer, Speakup will begin
+talking as soon as the kernel is loaded.  In fact, it will talk a lot!
+It will speak all the boot up messages that the kernel prints on the
+screen during the boot process.  This is because Speakup is not a
+separate screen reader, but is actually built into the operating
+system.  Since almost all console applications must print text on the
+screen using the kernel, and must get their keyboard input through the
+kernel, they are automatically handled properly by Speakup.  There are a
+few exceptions, but we'll come to those later.
+
+Note:  In this guide I will refer to the numeric keypad as the keypad.
+This is done because the speakupmap.map file referred to later in this
+manual uses the term keypad instead of numeric keypad.  Also I'm lazy
+and would rather only type one word.  So keypad it is.  Got it?  Good.
+
+Most of the Speakup review keys are located on the keypad at the far
+right of the keyboard.  The numlock key should be off, in order for these
+to work.  If you toggle the numlock on, the keypad will produce numbers,
+which is exactly what you want for spreadsheets and such.  For the
+purposes of this guide, you should have the numlock turned off, which is
+its default state at bootup.
+
+You probably won't want to listen to all the bootup messages every time
+you start your system, though it's a good idea to listen to them at
+least once, just so you'll know what kind of information is available to
+you during the boot process.  You can always review these messages after
+bootup with the command:
+
+dmesg | more
+
+In order to speed the boot process, and to silence the speaking of the
+bootup messages, just press the keypad enter key.  This key is located
+in the bottom right corner of the keypad.  Speakup will shut up and stay
+that way, until you press another key.
+
+You can check to see if the boot process has completed by pressing the 8
+key on the keypad, which reads the current line.  This also has the
+effect of starting Speakup talking again, so you can press keypad enter
+to silence it again if the boot process has not completed.
+
+When the boot process is complete, you will arrive at a "login" prompt.
+At this point, you'll need to type in your user id and password, as
+provided by your system administrator.  You will hear Speakup speak the
+letters of your user id as you type it, but not the password.  This is
+because the password is not displayed on the screen for security
+reasons.  This has nothing to do with Speakup, it's a Linux security
+feature.
+
+Once you've logged in, you can run any Linux command or program which is
+allowed by your user id.  Normal users will not be able to run programs
+which require root privileges.
+
+When you are running a program or command, Speakup will automatically
+speak new text as it arrives on the screen.  You can at any time silence
+the speech with keypad enter, or use any of the Speakup review keys.
+
+Here are some basic Speakup review keys, and a short description of what
+they do.
+
+keypad 1 -- read previous character
+keypad 2 -- read current character (pressing keypad 2 twice rapidly will speak
+	the current character phonetically)
+keypad 3 -- read next character
+keypad 4 -- read previous word
+keypad 5 -- read current word (press twice rapidly to spell the current word)
+keypad 6 -- read next word
+keypad 7 -- read previous line
+keypad 8 -- read current line (press twice rapidly to hear how much the
+	text on the current line is indented)
+keypad 9 -- read next line
+keypad period -- speak current cursor position and announce current
+	virtual console
+
+It's also worth noting that the insert key on the keypad is mapped
+as the speakup key.  Instead of pressing and releasing this key, as you
+do under DOS or Windows, you hold it like a shift key, and press other
+keys in combination with it.  For example, repeatedly holding keypad
+insert, from now on called speakup, and keypad enter will toggle the
+speaking of new text on the screen on and off.  This is not the same as
+just pressing keypad enter by itself, which just silences the speech
+until you hit another key.  When you hit speakup plus keypad enter,
+Speakup will say, "You turned me off.", or "Hey, that's better."  When
+Speakup is turned off, no new text on the screen will be spoken.  You
+can still use the reading controls to review the screen however.
+
+3.  Using the Speakup Help System
+
+In order to enter the Speakup help system, press and hold the speakup
+key (remember that this is the keypad insert key), and press the f1 key.
+You will hear the message:
+
+"Press space to leave help, cursor up or down to scroll, or a letter to
+go to commands in list."
+
+When you press the spacebar to leave the help system, you will hear:
+
+"Leaving help."
+
+While you are in the Speakup help system, you can scroll up or down
+through the list of available commands using the cursor keys.  The list
+of commands is arranged in alphabetical order.  If you wish to jump to
+commands in a specific part of the alphabet, you may press the letter of
+the alphabet you wish to jump to.
+
+You can also just explore by typing keyboard keys.  Pressing keys will
+cause Speakup to speak the command associated with that key.  For
+example, if you press the keypad 8 key, you will hear:
+
+"Keypad 8 is line, say current."
+
+You'll notice that some commands do not have keys assigned to them.
+This is because they are very infrequently used commands, and are also
+accessible through the sys system.  We'll discuss the sys system later
+in this manual.
+
+You'll also notice that some commands have two keys assigned to them.
+This is because Speakup has a built in set of alternative key bindings
+for laptop users.  The alternate speakup key is the caps lock key.  You
+can press and hold the caps lock key, while pressing an alternate
+speakup command key to activate the command.  On most laptops, the
+numeric keypad is defined as the keys in the j k l area of the keyboard.
+
+There is usually a function key which turns this keypad function on and
+off, and some other key which controls the numlock state.  Toggling the
+keypad functionality on and off can become a royal pain.  So, Speakup
+gives you a simple way to get at an alternative set of key mappings for
+your laptop.  These are also available by default on desktop systems,
+because Speakup does not know whether it is running on a desktop or
+laptop.  So you may choose which set of Speakup keys to use.  Some
+system administrators may have chosen to compile Speakup for a desktop
+system without this set of alternate key bindings, but these details are
+beyond the scope of this manual.  To use the caps lock for its normal
+purpose, hold the shift key while toggling the caps lock on and off.  We
+should note here, that holding the caps lock key and pressing the z key
+will toggle the alternate j k l keypad on and off.
+
+4.  Keys and Their Assigned Commands
+
+In this section, we'll go through a list of all the speakup keys and
+commands.  You can also get a list of commands and assigned keys from
+the help system.
+
+The following list was taken from the speakupmap.map file.  Key
+assignments are on the left of the equal sign, and the associated
+Speakup commands are on the right.  The designation "spk" means to press
+and hold the speakup key, a.k.a. keypad insert, a.k.a. caps lock, while
+pressing the other specified key.
+
+spk key_f9 = punc_level_dec
+spk key_f10 = punc_level_inc
+spk key_f11 = reading_punc_dec
+spk key_f12 = reading_punc_inc
+spk key_1 = vol_dec
+spk key_2 =  vol_inc
+spk key_3 = pitch_dec
+spk key_4 = pitch_inc
+spk key_5 = rate_dec
+spk key_6 = rate_inc
+key_kpasterisk = toggle_cursoring
+spk key_kpasterisk = speakup_goto
+spk key_f1 = speakup_help
+spk key_f2 = set_win
+spk key_f3 = clear_win
+spk key_f4 = enable_win
+spk key_f5 = edit_some
+spk key_f6 = edit_most
+spk key_f7 = edit_delim
+spk key_f8 = edit_repeat
+shift spk key_f9 = edit_exnum
+ key_kp7 = say_prev_line
+spk key_kp7 = left_edge
+ key_kp8 = say_line
+double  key_kp8 = say_line_indent
+spk key_kp8 = say_from_top
+ key_kp9 = say_next_line
+spk  key_kp9 = top_edge
+ key_kpminus = speakup_parked
+spk key_kpminus = say_char_num
+ key_kp4 = say_prev_word
+spk key_kp4 = say_from_left
+ key_kp5 = say_word
+double key_kp5 = spell_word
+spk key_kp5 = spell_phonetic
+ key_kp6 = say_next_word
+spk key_kp6 = say_to_right
+ key_kpplus = say_screen
+spk key_kpplus = say_win
+ key_kp1 = say_prev_char
+spk key_kp1 = right_edge
+ key_kp2 = say_char
+spk key_kp2 = say_to_bottom
+double key_kp2 = say_phonetic_char
+ key_kp3 = say_next_char
+spk  key_kp3 = bottom_edge
+ key_kp0 = spk_key
+ key_kpdot = say_position
+spk key_kpdot = say_attributes
+key_kpenter = speakup_quiet
+spk key_kpenter = speakup_off
+key_sysrq = speech_kill
+ key_kpslash = speakup_cut
+spk key_kpslash = speakup_paste
+spk key_pageup = say_first_char
+spk key_pagedown = say_last_char
+key_capslock = spk_key
+ spk key_z = spk_lock
+key_leftmeta = spk_key
+ctrl spk key_0 = speakup_goto
+spk key_u = say_prev_line
+spk key_i = say_line
+double spk key_i = say_line_indent
+spk key_o = say_next_line
+spk key_minus = speakup_parked
+shift spk key_minus = say_char_num
+spk key_j = say_prev_word
+spk key_k = say_word
+double spk key_k = spell_word
+spk key_l = say_next_word
+spk key_m = say_prev_char
+spk key_comma = say_char
+double spk key_comma = say_phonetic_char
+spk key_dot = say_next_char
+spk key_n = say_position
+ ctrl spk key_m = left_edge
+ ctrl spk key_y = top_edge
+ ctrl spk key_dot = right_edge
+ctrl spk key_p = bottom_edge
+spk key_apostrophe = say_screen
+spk key_h = say_from_left
+spk key_y = say_from_top
+spk key_semicolon = say_to_right
+spk key_p = say_to_bottom
+spk key_slash = say_attributes
+ spk key_enter = speakup_quiet
+ ctrl  spk key_enter = speakup_off
+ spk key_9 = speakup_cut
+spk key_8 = speakup_paste
+shift spk key_m = say_first_char
+ ctrl spk key_semicolon = say_last_char
+
+5.  The Speakup Sys System
+
+The Speakup screen reader also creates a speakup subdirectory as a part
+of the sys system.
+
+As a convenience, run as root
+
+ln -s /sys/accessibility/speakup /speakup
+
+to directly access speakup parameters from /speakup.
+You can see these entries by typing the command:
+
+ls -1 /speakup/*
+
+If you issue the above ls command, you will get back something like
+this:
+
+/speakup/attrib_bleep
+/speakup/bell_pos
+/speakup/bleep_time
+/speakup/bleeps
+/speakup/cursor_time
+/speakup/delimiters
+/speakup/ex_num
+/speakup/key_echo
+/speakup/keymap
+/speakup/no_interrupt
+/speakup/punc_all
+/speakup/punc_level
+/speakup/punc_most
+/speakup/punc_some
+/speakup/reading_punc
+/speakup/repeats
+/speakup/say_control
+/speakup/say_word_ctl
+/speakup/silent
+/speakup/spell_delay
+/speakup/synth
+/speakup/synth_direct
+/speakup/version
+
+/speakup/i18n:
+announcements
+characters
+chartab
+colors
+ctl_keys
+formatted
+function_names
+key_names
+states
+
+/speakup/soft:
+caps_start
+caps_stop
+delay_time
+direct
+freq
+full_time
+jiffy_delta
+pitch
+inflection
+punct
+rate
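A minimal sketch of reading the new lap-mode sensor from user space, assuming
the attribute is exposed at the usual thinkpad_acpi platform device location:

  cat /sys/devices/platform/thinkpad_acpi/dytc_lapmode
  # typically 1 for lap mode, 0 for desk mode
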
+tone
+trigger_time
+voice
+vol
+
+Notice the two subdirectories of /speakup: /speakup/i18n and
+/speakup/soft.
+The i18n subdirectory is described in a later section.
+The files under /speakup/soft represent settings that are specific to the
+driver for the software synthesizer.  If you use the LiteTalk, your
+synthesizer-specific settings would be found in /speakup/ltlk.  In other words,
+a subdirectory named /speakup/KWD is created to hold parameters specific
+to the device whose keyword is KWD.
+These parameters include volume, rate, pitch, and others.
+
+In addition to using the Speakup hot keys to change such things as
+volume, pitch, and rate, you can also echo values to the appropriate
+entry in the /speakup directory.  This is very useful, since it
+lets you control Speakup parameters from within a script.  How you
+would write such scripts is somewhat beyond the scope of this manual,
+but I will include a couple of simple examples here to give you a
+general idea of what such scripts can do.
+
+Suppose for example, that you wanted to control both the punctuation
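A minimal usage sketch for the charge thresholds described above (BAT0 is
only an example battery name; here charging starts below 75% and stops at 80%):

  echo 75 > /sys/class/power_supply/BAT0/charge_control_start_threshold
  echo 80 > /sys/class/power_supply/BAT0/charge_control_end_threshold
  grep . /sys/class/power_supply/BAT0/charge_control_*_threshold
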
+level and the reading punctuation level at the same time.  For
+simplicity, we'll call them punc0, punc1, punc2, and punc3.  The scripts
+might look something like this:
+
+#!/bin/bash
+# punc0
+# set punc and reading punc levels to 0
+echo 0 >/speakup/punc_level
+echo 0 >/speakup/reading_punc
+echo Punctuation level set to 0.
+
+#!/bin/bash
+# punc1
+# set punc and reading punc levels to 1
+echo 1 >/speakup/punc_level
+echo 1 >/speakup/reading_punc
+echo Punctuation level set to 1.
+
+#!/bin/bash
+# punc2
+# set punc and reading punc levels to 2
+echo 2 >/speakup/punc_level
+echo 2 >/speakup/reading_punc
+echo Punctuation level set to 2.
+
+#!/bin/bash
+# punc3
+# set punc and reading punc levels to 3
+echo 3 >/speakup/punc_level
+echo 3 >/speakup/reading_punc
+echo Punctuation level set to 3.
+
+If you were to store these four small scripts in a directory in your
+path, perhaps /usr/local/bin, and set the permissions to 755 with the
+chmod command, then you could change the default reading punc and
+punctuation levels at the same time by issuing just one command.  For
+example, if you were to execute the punc3 command at your shell prompt,
+then the reading punc and punc level would both get set to 3.
+
+I should note that the above scripts were written to work with bash, but
+regardless of which shell you use, you should be able to do something
+similar.
+
+The Speakup sys system also has another interesting use.  You can echo
+Speakup parameters into the sys system in a script during system
+startup, and speakup will return to your preferred parameters every time
+the system is rebooted.
+
+Most of the Speakup sys parameters can be manipulated by a regular user
+on the system.  However, there are a few parameters that are dangerous
+enough that they should only be manipulated by the root user on your
+system.  There are even some parameters that are read only, and cannot
+be written to at all.  For example, the version entry in the Speakup
+sys system is read only.  This is because there is no reason for a user
+to tamper with the version number which is reported by Speakup.  Doing
+an ls -l on /speakup/version will return this:
+
+-r--r--r--    1 root     root            0 Mar 21 13:46 /speakup/version
+
+As you can see, the version entry in the Speakup sys system is read
+only, is owned by root, and belongs to the root group.  Doing a cat of
+/speakup/version will display the Speakup version number, like
+this:
+
+cat /speakup/version
+Speakup v-2.00 CVS: Thu Oct 21 10:38:21 EDT 2004
+synth dtlk version 1.1
+
+The display shows the Speakup version number, along with the version
+number of the driver for the current synthesizer.
+
+Looking at entries in the Speakup sys system can be useful in many
+ways.  For example, you might wish to know what level your volume is set
+at.  You could type:
+
+cat /speakup/KWD/vol
+# Replace KWD with the keyword for your synthesizer, E.G., ltlk for LiteTalk.
+5
+
+The number five which comes back is the level at which the synthesizer
+volume is set.
+
+All the entries in the Speakup sys system are readable, some are
+writable by root only, and some are writable by everyone.  Unless you
+know what you are doing, you should probably leave the ones that are
+writable by root only alone.  Most of the names are self explanatory.
+Vol for controlling volume, pitch for pitch, inflection for pitch range, rate
+for controlling speaking rate, etc.  If you find one you aren't sure about, you
+can post a query on the Speakup list.
+
+6.  Changing Synthesizers
+
+It is possible to change to a different synthesizer while speakup is
+running.  In other words, it is not necessary to reboot the system
+in order to use a different synthesizer.  You can simply echo the
+synthesizer keyword to the /speakup/synth sys entry.
+Depending on your situation, you may wish to echo none to the synth
+sys entry, to disable speech while one synthesizer is disconnected and
+a second one is connected in its place.  Then echo the keyword for the
+new synthesizer into the synth sys entry in order to start speech
+with the newly connected synthesizer.  See the list of synthesizer
+keywords in section 1 to find the keyword which matches your synth.
+
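For instance, a minimal sketch of swapping synthesizers on a running system,
using keywords from the list in section 1 (ltlk and dectlk are only examples):

echo none >/speakup/synth
# disconnect the old synthesizer, connect and power on the new one, then:
echo dectlk >/speakup/synth
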
+7.  Loading modules
+
+As mentioned earlier, Speakup can either be completely compiled into the
+kernel, with the exception of the help module, or it can be compiled as
+a series of modules.   When compiled as modules, Speakup will only be
+able to speak some of the bootup messages if your system administrator
+has configured the system to load the modules at boot time. The modules
+can be loaded after the file systems have been checked and mounted, or
+from an initrd.  There is a third possibility.  Speakup can be compiled
+with some components built into the kernel, and others as modules.  As
+we'll see in the next section, this is particularly useful when you are
+working with software synthesizers.
+
+If Speakup is completely compiled as modules, then you must use the
+modprobe command to load Speakup.  You do this by loading the module for
+the synthesizer driver you wish to use.  The driver modules are all
+named speakup_<keyword>, where <keyword> is the keyword for the
+synthesizer you want.  So, in order to load the driver for the DecTalk
+Express, you would type the following command:
+
+modprobe speakup_dectlk
+
+Issuing this command would load the DecTalk Express driver and all other
+related Speakup modules necessary to get Speakup up and running.
+
+To completely unload Speakup, again presuming that it is entirely built
+as modules, you would give the command:
+
+modprobe -r speakup_dectlk
+
+The above command assumes you were running a DecTalk Express.  If you
+were using a different synth, then you would substitute its keyword in
+place of dectlk.
+
+If you have multiple drivers loaded, you need to unload all of them, in
+order to completely unload Speakup.
+For example, if you have loaded both the dectlk and ltlk drivers, use the
+command:
+modprobe -r speakup_dectlk speakup_ltlk
+
+You cannot unload the driver for software synthesizers when a user-space
+daemon is using /dev/softsynth.  First, kill the daemon.  Next, remove
+the driver with the command:
+modprobe -r speakup_soft
+
+Now, suppose we have a situation where the main Speakup component
+is built into the kernel, and some or all of the drivers are built as
+modules.  Since the main part of Speakup is compiled into the kernel, a
+partial Speakup sys system has been created which we can take advantage
+of by simply echoing the synthesizer keyword into the
+/speakup/synth sys entry.  This will cause the kernel to
+automatically load the appropriate driver module, and start Speakup
+talking.  To switch to another synth, just echo a new keyword to the
+synth sys entry.  For example, to load the DoubleTalk LT driver,
+you would type:
+
+echo ltlk >/speakup/synth
+
+You can use the modprobe -r command to unload driver modules, regardless
+of whether the main part of Speakup has been built into the kernel or
+not.
+
+8.  Using Software Synthesizers
+
+Using a software synthesizer requires that some other software be
+installed and running on your system.  For this reason, software
+synthesizers are not available for use at bootup, or during a system
+installation process.
+There are two freely-available solutions for software speech: Espeakup and
+Speech Dispatcher.
+These are described in subsections 8.1 and 8.2, respectively.
+
+During the rest of this section, we assume that speakup_soft is either
+built in to your kernel, or loaded as a module.
+
+If your system does not have udev installed, before you can use a
+software synthesizer, you must have created the /dev/softsynth device.
+If you have not already done so, issue the following commands as root:
+
+cd /dev
+mknod softsynth c 10 26
+
+While we are at it, we might just as well create the /dev/synth device,
+which can be used to let user space programs send information to your
+synthesizer.  To create /dev/synth, change to the /dev directory, and
+issue the following command as root:
+
+mknod synth c 10 25
+
+
+8.1. Espeakup
+
+Espeakup is a connector between Speakup and the eSpeak software synthesizer.
+Espeakup may already be available as a package for your distribution
+of Linux.  If it is not packaged, you need to install it manually.
+You can find it in the contrib/ subdirectory of the Speakup sources.
+The filename is espeakup-$VERSION.tar.bz2, where $VERSION
+depends on the current release of Espeakup.  The Speakup 3.1.2 source
+ships with version 0.71 of Espeakup.
+The README file included with the Espeakup sources describes the process
+of manual installation.
+
+Assuming that Espeakup is installed, either by the user or by the distributor,
+follow these steps to use it.
+
+Tell Speakup to use the "soft driver:
+echo soft > /speakup/synth
+
+Finally, start the espeakup program.  There are two ways to do it.
+Both require root privileges.
+
+If Espeakup was installed as a package for your Linux distribution,
+you probably have a distribution-specific script that controls the operation
+of the daemon.  Look for a file named espeakup under /etc/init.d or
+/etc/rc.d.  Execute the following command with root privileges:
+/etc/init.d/espeakup start
+Replace init.d with rc.d, if your distribution uses scripts located under
+/etc/rc.d.
+Your distribution will also have a procedure for starting daemons at
+boot-time, so it is possible to have software speech as soon as user-space
+daemons are started by the bootup scripts.
+These procedures are not described in this document.
+
+If you built Espeakup manually, the "make install" step placed the binary
+under /usr/bin.
+Run the following command as root:
+/usr/bin/espeakup
+Espeakup should start speaking.
+
+8.2. Speech Dispatcher
+
+For this option, you must have a package called
+Speech Dispatcher running on your system, and it must be configured to
+work with one of its supported software synthesizers.
+
+Two open source synthesizers you might use are Flite and Festival.  You
+might also choose to purchase the Software DecTalk from Fonix Sales Inc.
+If you run a google search for Fonix, you'll find their web site.
+
+You can obtain a copy of Speech Dispatcher from free(b)soft at
+http://www.freebsoft.org/.  Follow the installation instructions that
+come with Speech Dispatcher in order to install and configure Speech
+Dispatcher.  You can check out the web site for your Linux distribution
+in order to get a copy of either Flite or Festival.  Your Linux
+distribution may also have a precompiled Speech Dispatcher package.
+
+Once you've installed, configured, and tested Speech Dispatcher with your
+chosen software synthesizer, you still need one more piece of software
+in order to make things work.  You need a package called speechd-up.
+You get it from the free(b)soft web site mentioned above.  After you've
+compiled and installed speechd-up, you are almost ready to begin using
+your software synthesizer.
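+
+Before involving Speakup at all, you may wish to confirm that Speech
+Dispatcher itself is talking.  One quick check is a sketch using the
+spd-say client that ships with Speech Dispatcher:
+
+spd-say "Speech Dispatcher is working"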
+
+Now you can begin using your software synthesizer.  In order to do so,
+echo the soft keyword to the synth sys entry like this:
+
+echo soft >/speakup/synth
+
+Next run the speechd_up command like this:
+
+speechd_up &
+
+Your synth should now start talking, and you should be able to adjust
+the pitch, rate, etc.
+
+9.  Using The DecTalk PC Card
+
+The DecTalk PC card is an ISA card that is inserted into one of the ISA
+slots in your computer.  It requires that the DecTalk PC software be
+installed on your computer, and that the software be loaded onto the
+DecTalk PC card before it can be used.
+
+You can get the dec_pc.tgz file from the linux-speakup.org site.  The
+dec_pc.tgz file is in the ~ftp/pub/linux/speakup directory.
+
+After you have downloaded the dec_pc.tgz file, untar it in your home
+directory, and read the Readme file in the newly created dec_pc
+directory.
+
+The easiest way to get the software working is to copy the entire dec_pc
+directory into /usr/local/lib.  To do this, su to root in your home
+directory, and issue the command:
+
+cp -r dec_pc /usr/local/lib
+
+You will need to copy the dtload command from the dec_pc directory to a
+directory in your path.  Either /usr/bin or /usr/local/bin is a good
+choice.
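+
+Put together, the installation steps might look like the following
+sketch, run as root from your home directory (this assumes that the
+dtload command sits at the top level of the dec_pc directory):
+
+cp -r dec_pc /usr/local/lib
+cp dec_pc/dtload /usr/local/bin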
+
+You can now run the dtload command in order to load the DecTalk PC
+software onto the card.  After you have done this, echo the decpc
+keyword to the synth entry in the sys system like this:
+
+echo decpc >/speakup/synth
+
+Your DecTalk PC should start talking, and then you can adjust the pitch,
+rate, volume, voice, etc.  The voice entry in the Speakup sys system
+will accept a number from 0 through 7 for the DecTalk PC synthesizer,
+which will give you access to some of the DecTalk voices.
+
+10.  Using Cursor Tracking
+
+In Speakup version 2.0 and later, cursor tracking is turned on by
+default.  This means that when you are using an editor, Speakup will
+automatically speak characters as you move left and right with the
+cursor keys, and lines as you move up and down with the cursor keys.
+This is the traditional sort of cursor tracking.
+Recent versions of Speakup provide two additional ways to control the
+text that is spoken when the cursor is moved:
+"highlight tracking" and "read window."
+They are described later in this section.
+Sometimes, these modes get in your way, so you can disable cursor tracking
+altogether.
+
+You may select among the various forms of cursor tracking using the keypad
+asterisk key.
+Each time you press this key, a new mode is selected, and Speakup speaks
+the name of the new mode.  The names for the four possible states of cursor
+tracking are: "cursoring on", "highlight tracking", "read window",
+and "cursoring off."  The keypad asterisk key moves through the list of
+modes in a circular fashion.
+
+If highlight tracking is enabled, Speakup tracks highlighted text,
+rather than the cursor itself. When you move the cursor with the arrow keys,
+Speakup speaks the currently highlighted information.
+This is useful when moving through various menus and dialog boxes.
+If cursor tracking isn't helping you while navigating a menu,
+try highlight tracking.
+
+With the "read window" variety of cursor tracking, you can limit the text
+that Speakup speaks by specifying a window of interest on the screen.
+See section 15 for a description of the process of defining windows.
+When you move the cursor via the arrow keys, Speakup only speaks
+the contents of the window.  This is especially helpful when you are hearing
+superfluous speech.  Consider the following example.
+
+Suppose that you are at a shell prompt.  You use bash, and you want to
+explore your command history using the up and down arrow keys.  If you
+have enabled cursor tracking, you will hear two pieces of information.
+Speakup speaks both your shell prompt and the current entry from the
+command history.  You may not want to hear the prompt repeated
+each time you move, so you can silence it by specifying a window.  Find
+the last line of text on the screen.  Clear the current window by pressing
+the key combination speakup + f3.  Use the review cursor to find the first
+character that follows your shell prompt.  Press speakup + f2 twice to
+define a one-line window.  The boundaries of the window are the
+character following the shell prompt and the end of the line.  Now, cycle
+through the cursor tracking modes using keypad asterisk, until Speakup
+says "read window."  Move through your history using your arrow keys.
+You will notice that Speakup no longer speaks the redundant prompt.
+
+Some folks like to turn cursor tracking off while they are using the
+lynx web browser.  You definitely want to turn cursor tracking off when
+you are using the alsamixer application.  Otherwise, you won't be able
+to hear your mixer settings while you are using the arrow keys.
+
+11.  Cut and Paste
+
+One of Speakup's more useful features is the ability to cut and paste
+text on the screen.  This means that you can capture information from a
+program, and paste that captured text into a different place in the
+program, or into an entirely different program, which may even be
+running on a different console.
+
+For example, in this manual, we have made references to several web
+sites.  It would be nice if you could cut and paste these urls into your
+web browser.  Speakup does this quite nicely.  Suppose you wanted to
+paste the following url into your browser:
+
+http://linux-speakup.org/
+
+Use the speakup review keys to position the reading cursor on the first
+character of the above url.  When the reading cursor is in position,
+press the keypad slash key once.  Speakup will say, "mark".  Next,
+position the reading cursor on the rightmost character of the above
+url. Press the keypad slash key once again to actually cut the text
+from the screen.  Speakup will say, "cut".  Although we call this
+cutting, Speakup does not actually delete the cut text from the screen.
+It makes a copy of the text in a special buffer for later pasting.
+
+Now that you have the url cut from the screen, you can paste it into
+your browser, or even paste the url on a command line as an argument to
+your browser.
+
+Suppose you want to start lynx and go to the Speakup site.
+
+You can switch to a different console with the alt left and right
+arrows, or you can switch to a specific console by typing alt and a
+function key.  These are not Speakup commands, just standard Linux
+console capabilities.
+
+Once you've changed to an appropriate console, and are at a shell prompt,
+type the word lynx, followed by a space.  Now press and hold the speakup
+key, while you type the keypad slash character.  The url will be pasted
+onto the command line, just as though you had typed it in.  Press the
+enter key to execute the command.
+
+The paste buffer will continue to hold the cut information, until a new
+mark and cut operation is carried out.  This means you can paste the cut
+information as many times as you like before doing another cut
+operation.
+
+You are not limited to cutting and pasting only one line on the screen.
+You can also cut and paste rectangular regions of the screen.  Just
+position the reading cursor at the top left corner of the text to be
+cut, mark it with the keypad slash key, then position the reading cursor
+at the bottom right corner of the region to be cut, and cut it with the
+keypad slash key.
+
+12.  Changing the Pronunciation of Characters
+
+Through the /speakup/i18n/characters sys entry, Speakup gives you the
+ability to change how Speakup pronounces a given character.  You could,
+for example, change how some punctuation characters are spoken.  You can
+even change how Speakup will pronounce certain letters.
+
+You may, for example, wish to change how Speakup pronounces the z
+character.  The author of Speakup, Kirk Reiser, is Canadian, and thus
+believes that the z should be pronounced zed.  If you are an American,
+you might wish to use the zee pronunciation instead of zed.  You can
+change the pronunciation of both the upper and lower case z with the
+following two commands:
+
+echo 90 zee >/speakup/i18n/characters
+echo 122 zee >/speakup/i18n/characters
+
+Let's examine the parts of the two previous commands.  They are issued
+at the shell prompt, and could be placed in a startup script.
+
+The word echo tells the shell that you want to have it display the
+string of characters that follow the word echo.  If you were to just
+type:
+
+echo hello
+
+You would get the word hello printed on your screen as soon as you
+pressed the enter key.  In this case, we are echoing strings that we
+want to be redirected into the sys system.
+
+The numbers 90 and 122 in the above echo commands are the ASCII numeric
+values for the upper and lower case z, the characters we wish to change.
+
+The string zee is the pronunciation that we want Speakup to use for the
+upper and lower case z.
+
+The > symbol redirects the output of the echo command to a file, just
+like in DOS, or at the Windows command prompt.
+
+And finally, /speakup/i18n/characters is the file entry in the sys system
+where we want the output to be directed.  Speakup looks at the numeric
+value of the character we want to change, and inserts the pronunciation
+string into an internal table.
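+
+Since these are ordinary shell commands, they can be collected into a
+small startup script.  A sketch:
+
+#!/bin/sh
+# Custom character pronunciations, applied at boot.
+echo 90 zee >/speakup/i18n/characters
+echo 122 zee >/speakup/i18n/characters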
+
+You can look at the whole table with the following command:
+
+cat /speakup/i18n/characters
+
+Speakup will then print out the entire character pronunciation table.  I
+won't display it here, but leave you to look at it at your convenience.
+
+13.  Mapping Keys
+
+Speakup has the capability of allowing you to assign or "map" keys to
+internal Speakup commands.  This section necessarily assumes you have a
+Linux kernel source tree installed, and that it has been patched and
+configured with Speakup.  How you do this is beyond the scope of this
+manual.  For this information, visit the Speakup web site at
+http://linux-speakup.org/.  The reason you'll need the kernel source
+tree patched with Speakup is that the genmap utility you'll need for
+processing keymaps is in the
+/usr/src/linux-<version_number>/drivers/char/speakup directory.  The
+<version_number> in the above directory path is the version number of
+the Linux source tree you are working with.
+
+So ok, you've gone off and gotten your kernel source tree, and patched
+and configured it.  Now you can start manipulating keymaps.
+
+You can either use the
+/usr/src/linux-<version_number>/drivers/char/speakup/speakupmap.map file
+included with the Speakup source, or you can cut and paste the copy in
+section 4 into a separate file.  If you use the one in the Speakup
+source tree, make sure you make a backup of it before you start making
+changes.  You have been warned!
+
+Suppose that you want to swap the key assignments for the Speakup
+say_last_char and the Speakup say_first_char commands.  The
+speakupmap.map lists the key mappings for these two commands as follows:
+
+spk key_pageup = say_first_char
+spk key_pagedown = say_last_char
+
+You can edit your copy of the speakupmap.map file and swap the command
+names on the right side of the = (equals) sign.  You did make a backup,
+right?  The new keymap lines would look like this:
+
+spk key_pageup = say_last_char
+spk key_pagedown = say_first_char
+
+After you edit your copy of the speakupmap.map file, save it under a new
+file name, perhaps newmap.map.  Then exit your editor and return to the
+shell prompt.
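+
+If you prefer to make the swap without opening an editor, a sketch
+using sed will do the same thing, assuming the two mapping lines appear
+exactly as shown above:
+
+sed -e 's/= say_first_char/= TEMP/' \
+    -e 's/= say_last_char/= say_first_char/' \
+    -e 's/= TEMP/= say_last_char/' \
+    speakupmap.map > newmap.map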
+
+You are now ready to load your keymap with your swapped key assignments.
+ Assuming that you saved your new keymap as the file newmap.map, you
+would load your keymap into the sys system like this:
+
+/usr/src/linux-<version_number>/drivers/char/speakup/genmap newmap.map
+>/speakup/keymap
+
+Remember to substitute your kernel version number for the
+<version_number> in the above command.  Also note that although the
+above command wrapped onto two lines in this document, you should type
+it all on one line.
+
+Your say first and say last characters should now be swapped.  Pressing
+speakup pagedown should read you the first non-whitespace character on
+the line your reading cursor is in, and pressing speakup pageup should
+read you the last character on the line your reading cursor is in.
+
+You should note that these new mappings will only stay in effect until
+you reboot, or until you load another keymap.
+
+One final warning.  If you try to load a partial map, you will quickly
+find that all the mappings you didn't include in your file got deleted
+from the working map.  Be extremely careful, and always make a backup!
+You have been warned!
+
+14.  Internationalizing Speakup
+
+Speakup indicates various conditions to the user by speaking messages.
+For instance, when you move to the left edge of the screen with the
+review keys, Speakup says, "left."
+Prior to version 3.1.0 of Speakup, all of these messages were in English,
+and they could not be changed.  If you used a non-English synthesizer,
+you still heard English messages, such as "left" and "cursoring on."
+In version 3.1.0 or higher, one may load translations for the various
+messages via the /sys filesystem.
+
+The directory /speakup/i18n contains several collections of messages.
+Each group of messages is stored in its own file.
+The following section lists all of these files, along with a brief description
+of each.
+
+14.1.  Files Under the i18n Subdirectory
+
+* announcements:
+This file contains various general announcements, most of which cannot
+be categorized.  You will find messages such as "You killed Speakup",
+"I'm alive", "leaving help", "parked", "unparked", and others.
+You will also find the names of the screen edges and cursor tracking modes
+here.
+
+* characters:
+See section 12 for a description of this file.
+
+* chartab:
+See section 12.  Unlike the rest of the files in the i18n subdirectory,
+this one does not contain messages to be spoken.
+
+* colors:
+When you use the "say attributes" function, Speakup says the name of the
+foreground and background colors.  These names come from the i18n/colors
+file.
+
+* ctl_keys:
+Here, you will find names of control keys.  These are used with Speakup's
+say_control feature.
+
+* formatted:
+This group of messages contains embedded formatting codes, to specify
+the type and width of displayed data.  If you change these, you must
+preserve all of the formatting codes, and they must appear in the order
+used by the default messages.
+
+* function_names:
+Here, you will find a list of names for Speakup functions.  These are used
+by the help system.  For example, suppose that you have activated help mode,
+and you pressed keypad 3.  Speakup says:
+"keypad 3 is character, say next."
+The message "character, say next" names a Speakup function, and it
+comes from this function_names file.
+
+* key_names:
+Again, key_names is used by Speakup's help system.  In the previous
+example, Speakup said that you pressed "keypad 3."
+This name came from the key_names file.
+
+* states:
+This file contains names for key states.
+Again, these are part of the help system.  For instance, if you had pressed
+speakup + keypad 3, you would hear:
+"speakup keypad 3 is go to bottom edge."
+The speakup key is depressed, so the name of the key state is speakup.
+This part of the message comes from the states collection.
+
+14.2.  Loading Your Own Messages
+
+The files under the i18n subdirectory all follow the same format.
+They consist of lines, with one message per line.
+Each message is represented by a number, followed by the text of the message.
+The number is the position of the message in the given collection.
+For example, if you view the file /speakup/i18n/colors, you will see the
+following list:
+
+0	black
+1	blue
+2	green
+3	cyan
+4	red
+5	magenta
+6	yellow
+7	white
+8	grey
+
+You can change one message, or you can change a whole group.
+To load a whole collection of messages from a new source, simply use
+the cp command:
+cp ~/my_colors /speakup/i18n/colors
+You can change an individual message with the echo command,
+as shown in the following example.
+
+The Spanish name for the color blue is azul.
+Looking at the colors file, we see that the name "blue" is at position 1
+within the colors group.  Let's change blue to azul:
+echo '1 azul' > /speakup/i18n/colors
+The next time that Speakup says message 1 from the colors group, it will
+say "azul", rather than "blue."
+
+In the future, translations into various languages will be made available,
+and most users will just load the files necessary for their language.
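+
+Loading a full set of translated message files can itself be scripted.
+Here is a sketch, where ~/my_lang is a hypothetical directory holding
+one file per message group, named after the i18n entries:
+
+for f in ~/my_lang/*; do
+	cp "$f" /speakup/i18n/
+done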
+
+14.3.  No Support for Non-Western-European Languages
+
+As of the current release, Speakup only supports Western European languages.
+Support for the extended characters used by languages outside of the Western
+European family of languages is a work in progress.
+
+15.  Using Speakup's Windowing Capability
+
+Speakup has the capability of defining and manipulating windows on the
+screen.  Speakup uses the term "Window", to mean a user defined area of
+the screen.  The key strokes for defining and manipulating Speakup
+windows are as follows:
+
+speakup + f2 -- Set the bounds of the window.
+speakup + f3 -- Clear the current window definition.
+speakup + f4 -- Toggle window silence on and off.
+speakup + keypad plus -- Say the currently defined window.
+
+These capabilities are useful for tracking a certain part of the screen
+without rereading the whole screen, or for silencing a part of the
+screen that is constantly changing, such as a clock or status line.
+
+There is no way to save these window settings, and you can only have one
+window defined for each virtual console.  There is also no way to have
+windows automatically defined for specific applications.
+
+In order to define a window, use the review keys to move your reading
+cursor to the beginning of the area you want to define.  Then press
+speakup + f2.  Speakup will tell you that the window starts at the
+indicated row and column position.  Then move the reading cursor to the
+end of the area to be defined as a window, and press speakup + f2 again.
+ If there is more than one line in the window, Speakup will tell you
+that the window ends at the indicated row and column position.  If there
+is only one line in the window, then Speakup will tell you that the
+window is the specified line on the screen.  If you are only defining a
+one line window, you can just press speakup + f2 twice after placing the
+reading cursor on the line you want to define as a window.  It is not
+necessary to position the reading cursor at the end of the line in order
+to define the whole line as a window.
+
+16.  Tools for Controlling Speakup
+
+The speakup distribution includes extra tools (in the tools directory)
+which were written to make speakup easier to use.  This section will
+briefly describe the use of these tools.
+
+16.1.  Speakupconf
+
+speakupconf began life as a contribution from Steve Holmes, a member of
+the speakup community.  We would like to thank him for his work on the
+early versions of this project.
+
+This script may be installed as part of your linux distribution, but if
+it isn't, the recommended places to put it are /usr/local/bin or
+/usr/bin.  This script can be run by any user, so it does not require
+root privileges.
+
+Speakupconf allows you to save and load your Speakup settings.  It works
+by reading and writing the /sys files described above.
+
+The directory that speakupconf uses to store your settings depends on
+whether it is run from the root account.  If you execute speakupconf as
+root, it uses the directory /etc/speakup.  Otherwise, it uses the directory
+~/.speakup, where ~ is your home directory.
+Anyone who needs to use Speakup from your console can load his own custom
+settings with this script.
+
+speakupconf takes one required argument: load or save.
+Use the command
+speakupconf save
+to save your Speakup settings, and
+speakupconf load
+to load them into Speakup.
+A second argument may be specified to use an alternate directory to
+load or save the speakup parameters.
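+For example, to keep more than one set of settings, you could save to
+and load from a directory of your choice (the path below is only an
+illustration):
+speakupconf save ~/speakup-profiles/default
+speakupconf load ~/speakup-profiles/default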
+
+16.2.  Talkwith
+
+Charles Hallenbeck, another member of the speakup community, wrote the
+initial versions of this script, and we would also like to thank him for
+his work on it.
+
+This script needs root privileges to run, so if it is not installed as
+part of your linux distribution, the recommended places to install it
+are /usr/local/sbin or /usr/sbin.
+
+Talkwith allows you to switch synthesizers on the fly.  It takes a synthesizer
+name as an argument.  For instance,
+talkwith dectlk
+causes Speakup to use the DecTalk Express.  If you wish to switch to a
+software synthesizer, you must also indicate which daemon you wish to
+use.  There are two possible choices:
+spd and espeakup.  spd is an abbreviation for speechd-up.
+If you wish to use espeakup for software synthesis, give the command
+talkwith soft espeakup
+To use speechd-up, type:
+talkwith soft spd
+Any arguments that follow the name of the daemon are passed to the daemon
+when it is invoked.  For instance:
+talkwith soft espeakup --default-voice=fr
+causes espeakup to use the French voice.
+Note that talkwith must always be executed with root privileges.
+
+Talkwith does not attempt to load your settings after the new
+synthesizer is activated.  You can use speakupconf to load your settings
+if desired.
+
+                GNU Free Documentation License
+                  Version 1.2, November 2002
+
+
+ Copyright (C) 2000,2001,2002  Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+0. PREAMBLE
+
+The purpose of this License is to make a manual, textbook, or other
+functional and useful document "free" in the sense of freedom: to
+assure everyone the effective freedom to copy and redistribute it,
+with or without modifying it, either commercially or noncommercially.
+Secondarily, this License preserves for the author and publisher a way
+to get credit for their work, while not being considered responsible
+for modifications made by others.
+
+This License is a kind of "copyleft", which means that derivative
+works of the document must themselves be free in the same sense.  It
+complements the GNU General Public License, which is a copyleft
+license designed for free software.
+
+We have designed this License in order to use it for manuals for free
+software, because free software needs free documentation: a free
+program should come with manuals providing the same freedoms that the
+software does.  But this License is not limited to software manuals;
+it can be used for any textual work, regardless of subject matter or
+whether it is published as a printed book.  We recommend this License
+principally for works whose purpose is instruction or reference.
+
+
+1. APPLICABILITY AND DEFINITIONS
+
+This License applies to any manual or other work, in any medium, that
+contains a notice placed by the copyright holder saying it can be
+distributed under the terms of this License.  Such a notice grants a
+world-wide, royalty-free license, unlimited in duration, to use that
+work under the conditions stated herein.  The "Document", below,
+refers to any such manual or work.  Any member of the public is a
+licensee, and is addressed as "you".  You accept the license if you
+copy, modify or distribute the work in a way requiring permission
+under copyright law.
+
+A "Modified Version" of the Document means any work containing the
+Document or a portion of it, either copied verbatim, or with
+modifications and/or translated into another language.
+
+A "Secondary Section" is a named appendix or a front-matter section of
+the Document that deals exclusively with the relationship of the
+publishers or authors of the Document to the Document's overall subject
+(or to related matters) and contains nothing that could fall directly
+within that overall subject.  (Thus, if the Document is in part a
+textbook of mathematics, a Secondary Section may not explain any
+mathematics.)  The relationship could be a matter of historical
+connection with the subject or with related matters, or of legal,
+commercial, philosophical, ethical or political position regarding
+them.
+
+The "Invariant Sections" are certain Secondary Sections whose titles
+are designated, as being those of Invariant Sections, in the notice
+that says that the Document is released under this License.  If a
+section does not fit the above definition of Secondary then it is not
+allowed to be designated as Invariant.  The Document may contain zero
+Invariant Sections.  If the Document does not identify any Invariant
+Sections then there are none.
+
+The "Cover Texts" are certain short passages of text that are listed,
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that
+the Document is released under this License.  A Front-Cover Text may
+be at most 5 words, and a Back-Cover Text may be at most 25 words.
+
+A "Transparent" copy of the Document means a machine-readable copy,
+represented in a format whose specification is available to the
+general public, that is suitable for revising the document
+straightforwardly with generic text editors or (for images composed of
+pixels) generic paint programs or (for drawings) some widely available
+drawing editor, and that is suitable for input to text formatters or
+for automatic translation to a variety of formats suitable for input
+to text formatters.  A copy made in an otherwise Transparent file
+format whose markup, or absence of markup, has been arranged to thwart
+or discourage subsequent modification by readers is not Transparent.
+An image format is not Transparent if used for any substantial amount
+of text.  A copy that is not "Transparent" is called "Opaque".
+
+Examples of suitable formats for Transparent copies include plain
+ASCII without markup, Texinfo input format, LaTeX input format, SGML
+or XML using a publicly available DTD, and standard-conforming simple
+HTML, PostScript or PDF designed for human modification.  Examples of
+transparent image formats include PNG, XCF and JPG.  Opaque formats
+include proprietary formats that can be read and edited only by
+proprietary word processors, SGML or XML for which the DTD and/or
+processing tools are not generally available, and the
+machine-generated HTML, PostScript or PDF produced by some word
+processors for output purposes only.
+
+The "Title Page" means, for a printed book, the title page itself,
+plus such following pages as are needed to hold, legibly, the material
+this License requires to appear in the title page.  For works in
+formats which do not have any title page as such, "Title Page" means
+the text near the most prominent appearance of the work's title,
+preceding the beginning of the body of the text.
+
+A section "Entitled XYZ" means a named subunit of the Document whose
+title either is precisely XYZ or contains XYZ in parentheses following
+text that translates XYZ in another language.  (Here XYZ stands for a
+specific section name mentioned below, such as "Acknowledgements",
+"Dedications", "Endorsements", or "History".)  To "Preserve the Title"
+of such a section when you modify the Document means that it remains a
+section "Entitled XYZ" according to this definition.
+
+The Document may include Warranty Disclaimers next to the notice which
+states that this License applies to the Document.  These Warranty
+Disclaimers are considered to be included by reference in this
+License, but only as regards disclaiming warranties: any other
+implication that these Warranty Disclaimers may have is void and has
+no effect on the meaning of this License.
+
+
+2. VERBATIM COPYING
+
+You may copy and distribute the Document in any medium, either
+commercially or noncommercially, provided that this License, the
+copyright notices, and the license notice saying this License applies
+to the Document are reproduced in all copies, and that you add no other
+conditions whatsoever to those of this License.  You may not use
+technical measures to obstruct or control the reading or further
+copying of the copies you make or distribute.  However, you may accept
+compensation in exchange for copies.  If you distribute a large enough
+number of copies you must also follow the conditions in section 3.
+
+You may also lend copies, under the same conditions stated above, and
+you may publicly display copies.
+
+
+3. COPYING IN QUANTITY
+
+If you publish printed copies (or copies in media that commonly have
+printed covers) of the Document, numbering more than 100, and the
+Document's license notice requires Cover Texts, you must enclose the
+copies in covers that carry, clearly and legibly, all these Cover
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
+the back cover.  Both covers must also clearly and legibly identify
+you as the publisher of these copies.  The front cover must present
+the full title with all words of the title equally prominent and
+visible.  You may add other material on the covers in addition.
+Copying with changes limited to the covers, as long as they preserve
+the title of the Document and satisfy these conditions, can be treated
+as verbatim copying in other respects.
+
+If the required texts for either cover are too voluminous to fit
+legibly, you should put the first ones listed (as many as fit
+reasonably) on the actual cover, and continue the rest onto adjacent
+pages.
+
+If you publish or distribute Opaque copies of the Document numbering
+more than 100, you must either include a machine-readable Transparent
+copy along with each Opaque copy, or state in or with each Opaque copy
+a computer-network location from which the general network-using
+public has access to download using public-standard network protocols
+a complete Transparent copy of the Document, free of added material.
+If you use the latter option, you must take reasonably prudent steps,
+when you begin distribution of Opaque copies in quantity, to ensure
+that this Transparent copy will remain thus accessible at the stated
+location until at least one year after the last time you distribute an
+Opaque copy (directly or through your agents or retailers) of that
+edition to the public.
+
+It is requested, but not required, that you contact the authors of the
+Document well before redistributing any large number of copies, to give
+them a chance to provide you with an updated version of the Document.
+
+
+4. MODIFICATIONS
+
+You may copy and distribute a Modified Version of the Document under
+the conditions of sections 2 and 3 above, provided that you release
+the Modified Version under precisely this License, with the Modified
+Version filling the role of the Document, thus licensing distribution
+and modification of the Modified Version to whoever possesses a copy
+of it.  In addition, you must do these things in the Modified Version:
+
+A. Use in the Title Page (and on the covers, if any) a title distinct
+   from that of the Document, and from those of previous versions
+   (which should, if there were any, be listed in the History section
+   of the Document).  You may use the same title as a previous version
+   if the original publisher of that version gives permission.
+B. List on the Title Page, as authors, one or more persons or entities
+   responsible for authorship of the modifications in the Modified
+   Version, together with at least five of the principal authors of the
+   Document (all of its principal authors, if it has fewer than five),
+   unless they release you from this requirement.
+C. State on the Title page the name of the publisher of the
+   Modified Version, as the publisher.
+D. Preserve all the copyright notices of the Document.
+E. Add an appropriate copyright notice for your modifications
+   adjacent to the other copyright notices.
+F. Include, immediately after the copyright notices, a license notice
+   giving the public permission to use the Modified Version under the
+   terms of this License, in the form shown in the Addendum below.
+G. Preserve in that license notice the full lists of Invariant Sections
+   and required Cover Texts given in the Document's license notice.
+H. Include an unaltered copy of this License.
+I. Preserve the section Entitled "History", Preserve its Title, and add
+   to it an item stating at least the title, year, new authors, and
+   publisher of the Modified Version as given on the Title Page.  If
+   there is no section Entitled "History" in the Document, create one
+   stating the title, year, authors, and publisher of the Document as
+   given on its Title Page, then add an item describing the Modified
+   Version as stated in the previous sentence.
+J. Preserve the network location, if any, given in the Document for
+   public access to a Transparent copy of the Document, and likewise
+   the network locations given in the Document for previous versions
+   it was based on.  These may be placed in the "History" section.
+   You may omit a network location for a work that was published at
+   least four years before the Document itself, or if the original
+   publisher of the version it refers to gives permission.
+K. For any section Entitled "Acknowledgements" or "Dedications",
+   Preserve the Title of the section, and preserve in the section all
+   the substance and tone of each of the contributor acknowledgements
+   and/or dedications given therein.
+L. Preserve all the Invariant Sections of the Document,
+   unaltered in their text and in their titles.  Section numbers
+   or the equivalent are not considered part of the section titles.
+M. Delete any section Entitled "Endorsements".  Such a section
+   may not be included in the Modified Version.
+N. Do not retitle any existing section to be Entitled "Endorsements"
+   or to conflict in title with any Invariant Section.
+O. Preserve any Warranty Disclaimers.
+
+If the Modified Version includes new front-matter sections or
+appendices that qualify as Secondary Sections and contain no material
+copied from the Document, you may at your option designate some or all
+of these sections as invariant.  To do this, add their titles to the
+list of Invariant Sections in the Modified Version's license notice.
+These titles must be distinct from any other section titles.
+
+You may add a section Entitled "Endorsements", provided it contains
+nothing but endorsements of your Modified Version by various
+parties--for example, statements of peer review or that the text has
+been approved by an organization as the authoritative definition of a
+standard.
+
+You may add a passage of up to five words as a Front-Cover Text, and a
+passage of up to 25 words as a Back-Cover Text, to the end of the list
+of Cover Texts in the Modified Version.  Only one passage of
+Front-Cover Text and one of Back-Cover Text may be added by (or
+through arrangements made by) any one entity.  If the Document already
+includes a cover text for the same cover, previously added by you or
+by arrangement made by the same entity you are acting on behalf of,
+you may not add another; but you may replace the old one, on explicit
+permission from the previous publisher that added the old one.
+
+The author(s) and publisher(s) of the Document do not by this License
+give permission to use their names for publicity for or to assert or
+imply endorsement of any Modified Version.
+
+
+5. COMBINING DOCUMENTS
+
+You may combine the Document with other documents released under this
+License, under the terms defined in section 4 above for modified
+versions, provided that you include in the combination all of the
+Invariant Sections of all of the original documents, unmodified, and
+list them all as Invariant Sections of your combined work in its
+license notice, and that you preserve all their Warranty Disclaimers.
+
+The combined work need only contain one copy of this License, and
+multiple identical Invariant Sections may be replaced with a single
+copy.  If there are multiple Invariant Sections with the same name but
+different contents, make the title of each such section unique by
+adding at the end of it, in parentheses, the name of the original
+author or publisher of that section if known, or else a unique number.
+Make the same adjustment to the section titles in the list of
+Invariant Sections in the license notice of the combined work.
+
+In the combination, you must combine any sections Entitled "History"
+in the various original documents, forming one section Entitled
+"History"; likewise combine any sections Entitled "Acknowledgements",
+and any sections Entitled "Dedications".  You must delete all sections
+Entitled "Endorsements".
+
+
+6. COLLECTIONS OF DOCUMENTS
+
+You may make a collection consisting of the Document and other documents
+released under this License, and replace the individual copies of this
+License in the various documents with a single copy that is included in
+the collection, provided that you follow the rules of this License for
+verbatim copying of each of the documents in all other respects.
+
+You may extract a single document from such a collection, and distribute
+it individually under this License, provided you insert a copy of this
+License into the extracted document, and follow this License in all
+other respects regarding verbatim copying of that document.
+
+
+7. AGGREGATION WITH INDEPENDENT WORKS
+
+A compilation of the Document or its derivatives with other separate
+and independent documents or works, in or on a volume of a storage or
+distribution medium, is called an "aggregate" if the copyright
+resulting from the compilation is not used to limit the legal rights
+of the compilation's users beyond what the individual works permit.
+When the Document is included in an aggregate, this License does not
+apply to the other works in the aggregate which are not themselves
+derivative works of the Document.
+
+If the Cover Text requirement of section 3 is applicable to these
+copies of the Document, then if the Document is less than one half of
+the entire aggregate, the Document's Cover Texts may be placed on
+covers that bracket the Document within the aggregate, or the
+electronic equivalent of covers if the Document is in electronic form.
+Otherwise they must appear on printed covers that bracket the whole
+aggregate.
+
+
+8. TRANSLATION
+
+Translation is considered a kind of modification, so you may
+distribute translations of the Document under the terms of section 4.
+Replacing Invariant Sections with translations requires special
+permission from their copyright holders, but you may include
+translations of some or all Invariant Sections in addition to the
+original versions of these Invariant Sections.  You may include a
+translation of this License, and all the license notices in the
+Document, and any Warranty Disclaimers, provided that you also include
+the original English version of this License and the original versions
+of those notices and disclaimers.  In case of a disagreement between
+the translation and the original version of this License or a notice
+or disclaimer, the original version will prevail.
+
+If a section in the Document is Entitled "Acknowledgements",
+"Dedications", or "History", the requirement (section 4) to Preserve
+its Title (section 1) will typically require changing the actual
+title.
+
+
+9. TERMINATION
+
+You may not copy, modify, sublicense, or distribute the Document except
+as expressly provided for under this License.  Any other attempt to
+copy, modify, sublicense or distribute the Document is void, and will
+automatically terminate your rights under this License.  However,
+parties who have received copies, or rights, from you under this
+License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+
+10. FUTURE REVISIONS OF THIS LICENSE
+
+The Free Software Foundation may publish new, revised versions
+of the GNU Free Documentation License from time to time.  Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.  See
+https://www.gnu.org/copyleft/.
+
+Each version of the License is given a distinguishing version number.
+If the Document specifies that a particular numbered version of this
+License "or any later version" applies to it, you have the option of
+following the terms and conditions either of that specified version or
+of any later version that has been published (not as a draft) by the
+Free Software Foundation.  If the Document does not specify a version
+number of this License, you may choose any version ever published (not
+as a draft) by the Free Software Foundation.
+
+
+ADDENDUM: How to use this License for your documents
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and
+license notices just after the title page:
+
+    Copyright (c)  YEAR  YOUR NAME.
+    Permission is granted to copy, distribute and/or modify this document
+    under the terms of the GNU Free Documentation License, Version 1.2
+    or any later version published by the Free Software Foundation;
+    with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+    A copy of the license is included in the section entitled "GNU
+    Free Documentation License".
+
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
+replace the "with...Texts." line with this:
+
+    with the Invariant Sections being LIST THEIR TITLES, with the
+    Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
+
+If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License,
+to permit their use in free software.
+
+The End.
diff --git a/Documentation/admin-guide/sysctl/fs.rst b/Documentation/admin-guide/sysctl/fs.rst
index 2a45119..f48277a 100644
--- a/Documentation/admin-guide/sysctl/fs.rst
+++ b/Documentation/admin-guide/sysctl/fs.rst
@@ -261,7 +261,7 @@
 is to cross privilege boundaries when following a given symlink (i.e. a
 root process follows a symlink belonging to another user). For a likely
 incomplete list of hundreds of examples across the years, please see:
-http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
+https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
 
 When set to "0", symlink following behavior is unrestricted.
 
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 83acf50..d4b32cc 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -164,7 +164,8 @@
 	%s		signal number
 	%t		UNIX time of dump
 	%h		hostname
-	%e		executable filename (may be shortened)
+	%e		executable filename (may be shortened, could be changed by prctl etc)
+	%f      	executable filename
 	%E		executable path
 	%c		maximum size of core file by resource limit RLIMIT_CORE
 	%<OTHER>	both are dropped
@@ -235,7 +236,7 @@
 from using ``dmesg(8)`` to view messages from the kernel's log
 buffer.
 When ``dmesg_restrict`` is set to 0 there are no restrictions.
-When ``dmesg_restrict`` is set set to 1, users must have
+When ``dmesg_restrict`` is set to 1, users must have
 ``CAP_SYSLOG`` to use ``dmesg(8)``.
 
 The kernel config option ``CONFIG_SECURITY_DMESG_RESTRICT`` sets the
@@ -335,8 +336,8 @@
 Default value is "``/sbin/hotplug``".
 
 
-hung_task_all_cpu_backtrace:
-================
+hung_task_all_cpu_backtrace
+===========================
 
 If this option is set, the kernel will send an NMI to all CPUs to dump
 their backtraces when a hung task is detected. This file shows up if
@@ -646,8 +647,8 @@
 scanned for a given scan.
 
 
-oops_all_cpu_backtrace:
-================
+oops_all_cpu_backtrace
+======================
 
 If this option is set, the kernel will send an NMI to all CPUs to dump
 their backtraces when an oops event occurs. It should be used as a last
@@ -996,6 +997,38 @@
 See Documentation/filesystems/devpts.rst.
 
 
+random
+======
+
+This is a directory, with the following entries:
+
+* ``boot_id``: a UUID generated the first time this is retrieved, and
+  unvarying after that;
+
+* ``entropy_avail``: the pool's entropy count, in bits;
+
+* ``poolsize``: the entropy pool size, in bits;
+
+* ``urandom_min_reseed_secs``: obsolete (used to determine the minimum
+  number of seconds between urandom pool reseeding).
+
+* ``uuid``: a UUID generated every time this is retrieved (this can
+  thus be used to generate UUIDs at will);
+
+* ``write_wakeup_threshold``: when the entropy count drops below this
+  (as a number of bits), processes waiting to write to ``/dev/random``
+  are woken up.
+
+If ``drivers/char/random.c`` is built with ``ADD_INTERRUPT_BENCH``
+defined, these additional entries are present:
+
+* ``add_interrupt_avg_cycles``: the average number of cycles between
+  interrupts used to feed the pool;
+
+* ``add_interrupt_avg_deviation``: the standard deviation seen on the
+  number of cycles between interrupts used to feed the pool.
+
+
 randomize_va_space
 ==================
 
@@ -1062,6 +1095,60 @@
 incurs a small amount of overhead in the scheduler but is
 useful for debugging and performance tuning.
 
+sched_util_clamp_min:
+=====================
+
+Max allowed *minimum* utilization.
+
+Default value is 1024, which is the maximum possible value.
+
+It means that any requested uclamp.min value cannot be greater than
+sched_util_clamp_min, i.e., it is restricted to the range
+[0:sched_util_clamp_min].
+
+sched_util_clamp_max:
+=====================
+
+Max allowed *maximum* utilization.
+
+Default value is 1024, which is the maximum possible value.
+
+It means that any requested uclamp.max value cannot be greater than
+sched_util_clamp_max, i.e., it is restricted to the range
+[0:sched_util_clamp_max].
+
+sched_util_clamp_min_rt_default:
+================================
+
+By default Linux is tuned for performance. Which means that RT tasks always run
+at the highest frequency and most capable (highest capacity) CPU (in
+heterogeneous systems).
+
+Uclamp achieves this by setting the requested uclamp.min of all RT tasks to
+1024 by default, which effectively boosts the tasks to run at the highest
+frequency and biases them to run on the biggest CPU.
+
+This knob allows admins to change the default behavior when uclamp is being
+used. In battery powered devices particularly, running at the maximum
+capacity and frequency will increase energy consumption and shorten the battery
+life.
+
+This knob is only effective for RT tasks which the user hasn't modified their
+requested uclamp.min value via sched_setattr() syscall.
+
+This knob will not escape the range constraint imposed by sched_util_clamp_min
+defined above.
+
+For example if
+
+	sched_util_clamp_min_rt_default = 800
+	sched_util_clamp_min = 600
+
+Then the boost will be clamped to 600 because 800 is outside of the permissible
+range of [0:600]. This could happen for instance if a powersave mode will
+restrict all boosts temporarily by modifying sched_util_clamp_min. As soon as
+this restriction is lifted, the requested sched_util_clamp_min_rt_default
+will take effect.
 
 seccomp
 =======
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index d46d5b7..4b9d2e8 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -119,6 +119,21 @@
 blocks where possible. This can be important for example in the allocation of
 huge pages although processes will also directly compact memory as required.
 
+compaction_proactiveness
+========================
+
+This tunable takes a value in the range [0, 100] with a default value of
+20. This tunable determines how aggressively compaction is done in the
+background. Setting it to 0 disables proactive compaction.
+
+Note that compaction has a non-trivial system-wide impact as pages
+belonging to different processes are moved around, which could also lead
+to latency spikes in unsuspecting applications. The kernel employs
+various heuristics to avoid wasting CPU cycles if it detects that
+proactive compaction is not being effective.
+
+Be careful when setting it to extreme values like 100, as that may
+cause excessive background compaction activity.
 
 compact_unevictable_allowed
 ===========================
@@ -583,7 +598,7 @@
 
 The default value is 1.
 
-See Documentation/nommu-mmap.txt for more information.
+See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
 
 
 numa_zonelist_order
diff --git a/Documentation/admin-guide/tainted-kernels.rst b/Documentation/admin-guide/tainted-kernels.rst
index 71e9184..abf8047 100644
--- a/Documentation/admin-guide/tainted-kernels.rst
+++ b/Documentation/admin-guide/tainted-kernels.rst
@@ -38,7 +38,7 @@
 
 	Tainted: P        W  O
 
-The meaning of those characters is explained in the table below. In tis case
+The meaning of those characters is explained in the table below. In this case
 the kernel got tainted earlier because a proprietary Module (``P``) was loaded,
 a warning occurred (``W``), and an externally-built module was loaded (``O``).
 To decode other letters use the table below.
@@ -61,7 +61,7 @@
 	 * Proprietary module was loaded (#0)
 	 * Kernel issued warning (#9)
 	 * Externally-built ('out-of-tree') module was loaded  (#12)
-	See Documentation/admin-guide/tainted-kernels.rst in the the Linux kernel or
+	See Documentation/admin-guide/tainted-kernels.rst in the Linux kernel or
 	 https://www.kernel.org/doc/html/latest/admin-guide/tainted-kernels.html for
 	 a more details explanation of the various taint flags.
 	Raw taint value as int/string: 4609/'P        W  O     '
diff --git a/Documentation/admin-guide/thunderbolt.rst b/Documentation/admin-guide/thunderbolt.rst
index 10c4f0c..613cb24 100644
--- a/Documentation/admin-guide/thunderbolt.rst
+++ b/Documentation/admin-guide/thunderbolt.rst
@@ -173,8 +173,8 @@
 
   ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1"
 
-Upgrading NVM on Thunderbolt device or host
--------------------------------------------
+Upgrading NVM on Thunderbolt device, host or retimer
+----------------------------------------------------
 Since most of the functionality is handled in firmware running on a
 host controller or a device, it is important that the firmware can be
 upgraded to the latest where possible bugs in it have been fixed.
@@ -185,9 +185,10 @@
 
   `Thunderbolt Updates <https://thunderbolttechnology.net/updates>`_
 
-Before you upgrade firmware on a device or host, please make sure it is a
-suitable upgrade. Failing to do that may render the device (or host) in a
-state where it cannot be used properly anymore without special tools!
+Before you upgrade firmware on a device, host or retimer, please make
+sure it is a suitable upgrade. Failing to do that may render the device
+in a state where it cannot be used properly anymore without special
+tools!
 
 Host NVM upgrade on Apple Macs is not supported.
 
diff --git a/Documentation/admin-guide/xfs.rst b/Documentation/admin-guide/xfs.rst
index ad911be..f461d6c 100644
--- a/Documentation/admin-guide/xfs.rst
+++ b/Documentation/admin-guide/xfs.rst
@@ -133,7 +133,7 @@
 	logbsize must be an integer multiple of the log
 	stripe unit configured at **mkfs(8)** time.
 
-	The default value for for version 1 logs is 32768, while the
+	The default value for version 1 logs is 32768, while the
 	default value for version 2 logs is MAX(32768, log_sunit).
 
   logdev=device and rtdev=device
diff --git a/Documentation/arm/arm.rst b/Documentation/arm/arm.rst
index 2edc509..99d660f 100644
--- a/Documentation/arm/arm.rst
+++ b/Documentation/arm/arm.rst
@@ -184,10 +184,8 @@
   We group machine (or platform) support code into machine classes.  A
   class typically based around one or more system on a chip devices, and
   acts as a natural container around the actual implementations.  These
-  classes are given directories - arch/arm/mach-<class> and
-  arch/arm/mach-<class> - which contain the source files to/include/mach
-  support the machine class.  This directories also contain any machine
-  specific supporting code.
+  classes are given directories - arch/arm/mach-<class> - which contain
+  the source files and include/mach/ to support the machine class.
 
   For example, the SA1100 class is based upon the SA1100 and SA1110 SoC
   devices, and contains the code to support the way the on-board and off-
diff --git a/Documentation/arm/booting.rst b/Documentation/arm/booting.rst
index 4babb6c..a226345 100644
--- a/Documentation/arm/booting.rst
+++ b/Documentation/arm/booting.rst
@@ -128,7 +128,7 @@
 
 The boot loader must load a device tree image (dtb) into system ram
 at a 64bit aligned address and initialize it with the boot data.  The
-dtb format is documented in Documentation/devicetree/booting-without-of.txt.
+dtb format is documented in Documentation/devicetree/booting-without-of.rst.
 The kernel will look for the dtb magic value of 0xd00dfeed at the dtb
 physical address to determine if a dtb has been passed instead of a
 tagged list.
diff --git a/Documentation/arm64/acpi_object_usage.rst b/Documentation/arm64/acpi_object_usage.rst
index d51b69d..377e9d2 100644
--- a/Documentation/arm64/acpi_object_usage.rst
+++ b/Documentation/arm64/acpi_object_usage.rst
@@ -220,7 +220,7 @@
        x86 only table as of ACPI 5.1; starting with ACPI 6.0, processor
        descriptions and power states on ARM platforms should use the DSDT
        and define processor container devices (_HID ACPI0010, Section 8.4,
-       and more specifically 8.4.3 and and 8.4.4).
+       and more specifically 8.4.3 and 8.4.4).
 
 MADT   Section 5.2.12 (signature == "APIC")
 
diff --git a/Documentation/arm64/arm-acpi.rst b/Documentation/arm64/arm-acpi.rst
index 872dbbc..47ecb99 100644
--- a/Documentation/arm64/arm-acpi.rst
+++ b/Documentation/arm64/arm-acpi.rst
@@ -273,7 +273,7 @@
 
    - UUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301
 
-   - http://www.uefi.org/sites/default/files/resources/_DSD-device-properties-UUID.pdf
+   - https://www.uefi.org/sites/default/files/resources/_DSD-device-properties-UUID.pdf
 
 The UEFI Forum provides a mechanism for registering device properties [4]
 so that they may be used across all operating systems supporting ACPI.
@@ -470,7 +470,7 @@
 
 Linux Code
 ----------
-Individual items specific to Linux on ARM, contained in the the Linux
+Individual items specific to Linux on ARM, contained in the Linux
 source code, are in the list that follows:
 
 ACPI_OS_NAME
diff --git a/Documentation/arm64/index.rst b/Documentation/arm64/index.rst
index 09cbb4e..d9665d8 100644
--- a/Documentation/arm64/index.rst
+++ b/Documentation/arm64/index.rst
@@ -14,6 +14,7 @@
     hugetlbpage
     legacy_instructions
     memory
+    perf
     pointer-authentication
     silicon-errata
     sve
diff --git a/Documentation/arm64/perf.rst b/Documentation/arm64/perf.rst
new file mode 100644
index 0000000..9c76a97
--- /dev/null
+++ b/Documentation/arm64/perf.rst
@@ -0,0 +1,88 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================
+Perf Event Attributes
+=====================
+
+:Author: Andrew Murray <andrew.murray@arm.com>
+:Date: 2019-03-06
+
+exclude_user
+------------
+
+This attribute excludes userspace.
+
+Userspace always runs at EL0 and thus this attribute will exclude EL0.
+
+
+exclude_kernel
+--------------
+
+This attribute excludes the kernel.
+
+The kernel runs at EL2 with VHE and EL1 without. Guest kernels always run
+at EL1.
+
+For the host this attribute will exclude EL1 and additionally EL2 on a VHE
+system.
+
+For the guest this attribute will exclude EL1. Please note that EL2 is
+never counted within a guest.
+
+
+exclude_hv
+----------
+
+This attribute excludes the hypervisor.
+
+For a VHE host this attribute is ignored as we consider the host kernel to
+be the hypervisor.
+
+For a non-VHE host this attribute will exclude EL2 as we consider the
+hypervisor to be any code that runs at EL2 which is predominantly used for
+guest/host transitions.
+
+For the guest this attribute has no effect. Please note that EL2 is
+never counted within a guest.
+
+
+exclude_host / exclude_guest
+----------------------------
+
+These attributes exclude the KVM host and guest, respectively.
+
+The KVM host may run at EL0 (userspace), EL1 (non-VHE kernel) and EL2 (VHE
+kernel or non-VHE hypervisor).
+
+The KVM guest may run at EL0 (userspace) and EL1 (kernel).
+
+Due to the overlapping exception levels between host and guests we cannot
+exclusively rely on the PMU's hardware exception filtering - therefore we
+must enable/disable counting on the entry and exit to the guest. This is
+performed differently on VHE and non-VHE systems.
+
+For non-VHE systems we exclude EL2 for exclude_host - upon entering and
+exiting the guest we disable/enable the event as appropriate based on the
+exclude_host and exclude_guest attributes.
+
+For VHE systems we exclude EL1 for exclude_guest and exclude both EL0,EL2
+for exclude_host. Upon entering and exiting the guest we modify the event
+to include/exclude EL0 as appropriate based on the exclude_host and
+exclude_guest attributes.
+
+The statements above also apply when these attributes are used within a
+non-VHE guest; however, please note that EL2 is never counted within a guest.
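+
+As an illustration only (not part of the attribute description above), a
+minimal userspace sketch of a cycle counter restricted to guest execution
+might look as follows; the helper name and the event choice are hypothetical,
+while the ``perf_event_attr`` fields are the ones described in this document::
+
+    #include <linux/perf_event.h>
+    #include <sys/syscall.h>
+    #include <unistd.h>
+    #include <string.h>
+
+    /* Hypothetical helper: count CPU cycles only while the KVM guest runs. */
+    static int open_guest_cycles(pid_t pid, int cpu)
+    {
+            struct perf_event_attr attr;
+
+            memset(&attr, 0, sizeof(attr));
+            attr.type = PERF_TYPE_HARDWARE;
+            attr.size = sizeof(attr);
+            attr.config = PERF_COUNT_HW_CPU_CYCLES;
+            attr.exclude_host = 1;  /* do not count while the KVM host runs */
+            attr.exclude_hv = 1;    /* on non-VHE hosts, also filter out EL2 */
+
+            return syscall(__NR_perf_event_open, &attr, pid, cpu, -1, 0);
+    }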
+
+
+Accuracy
+--------
+
+On non-VHE hosts we enable/disable counters on the entry/exit of host/guest
+transition at EL2 - however there is a period of time between
+enabling/disabling the counters and entering/exiting the guest. We are
+able to eliminate counters counting host events on the boundaries of guest
+entry/exit when counting guest events by filtering out EL2 for
+exclude_host. However when using !exclude_hv there is a small blackout
+window at the guest entry/exit where host events are not captured.
+
+On VHE systems there are no blackout windows.
diff --git a/Documentation/arm64/perf.txt b/Documentation/arm64/perf.txt
deleted file mode 100644
index 0d6a7d8..0000000
--- a/Documentation/arm64/perf.txt
+++ /dev/null
@@ -1,85 +0,0 @@
-Perf Event Attributes
-=====================
-
-Author: Andrew Murray <andrew.murray@arm.com>
-Date: 2019-03-06
-
-exclude_user
-------------
-
-This attribute excludes userspace.
-
-Userspace always runs at EL0 and thus this attribute will exclude EL0.
-
-
-exclude_kernel
---------------
-
-This attribute excludes the kernel.
-
-The kernel runs at EL2 with VHE and EL1 without. Guest kernels always run
-at EL1.
-
-For the host this attribute will exclude EL1 and additionally EL2 on a VHE
-system.
-
-For the guest this attribute will exclude EL1. Please note that EL2 is
-never counted within a guest.
-
-
-exclude_hv
-----------
-
-This attribute excludes the hypervisor.
-
-For a VHE host this attribute is ignored as we consider the host kernel to
-be the hypervisor.
-
-For a non-VHE host this attribute will exclude EL2 as we consider the
-hypervisor to be any code that runs at EL2 which is predominantly used for
-guest/host transitions.
-
-For the guest this attribute has no effect. Please note that EL2 is
-never counted within a guest.
-
-
-exclude_host / exclude_guest
-----------------------------
-
-These attributes exclude the KVM host and guest, respectively.
-
-The KVM host may run at EL0 (userspace), EL1 (non-VHE kernel) and EL2 (VHE
-kernel or non-VHE hypervisor).
-
-The KVM guest may run at EL0 (userspace) and EL1 (kernel).
-
-Due to the overlapping exception levels between host and guests we cannot
-exclusively rely on the PMU's hardware exception filtering - therefore we
-must enable/disable counting on the entry and exit to the guest. This is
-performed differently on VHE and non-VHE systems.
-
-For non-VHE systems we exclude EL2 for exclude_host - upon entering and
-exiting the guest we disable/enable the event as appropriate based on the
-exclude_host and exclude_guest attributes.
-
-For VHE systems we exclude EL1 for exclude_guest and exclude both EL0,EL2
-for exclude_host. Upon entering and exiting the guest we modify the event
-to include/exclude EL0 as appropriate based on the exclude_host and
-exclude_guest attributes.
-
-The statements above also apply when these attributes are used within a
-non-VHE guest however please note that EL2 is never counted within a guest.
-
-
-Accuracy
---------
-
-On non-VHE hosts we enable/disable counters on the entry/exit of host/guest
-transition at EL2 - however there is a period of time between
-enabling/disabling the counters and entering/exiting the guest. We are
-able to eliminate counters counting host events on the boundaries of guest
-entry/exit when counting guest events by filtering out EL2 for
-exclude_host. However when using !exclude_hv there is a small blackout
-window at the guest entry/exit where host events are not captured.
-
-On VHE systems there are no blackout windows.
diff --git a/Documentation/arm64/silicon-errata.rst b/Documentation/arm64/silicon-errata.rst
index 3f7c3a7..d358780 100644
--- a/Documentation/arm64/silicon-errata.rst
+++ b/Documentation/arm64/silicon-errata.rst
@@ -125,6 +125,9 @@
 | Cavium         | ThunderX2 Core  | #219            | CAVIUM_TX2_ERRATUM_219      |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
+| Marvell        | ARM-MMU-500     | #582743         | N/A                         |
++----------------+-----------------+-----------------+-----------------------------+
++----------------+-----------------+-----------------+-----------------------------+
 | Freescale/NXP  | LS2080A/LS1043A | A-008585        | FSL_ERRATUM_A008585         |
 +----------------+-----------------+-----------------+-----------------------------+
 +----------------+-----------------+-----------------+-----------------------------+
diff --git a/Documentation/arm64/sve.rst b/Documentation/arm64/sve.rst
index bfd55f4..0313715 100644
--- a/Documentation/arm64/sve.rst
+++ b/Documentation/arm64/sve.rst
@@ -494,7 +494,7 @@
 Note: This section is for information only and not intended to be complete or
 to replace any architectural specification.
 
-Refer to [4] for for more information.
+Refer to [4] for more information.
 
 ARMv8-A defines the following floating-point / SIMD register state:
 
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 0ab747e..0f1fded 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -85,22 +85,22 @@
 the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
 and are doing it wrong.
 
-A subtle detail of atomic_set{}() is that it should be observable to the RMW
-ops. That is:
+A note for the implementation of atomic_set{}() is that it must not break the
+atomicity of the RMW ops. That is:
 
-  C atomic-set
+  C Atomic-RMW-ops-are-atomic-WRT-atomic_set
 
   {
-    atomic_set(v, 1);
+    atomic_t v = ATOMIC_INIT(1);
+  }
+
+  P0(atomic_t *v)
+  {
+    (void)atomic_add_unless(v, 1, 0);
   }
 
   P1(atomic_t *v)
   {
-    atomic_add_unless(v, 1, 0);
-  }
-
-  P2(atomic_t *v)
-  {
     atomic_set(v, 0);
   }
 
@@ -233,19 +233,19 @@
 is an ACQUIRE pattern (though very much not typical), but again the barrier is
 strictly stronger than ACQUIRE. As illustrated:
 
-  C strong-acquire
+  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
 
   {
   }
 
-  P1(int *x, atomic_t *y)
+  P0(int *x, atomic_t *y)
   {
     r0 = READ_ONCE(*x);
     smp_rmb();
     r1 = atomic_read(y);
   }
 
-  P2(int *x, atomic_t *y)
+  P1(int *x, atomic_t *y)
   {
     atomic_inc(y);
     smp_mb__after_atomic();
@@ -253,14 +253,14 @@
   }
 
   exists
-  (r0=1 /\ r1=0)
+  (0:r0=1 /\ 0:r1=0)
 
 This should not happen; but a hypothetical atomic_inc_acquire() --
 (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
 because it would not order the W part of the RMW against the following
 WRITE_ONCE.  Thus:
 
-  P1			P2
+  P0			P1
 
 			t = LL.acq *y (0)
 			t++;
diff --git a/Documentation/block/biodoc.rst b/Documentation/block/biodoc.rst
index b964796..1d4d71e 100644
--- a/Documentation/block/biodoc.rst
+++ b/Documentation/block/biodoc.rst
@@ -196,7 +196,7 @@
 do not have a corresponding kernel virtual address space mapping) and
 low-memory pages.
 
-Note: Please refer to Documentation/DMA-API-HOWTO.txt for a discussion
+Note: Please refer to :doc:`/core-api/dma-api-howto` for a discussion
 on PCI high mem DMA aspects and mapping of scatter gather lists, and support
 for 64 bit PCI.
 
@@ -1036,7 +1036,7 @@
 provides drivers with a sector number relative to whole device, rather than
 having to take partition number into account in order to arrive at the true
 sector number. The routine blk_partition_remap() is invoked by
-generic_make_request even before invoking the queue specific make_request_fn,
+submit_bio_noacct even before invoking the queue specific ->submit_bio,
 so the i/o scheduler also gets to operate on whole disk sector numbers. This
 should typically not require changes to block drivers, it just never gets
 to invoke its own partition sector offset calculations since all bios
diff --git a/Documentation/block/blk-mq.rst b/Documentation/block/blk-mq.rst
new file mode 100644
index 0000000..88c56af
--- /dev/null
+++ b/Documentation/block/blk-mq.rst
@@ -0,0 +1,153 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+================================================
+Multi-Queue Block IO Queueing Mechanism (blk-mq)
+================================================
+
+The Multi-Queue Block IO Queueing Mechanism is an API to enable fast storage
+devices to achieve a huge number of input/output operations per second (IOPS)
+through queueing and submitting IO requests to block devices simultaneously,
+benefiting from the parallelism offered by modern storage devices.
+
+Introduction
+============
+
+Background
+----------
+
+Magnetic hard disks have been the de facto standard from the beginning of the
+development of the kernel. The Block IO subsystem aimed to achieve the best
+performance possible for those devices, which have a high penalty when doing
+random access, the bottleneck being the mechanical moving parts, a lot slower
+than any other layer in the storage stack. One example of such an optimization
+technique
+involves ordering read/write requests according to the current position of the
+hard disk head.
+
+However, with the development of Solid State Drives and Non-Volatile Memories,
+which have neither mechanical parts nor a random access penalty and are capable
+of performing highly parallel access, the bottleneck of the stack moved from the
+storage device to the operating system. In order to take advantage of the
+parallelism in those devices' design, the multi-queue mechanism was introduced.
+
+The former design had a single queue to store block IO requests with a single
+lock. That did not scale well in SMP systems due to dirty data in cache and the
+bottleneck of having a single lock for multiple processors. This setup also
+suffered from congestion when different processes (or the same process, moving
+to different CPUs) wanted to perform block IO. Instead of this, the blk-mq API
+spawns multiple queues with individual entry points local to the CPU, removing
+the need for a lock. A deeper explanation on how this works is covered in the
+following section (`Operation`_).
+
+Operation
+---------
+
+When the userspace performs IO to a block device (reading or writing a file,
+for instance), blk-mq takes action: it will store and manage IO requests to
+the block device, acting as middleware between the userspace (and a file
+system, if present) and the block device driver.
+
+blk-mq has two groups of queues: software staging queues and hardware dispatch
+queues. When the request arrives at the block layer, it will try the shortest
+path possible: send it directly to the hardware queue. However, there are two
+cases in which it might not do that: if there's an IO scheduler attached to the
+layer or if we want to try to merge requests. In both cases, requests will be
+sent to the software queue.
+
+Then, after the requests are processed by software queues, they will be placed
+on the hardware queue, a second stage queue where the hardware has direct access
+to process those requests. However, if the hardware does not have enough
+resources to accept more requests, blk-mq will place requests on a temporary
+queue, to be sent in the future, when the hardware is able.
+
+Software staging queues
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The block IO subsystem adds requests to the software staging queues
+(represented by struct :c:type:`blk_mq_ctx`) if they weren't sent
+directly to the driver. A request is one or more BIOs. They arrive at the
+block layer through the data structure struct :c:type:`bio`. The block layer
+will then build a new structure from it, the struct :c:type:`request` that will
+be used to communicate with the device driver. Each queue has its own lock and
+the number of queues is defined on a per-CPU or per-node basis.
+
+The staging queue can be used to merge requests for adjacent sectors. For
+instance, requests for sectors 3-6, 6-7, 7-9 can become one request for 3-9.
+Even though random access to SSDs and NVMs has the same response time as
+sequential access, grouped requests for sequential access decrease the
+number of individual requests. This technique of merging requests is called
+plugging.
+
+Along with that, an IO scheduler can reorder the requests to ensure fairness
+of system resources (e.g. to ensure that no application suffers from
+starvation) and/or to improve IO performance.
+
+IO Schedulers
+^^^^^^^^^^^^^
+
+There are several schedulers implemented by the block layer, each one following
+a heuristic to improve the IO performance. They are "pluggable" (as in plug
+and play), in the sense that they can be selected at run time using sysfs. You
+can read more about Linux's IO schedulers `here
+<https://www.kernel.org/doc/html/latest/block/index.html>`_. The scheduling
+happens only between requests in the same queue, so it is not possible to merge
+requests from different queues, otherwise there would be cache thrashing and a
+need to have a lock for each queue. After the scheduling, the requests are
+eligible to be sent to the hardware. One of the possible schedulers to be
+selected is the NONE scheduler, the most straightforward one. It will just
+place requests on whatever software queue the process is running on, without
+any reordering. When the device starts processing requests in the hardware
+queue (a.k.a. run the hardware queue), the software queues mapped to that
+hardware queue will be drained in sequence according to their mapping.
+
+Hardware dispatch queues
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The hardware queue (represented by struct :c:type:`blk_mq_hw_ctx`) is a struct
+used by device drivers to map the device submission queues (or device DMA ring
+buffer), and is the last step of the block layer submission code before the
+low level device driver takes ownership of the request. To run this queue, the
+block layer removes requests from the associated software queues and tries to
+dispatch to the hardware.
+
+If it's not possible to send the requests directly to hardware, they will be
+added to a linked list (:c:type:`hctx->dispatch`) of requests. Then,
+next time the block layer runs a queue, it will send the requests lying in the
+:c:type:`dispatch` list first, to ensure fair dispatch of those
+requests that were ready to be sent first. The number of hardware queues
+depends on the number of hardware contexts supported by the hardware and its
+device driver, but it will not be more than the number of cores in the system.
+There is no reordering at this stage, and each software queue has a set of
+hardware queues to which it can send requests.
+
+.. note::
+
+        Neither the block layer nor the device protocols guarantee
+        the order of completion of requests. This must be handled by
+        higher layers, like the filesystem.
+
+Tag-based completion
+~~~~~~~~~~~~~~~~~~~~
+
+In order to indicate which request has been completed, every request is
+identified by an integer, ranging from 0 to the dispatch queue size. This tag
+is generated by the block layer and later reused by the device driver, removing
+the need to create a redundant identifier. When a request is completed in the
+drive, the tag is sent back to the block layer to notify it of the finalization.
+This removes the need to do a linear search to find out which IO has been
+completed.
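+
+As a rough sketch of how the pieces above fit together (the driver-specific
+names ``my_dev``, ``my_queue_rq``, ``my_mq_ops`` and the queue depth are made
+up for illustration), a driver describes its hardware dispatch queues and tag
+space through a struct :c:type:`blk_mq_tag_set` and handles requests in the
+``queue_rq`` callback of struct :c:type:`blk_mq_ops`::
+
+    #include <linux/blk-mq.h>
+
+    /* hypothetical driver context */
+    struct my_dev {
+            struct blk_mq_tag_set tag_set;
+            struct request_queue *queue;
+    };
+
+    static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
+                                    const struct blk_mq_queue_data *bd)
+    {
+            struct request *rq = bd->rq;
+
+            blk_mq_start_request(rq);
+            /* hand the request (identified by its tag) to the hardware here */
+            blk_mq_end_request(rq, BLK_STS_OK); /* completed immediately in this sketch */
+            return BLK_STS_OK;
+    }
+
+    static const struct blk_mq_ops my_mq_ops = {
+            .queue_rq = my_queue_rq,
+    };
+
+    static int my_dev_init_queue(struct my_dev *dev)
+    {
+            int ret;
+
+            dev->tag_set.ops = &my_mq_ops;
+            dev->tag_set.nr_hw_queues = 1;  /* one hardware dispatch queue */
+            dev->tag_set.queue_depth = 128; /* also bounds the tag space */
+            dev->tag_set.numa_node = NUMA_NO_NODE;
+            dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
+
+            ret = blk_mq_alloc_tag_set(&dev->tag_set);
+            if (ret)
+                    return ret;
+
+            dev->queue = blk_mq_init_queue(&dev->tag_set);
+            if (IS_ERR(dev->queue)) {
+                    blk_mq_free_tag_set(&dev->tag_set);
+                    return PTR_ERR(dev->queue);
+            }
+            return 0;
+    }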
+
+Further reading
+---------------
+
+- `Linux Block IO: Introducing Multi-queue SSD Access on Multi-core Systems <http://kernel.dk/blk-mq.pdf>`_
+
+- `NOOP scheduler <https://en.wikipedia.org/wiki/Noop_scheduler>`_
+
+- `Null block device driver <https://www.kernel.org/doc/html/latest/block/null_blk.html>`_
+
+Source code documentation
+=========================
+
+.. kernel-doc:: include/linux/blk-mq.h
+
+.. kernel-doc:: block/blk-mq.c
diff --git a/Documentation/block/index.rst b/Documentation/block/index.rst
index 026addf..86dcf71 100644
--- a/Documentation/block/index.rst
+++ b/Documentation/block/index.rst
@@ -10,6 +10,7 @@
    bfq-iosched
    biodoc
    biovecs
+   blk-mq
    capability
    cmdline-partition
    data-integrity
diff --git a/Documentation/block/pr.rst b/Documentation/block/pr.rst
index 30ea1c2..c893d6d 100644
--- a/Documentation/block/pr.rst
+++ b/Documentation/block/pr.rst
@@ -9,7 +9,7 @@
 setup.
 
 This document gives a general overview of the support ioctl commands.
-For a more detailed reference please refer the the SCSI Primary
+For a more detailed reference please refer to the SCSI Primary
 Commands standard, specifically the section on Reservations and the
 "PERSISTENT RESERVE IN" and "PERSISTENT RESERVE OUT" commands.
 
diff --git a/Documentation/block/queue-sysfs.rst b/Documentation/block/queue-sysfs.rst
index 6a8513a..f261a5c 100644
--- a/Documentation/block/queue-sysfs.rst
+++ b/Documentation/block/queue-sysfs.rst
@@ -117,6 +117,20 @@
 data that will be submitted by the block layer core to the associated
 block driver.
 
+max_active_zones (RO)
+---------------------
+For zoned block devices (zoned attribute indicating "host-managed" or
+"host-aware"), the sum of zones belonging to any of the zone states:
+EXPLICIT OPEN, IMPLICIT OPEN or CLOSED, is limited by this value.
+If this value is 0, there is no limit.
+
+max_open_zones (RO)
+-------------------
+For zoned block devices (zoned attribute indicating "host-managed" or
+"host-aware"), the sum of zones belonging to any of the zone states:
+EXPLICIT OPEN or IMPLICIT OPEN, is limited by this value.
+If this value is 0, there is no limit.
+
 max_sectors_kb (RW)
 -------------------
 This is the maximum number of kilobytes that the block layer will allow
diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
index 2c752c5..b208488 100644
--- a/Documentation/block/writeback_cache_control.rst
+++ b/Documentation/block/writeback_cache_control.rst
@@ -47,7 +47,7 @@
 may both be set on a single bio.
 
 
-Implementation details for make_request_fn based block drivers
+Implementation details for bio based block drivers
 --------------------------------------------------------------
 
 These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst
index 12a246f..2df7b06 100644
--- a/Documentation/bpf/bpf_design_QA.rst
+++ b/Documentation/bpf/bpf_design_QA.rst
@@ -246,17 +246,6 @@
 this helper is only useful for experiments and prototypes.
 Tracing BPF programs are root only.
 
-Q: bpf_trace_printk() helper warning
-------------------------------------
-Q: When bpf_trace_printk() helper is used the kernel prints nasty
-warning message. Why is that?
-
-A: This is done to nudge program authors into better interfaces when
-programs need to pass data to user space. Like bpf_perf_event_output()
-can be used to efficiently stream data via perf ring buffer.
-BPF maps can be used for asynchronous data sharing between kernel
-and user space. bpf_trace_printk() should only be used for debugging.
-
 Q: New functionality via kernel modules?
 ----------------------------------------
 Q: Can BPF functionality such as new program or map types, new
diff --git a/Documentation/bpf/bpf_devel_QA.rst b/Documentation/bpf/bpf_devel_QA.rst
index 0b3db91..a26aa1b 100644
--- a/Documentation/bpf/bpf_devel_QA.rst
+++ b/Documentation/bpf/bpf_devel_QA.rst
@@ -643,5 +643,6 @@
 .. _selftests: ../../tools/testing/selftests/bpf/
 .. _Documentation/dev-tools/kselftest.rst:
    https://www.kernel.org/doc/html/latest/dev-tools/kselftest.html
+.. _Documentation/bpf/btf.rst: btf.rst
 
 Happy BPF hacking!
diff --git a/Documentation/bpf/btf.rst b/Documentation/bpf/btf.rst
index 4d565d2..b5361b8 100644
--- a/Documentation/bpf/btf.rst
+++ b/Documentation/bpf/btf.rst
@@ -691,6 +691,42 @@
 bpf_insn``. For ELF API, the ``insn_off`` is the byte offset from the
 beginning of section (``btf_ext_info_sec->sec_name_off``).
 
+4.2 .BTF_ids section
+====================
+
+The .BTF_ids section encodes BTF ID values that are used within the kernel.
+
+This section is created during the kernel compilation with the help of
+macros defined in ``include/linux/btf_ids.h`` header file. Kernel code can
+use them to create lists and sets (sorted lists) of BTF ID values.
+
+The ``BTF_ID_LIST`` and ``BTF_ID`` macros define an unsorted list of BTF ID
+values, with the following syntax::
+
+  BTF_ID_LIST(list)
+  BTF_ID(type1, name1)
+  BTF_ID(type2, name2)
+
+resulting in the following layout in the .BTF_ids section::
+
+  __BTF_ID__type1__name1__1:
+  .zero 4
+  __BTF_ID__type2__name2__2:
+  .zero 4
+
+The ``u32 list[];`` variable is defined to access the list.
+
+The ``BTF_ID_UNUSED`` macro defines 4 zero bytes. It's used when we
+want to define an unused entry in a BTF_ID_LIST, like::
+
+      BTF_ID_LIST(bpf_skb_output_btf_ids)
+      BTF_ID(struct, sk_buff)
+      BTF_ID_UNUSED
+      BTF_ID(struct, task_struct)
+
+All the BTF ID lists and sets are compiled into the .BTF_ids section and
+resolved during the linking phase of the kernel build by the ``resolve_btfids`` tool.
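+
+As a hedged example of how such a list might be consumed (the list and
+function names below are hypothetical), kernel code can compare a BTF ID
+against the values that ``resolve_btfids`` fills in::
+
+  #include <linux/btf_ids.h>
+
+  BTF_ID_LIST(my_btf_ids)          /* hypothetical list name */
+  BTF_ID(struct, task_struct)
+
+  static bool type_is_task(u32 btf_id)
+  {
+          /* my_btf_ids[0] holds the resolved BTF ID of struct task_struct */
+          return btf_id == my_btf_ids[0];
+  }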
+
 5. Using BTF
 ************
 
diff --git a/Documentation/bpf/index.rst b/Documentation/bpf/index.rst
index 38b4db8..7df2465 100644
--- a/Documentation/bpf/index.rst
+++ b/Documentation/bpf/index.rst
@@ -5,10 +5,10 @@
 This directory contains documentation for the BPF (Berkeley Packet
 Filter) facility, with a focus on the extended BPF version (eBPF).
 
-This kernel side documentation is still work in progress.  The main
+This kernel side documentation is still work in progress. The main
 textual documentation is (for historical reasons) described in
-`Documentation/networking/filter.rst`_, which describe both classical
-and extended BPF instruction-set.
+:ref:`networking-filter`, which describes both the classical and extended
+BPF instruction sets.
 The Cilium project also maintains a `BPF and XDP Reference Guide`_
 that goes into great technical depth about the BPF Architecture.
 
@@ -36,6 +36,12 @@
    bpf_devel_QA
 
 
+Helper functions
+================
+
+* `bpf-helpers(7)`_ maintains a list of helpers available to eBPF programs.
+
+
 Program types
 =============
 
@@ -48,6 +54,15 @@
    bpf_lsm
 
 
+Map types
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   map_cgroup_storage
+
+
 Testing and debugging BPF
 =========================
 
@@ -58,8 +73,17 @@
    s390
 
 
+Other
+=====
+
+.. toctree::
+   :maxdepth: 1
+
+   ringbuf
+
 .. Links:
-.. _Documentation/networking/filter.rst: ../networking/filter.txt
+.. _networking-filter: ../networking/filter.rst
 .. _man-pages: https://www.kernel.org/doc/man-pages/
-.. _bpf(2): http://man7.org/linux/man-pages/man2/bpf.2.html
-.. _BPF and XDP Reference Guide: http://cilium.readthedocs.io/en/latest/bpf/
+.. _bpf(2): https://man7.org/linux/man-pages/man2/bpf.2.html
+.. _bpf-helpers(7): https://man7.org/linux/man-pages/man7/bpf-helpers.7.html
+.. _BPF and XDP Reference Guide: https://docs.cilium.io/en/latest/bpf/
diff --git a/Documentation/bpf/map_cgroup_storage.rst b/Documentation/bpf/map_cgroup_storage.rst
new file mode 100644
index 0000000..cab9543
--- /dev/null
+++ b/Documentation/bpf/map_cgroup_storage.rst
@@ -0,0 +1,169 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+.. Copyright (C) 2020 Google LLC.
+
+===========================
+BPF_MAP_TYPE_CGROUP_STORAGE
+===========================
+
+The ``BPF_MAP_TYPE_CGROUP_STORAGE`` map type represents a local fixed-size
+storage. It is only available with ``CONFIG_CGROUP_BPF``, and to programs that
+attach to cgroups; the programs are made available by the same Kconfig. The
+storage is identified by the cgroup the program is attached to.
+
+The map provides local storage at the cgroup that the BPF program is attached
+to. It provides faster and simpler access than the general purpose hash
+table, which performs a hash table lookup, and requires the user to track live
+cgroups on their own.
+
+This document describes the usage and semantics of the
+``BPF_MAP_TYPE_CGROUP_STORAGE`` map type. Some of its behaviors were changed in
+Linux 5.9, and this document describes the differences.
+
+Usage
+=====
+
+The map uses a key of either type ``__u64 cgroup_inode_id`` or
+``struct bpf_cgroup_storage_key``, declared in ``linux/bpf.h``::
+
+    struct bpf_cgroup_storage_key {
+            __u64 cgroup_inode_id;
+            __u32 attach_type;
+    };
+
+``cgroup_inode_id`` is the inode id of the cgroup directory.
+``attach_type`` is the program's attach type.
+
+Linux 5.9 added support for type ``__u64 cgroup_inode_id`` as the key type.
+When this key type is used, then all attach types of the particular cgroup and
+map will share the same storage. Otherwise, if the type is
+``struct bpf_cgroup_storage_key``, then programs of different attach types
+will be isolated and see different storages.
+
+To access the storage in a program, use ``bpf_get_local_storage``::
+
+    void *bpf_get_local_storage(void *map, u64 flags)
+
+``flags`` is reserved for future use and must be 0.
+
+There is no implicit synchronization. Storages of ``BPF_MAP_TYPE_CGROUP_STORAGE``
+can be accessed by multiple programs across different CPUs, and the user should
+take care of synchronization themselves. The bpf infrastructure provides
+``struct bpf_spin_lock`` to synchronize the storage. See
+``tools/testing/selftests/bpf/progs/test_spin_lock.c``.
+
+Examples
+========
+
+Usage with key type as ``struct bpf_cgroup_storage_key``::
+
+    #include <bpf/bpf.h>
+
+    struct {
+            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
+            __type(key, struct bpf_cgroup_storage_key);
+            __type(value, __u32);
+    } cgroup_storage SEC(".maps");
+
+    int program(struct __sk_buff *skb)
+    {
+            __u32 *ptr = bpf_get_local_storage(&cgroup_storage, 0);
+            __sync_fetch_and_add(ptr, 1);
+
+            return 0;
+    }
+
+Userspace accessing map declared above::
+
+    #include <linux/bpf.h>
+    #include <linux/libbpf.h>
+
+    __u32 map_lookup(struct bpf_map *map, __u64 cgrp, enum bpf_attach_type type)
+    {
+            struct bpf_cgroup_storage_key key = {
+                    .cgroup_inode_id = cgrp,
+                    .attach_type = type,
+            };
+            __u32 value;
+            bpf_map_lookup_elem(bpf_map__fd(map), &key, &value);
+            // error checking omitted
+            return value;
+    }
+
+Alternatively, using just ``__u64 cgroup_inode_id`` as key type::
+
+    #include <bpf/bpf.h>
+
+    struct {
+            __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
+            __type(key, __u64);
+            __type(value, __u32);
+    } cgroup_storage SEC(".maps");
+
+    int program(struct __sk_buff *skb)
+    {
+            __u32 *ptr = bpf_get_local_storage(&cgroup_storage, 0);
+            __sync_fetch_and_add(ptr, 1);
+
+            return 0;
+    }
+
+And userspace::
+
+    #include <linux/bpf.h>
+    #include <linux/libbpf.h>
+
+    __u32 map_lookup(struct bpf_map *map, __u64 cgrp, enum bpf_attach_type type)
+    {
+            __u32 value;
+            bpf_map_lookup_elem(bpf_map__fd(map), &cgrp, &value);
+            // error checking omitted
+            return value;
+    }
+
+Semantics
+=========
+
+``BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE`` is a variant of this map type. This
+per-CPU variant will have different memory regions for each CPU for each
+storage. The non-per-CPU variant will have the same memory region for each storage.
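+
+A minimal sketch of the per-CPU variant, mirroring the earlier example (the map
+name is again arbitrary); since ``bpf_get_local_storage`` returns the calling
+CPU's copy, a plain increment would also typically be safe here::
+
+    #include <bpf/bpf.h>
+
+    struct {
+            __uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
+            __type(key, struct bpf_cgroup_storage_key);
+            __type(value, __u64);
+    } percpu_storage SEC(".maps");
+
+    int program(struct __sk_buff *skb)
+    {
+            __u64 *ptr = bpf_get_local_storage(&percpu_storage, 0);
+            __sync_fetch_and_add(ptr, 1);
+
+            return 0;
+    }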
+
+Prior to Linux 5.9, the lifetime of a storage is precisely per-attachment, and
+for a single ``CGROUP_STORAGE`` map, there can be at most one program loaded
+that uses the map. A program may be attached to multiple cgroups or have
+multiple attach types, and each attach creates a fresh zeroed storage. The
+storage is freed upon detach.
+
+There is a one-to-one association between the map of each type (per-CPU and
+non-per-CPU) and the BPF program during load verification time. As a result,
+each map can only be used by one BPF program and each BPF program can only use
+one storage map of each type. Because a map can only be used by one BPF
+program, sharing of a cgroup's storage with other BPF programs was
+impossible.
+
+Since Linux 5.9, storage can be shared by multiple programs. When a program is
+attached to a cgroup, the kernel creates a new storage only if the map
+does not already contain an entry for the cgroup and attach type pair, or else
+the old storage is reused for the new attachment. If the map is attach type
+shared, then attach type is simply ignored during comparison. Storage is freed
+only when either the map or the cgroup attached to is being freed. Detaching
+will not directly free the storage, but it may cause the reference to the map
+to reach zero and indirectly free all storage in the map.
+
+The map is not associated with any BPF program, thus making sharing possible.
+However, the BPF program can still only associate with one map of each type
+(per-CPU and non-per-CPU). A BPF program cannot use more than one
+``BPF_MAP_TYPE_CGROUP_STORAGE`` or more than one
+``BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE``.
+
+In all versions, userspace may use the attach parameters of cgroup and
+attach type pair in ``struct bpf_cgroup_storage_key`` as the key to the BPF map
+APIs to read or update the storage for a given attachment. For Linux 5.9
+attach type shared storages, only the first value in the struct, cgroup inode
+id, is used during comparison, so userspace may just specify a ``__u64``
+directly.
+
+The storage is bound at attach time. Even if the program is attached to parent
+and triggers in child, the storage still belongs to the parent.
+
+Userspace cannot create a new entry in the map or delete an existing entry.
+Program test runs always use a temporary storage.
diff --git a/Documentation/bus-virt-phys-mapping.txt b/Documentation/bus-virt-phys-mapping.txt
deleted file mode 100644
index 4bb07c2..0000000
--- a/Documentation/bus-virt-phys-mapping.txt
+++ /dev/null
@@ -1,220 +0,0 @@
-==========================================================
-How to access I/O mapped memory from within device drivers
-==========================================================
-
-:Author: Linus
-
-.. warning::
-
-	The virt_to_bus() and bus_to_virt() functions have been
-	superseded by the functionality provided by the PCI DMA interface
-	(see Documentation/DMA-API-HOWTO.txt).  They continue
-	to be documented below for historical purposes, but new code
-	must not use them. --davidm 00/12/12
-
-::
-
-  [ This is a mail message in response to a query on IO mapping, thus the
-    strange format for a "document" ]
-
-The AHA-1542 is a bus-master device, and your patch makes the driver give the
-controller the physical address of the buffers, which is correct on x86
-(because all bus master devices see the physical memory mappings directly). 
-
-However, on many setups, there are actually **three** different ways of looking
-at memory addresses, and in this case we actually want the third, the
-so-called "bus address". 
-
-Essentially, the three ways of addressing memory are (this is "real memory",
-that is, normal RAM--see later about other details): 
-
- - CPU untranslated.  This is the "physical" address.  Physical address 
-   0 is what the CPU sees when it drives zeroes on the memory bus.
-
- - CPU translated address. This is the "virtual" address, and is 
-   completely internal to the CPU itself with the CPU doing the appropriate
-   translations into "CPU untranslated". 
-
- - bus address. This is the address of memory as seen by OTHER devices, 
-   not the CPU. Now, in theory there could be many different bus 
-   addresses, with each device seeing memory in some device-specific way, but
-   happily most hardware designers aren't actually actively trying to make
-   things any more complex than necessary, so you can assume that all 
-   external hardware sees the memory the same way. 
-
-Now, on normal PCs the bus address is exactly the same as the physical
-address, and things are very simple indeed. However, they are that simple
-because the memory and the devices share the same address space, and that is
-not generally necessarily true on other PCI/ISA setups. 
-
-Now, just as an example, on the PReP (PowerPC Reference Platform), the 
-CPU sees a memory map something like this (this is from memory)::
-
-	0-2 GB		"real memory"
-	2 GB-3 GB	"system IO" (inb/out and similar accesses on x86)
-	3 GB-4 GB 	"IO memory" (shared memory over the IO bus)
-
-Now, that looks simple enough. However, when you look at the same thing from
-the viewpoint of the devices, you have the reverse, and the physical memory
-address 0 actually shows up as address 2 GB for any IO master.
-
-So when the CPU wants any bus master to write to physical memory 0, it 
-has to give the master address 0x80000000 as the memory address.
-
-So, for example, depending on how the kernel is actually mapped on the 
-PPC, you can end up with a setup like this::
-
- physical address:	0
- virtual address:	0xC0000000
- bus address:		0x80000000
-
-where all the addresses actually point to the same thing.  It's just seen 
-through different translations..
-
-Similarly, on the Alpha, the normal translation is::
-
- physical address:	0
- virtual address:	0xfffffc0000000000
- bus address:		0x40000000
-
-(but there are also Alphas where the physical address and the bus address
-are the same). 
-
-Anyway, the way to look up all these translations, you do::
-
-	#include <asm/io.h>
-
-	phys_addr = virt_to_phys(virt_addr);
-	virt_addr = phys_to_virt(phys_addr);
-	 bus_addr = virt_to_bus(virt_addr);
-	virt_addr = bus_to_virt(bus_addr);
-
-Now, when do you need these?
-
-You want the **virtual** address when you are actually going to access that
-pointer from the kernel. So you can have something like this::
-
-	/*
-	 * this is the hardware "mailbox" we use to communicate with
-	 * the controller. The controller sees this directly.
-	 */
-	struct mailbox {
-		__u32 status;
-		__u32 bufstart;
-		__u32 buflen;
-		..
-	} mbox;
-
-		unsigned char * retbuffer;
-
-		/* get the address from the controller */
-		retbuffer = bus_to_virt(mbox.bufstart);
-		switch (retbuffer[0]) {
-			case STATUS_OK:
-				...
-
-on the other hand, you want the bus address when you have a buffer that 
-you want to give to the controller::
-
-	/* ask the controller to read the sense status into "sense_buffer" */
-	mbox.bufstart = virt_to_bus(&sense_buffer);
-	mbox.buflen = sizeof(sense_buffer);
-	mbox.status = 0;
-	notify_controller(&mbox);
-
-And you generally **never** want to use the physical address, because you can't
-use that from the CPU (the CPU only uses translated virtual addresses), and
-you can't use it from the bus master. 
-
-So why do we care about the physical address at all? We do need the physical
-address in some cases, it's just not very often in normal code.  The physical
-address is needed if you use memory mappings, for example, because the
-"remap_pfn_range()" mm function wants the physical address of the memory to
-be remapped as measured in units of pages, a.k.a. the pfn (the memory
-management layer doesn't know about devices outside the CPU, so it
-shouldn't need to know about "bus addresses" etc).
-
-.. note::
-
-	The above is only one part of the whole equation. The above
-	only talks about "real memory", that is, CPU memory (RAM).
-
-There is a completely different type of memory too, and that's the "shared
-memory" on the PCI or ISA bus. That's generally not RAM (although in the case
-of a video graphics card it can be normal DRAM that is just used for a frame
-buffer), but can be things like a packet buffer in a network card etc. 
-
-This memory is called "PCI memory" or "shared memory" or "IO memory" or
-whatever, and there is only one way to access it: the readb/writeb and
-related functions. You should never take the address of such memory, because
-there is really nothing you can do with such an address: it's not
-conceptually in the same memory space as "real memory" at all, so you cannot
-just dereference a pointer. (Sadly, on x86 it **is** in the same memory space,
-so on x86 it actually works to just deference a pointer, but it's not
-portable). 
-
-For such memory, you can do things like:
-
- - reading::
-
-	/*
-	 * read first 32 bits from ISA memory at 0xC0000, aka
-	 * C000:0000 in DOS terms
-	 */
-	unsigned int signature = isa_readl(0xC0000);
-
- - remapping and writing::
-
-	/*
-	 * remap framebuffer PCI memory area at 0xFC000000,
-	 * size 1MB, so that we can access it: We can directly
-	 * access only the 640k-1MB area, so anything else
-	 * has to be remapped.
-	 */
-	void __iomem *baseptr = ioremap(0xFC000000, 1024*1024);
-
-	/* write a 'A' to the offset 10 of the area */
-	writeb('A',baseptr+10);
-
-	/* unmap when we unload the driver */
-	iounmap(baseptr);
-
- - copying and clearing::
-
-	/* get the 6-byte Ethernet address at ISA address E000:0040 */
-	memcpy_fromio(kernel_buffer, 0xE0040, 6);
-	/* write a packet to the driver */
-	memcpy_toio(0xE1000, skb->data, skb->len);
-	/* clear the frame buffer */
-	memset_io(0xA0000, 0, 0x10000);
-
-OK, that just about covers the basics of accessing IO portably.  Questions?
-Comments? You may think that all the above is overly complex, but one day you
-might find yourself with a 500 MHz Alpha in front of you, and then you'll be
-happy that your driver works ;)
-
-Note that kernel versions 2.0.x (and earlier) mistakenly called the
-ioremap() function "vremap()".  ioremap() is the proper name, but I
-didn't think straight when I wrote it originally.  People who have to
-support both can do something like::
- 
-	/* support old naming silliness */
-	#if LINUX_VERSION_CODE < 0x020100
-	#define ioremap vremap
-	#define iounmap vfree                                                     
-	#endif
- 
-at the top of their source files, and then they can use the right names
-even on 2.0.x systems. 
-
-And the above sounds worse than it really is.  Most real drivers really
-don't do all that complex things (or rather: the complexity is not so
-much in the actual IO accesses as in error handling and timeouts etc). 
-It's generally not hard to fix drivers, and in many cases the code
-actually looks better afterwards::
-
-	unsigned long signature = *(unsigned int *) 0xC0000;
-		vs
-	unsigned long signature = readl(0xC0000);
-
-I think the second version actually is more readable, no?
diff --git a/Documentation/cdrom/cdrom-standard.rst b/Documentation/cdrom/cdrom-standard.rst
index dde4f7f..70500b1 100644
--- a/Documentation/cdrom/cdrom-standard.rst
+++ b/Documentation/cdrom/cdrom-standard.rst
@@ -157,7 +157,6 @@
 		cdrom_release,		/∗ release ∗/
 		NULL,			/∗ fsync ∗/
 		NULL,			/∗ fasync ∗/
-		cdrom_media_changed,	/∗ media change ∗/
 		NULL			/∗ revalidate ∗/
 	};
 
@@ -368,19 +367,6 @@
 
 ::
 
-	int media_changed(struct cdrom_device_info *cdi, int disc_nr)
-
-This function is very similar to the original function in $struct
-file_operations*. It returns 1 if the medium of the device *cdi->dev*
-has changed since the last call, and 0 otherwise. The parameter
-*disc_nr* identifies a specific slot in a juke-box, it should be
-ignored for single-disc drives. Note that by `re-routing` this
-function through *cdrom_media_changed()*, we can implement separate
-queues for the VFS and a new *ioctl()* function that can report device
-changes to software (e. g., an auto-mounting daemon).
-
-::
-
 	int tray_move(struct cdrom_device_info *cdi, int position)
 
 This function, if implemented, should control the tray movement. (No
@@ -571,7 +557,7 @@
 	CDC_DRIVE_STATUS	/* driver implements drive status */
 
 The capability flag is declared *const*, to prevent drivers from
-accidentally tampering with the contents. The capability fags actually
+accidentally tampering with the contents. The capability flags actually
 inform `cdrom.c` of what the driver can do. If the drive found
 by the driver does not have the capability, is can be masked out by
 the *cdrom_device_info* variable *mask*. For instance, the SCSI CD-ROM
@@ -750,7 +736,7 @@
 
 Only a few routines in `cdrom.c` are exported to the drivers. In this
 new section we will discuss these, as well as the functions that `take
-over' the CD-ROM interface to the kernel. The header file belonging
+over` the CD-ROM interface to the kernel. The header file belonging
 to `cdrom.c` is called `cdrom.h`. Formerly, some of the contents of this
 file were placed in the file `ucdrom.h`, but this file has now been
 merged back into `cdrom.h`.
@@ -917,9 +903,7 @@
 	maximum number of discs in the juke-box found in the *cdrom_dops*.
 `CDROM_MEDIA_CHANGED`
 	Returns 1 if a disc has been changed since the last call.
-	Note that calls to *cdrom_media_changed* by the VFS are treated
-	by an independent queue, so both mechanisms will detect a
-	media change once. For juke-boxes, an extra argument *arg*
+	For juke-boxes, an extra argument *arg*
 	specifies the slot for which the information is given. The special
 	value *CDSL_CURRENT* requests that information about the currently
 	selected slot be returned.
diff --git a/Documentation/core-api/bus-virt-phys-mapping.rst b/Documentation/core-api/bus-virt-phys-mapping.rst
new file mode 100644
index 0000000..c7bc99c
--- /dev/null
+++ b/Documentation/core-api/bus-virt-phys-mapping.rst
@@ -0,0 +1,220 @@
+==========================================================
+How to access I/O mapped memory from within device drivers
+==========================================================
+
+:Author: Linus
+
+.. warning::
+
+	The virt_to_bus() and bus_to_virt() functions have been
+	superseded by the functionality provided by the PCI DMA interface
+	(see :doc:`/core-api/dma-api-howto`).  They continue
+	to be documented below for historical purposes, but new code
+	must not use them. --davidm 00/12/12
+
+::
+
+  [ This is a mail message in response to a query on IO mapping, thus the
+    strange format for a "document" ]
+
+The AHA-1542 is a bus-master device, and your patch makes the driver give the
+controller the physical address of the buffers, which is correct on x86
+(because all bus master devices see the physical memory mappings directly). 
+
+However, on many setups, there are actually **three** different ways of looking
+at memory addresses, and in this case we actually want the third, the
+so-called "bus address". 
+
+Essentially, the three ways of addressing memory are (this is "real memory",
+that is, normal RAM--see later about other details): 
+
+ - CPU untranslated.  This is the "physical" address.  Physical address 
+   0 is what the CPU sees when it drives zeroes on the memory bus.
+
+ - CPU translated address. This is the "virtual" address, and is 
+   completely internal to the CPU itself with the CPU doing the appropriate
+   translations into "CPU untranslated". 
+
+ - bus address. This is the address of memory as seen by OTHER devices, 
+   not the CPU. Now, in theory there could be many different bus 
+   addresses, with each device seeing memory in some device-specific way, but
+   happily most hardware designers aren't actually actively trying to make
+   things any more complex than necessary, so you can assume that all 
+   external hardware sees the memory the same way. 
+
+Now, on normal PCs the bus address is exactly the same as the physical
+address, and things are very simple indeed. However, they are that simple
+because the memory and the devices share the same address space, and that is
+not generally necessarily true on other PCI/ISA setups. 
+
+Now, just as an example, on the PReP (PowerPC Reference Platform), the 
+CPU sees a memory map something like this (this is from memory)::
+
+	0-2 GB		"real memory"
+	2 GB-3 GB	"system IO" (inb/out and similar accesses on x86)
+	3 GB-4 GB 	"IO memory" (shared memory over the IO bus)
+
+Now, that looks simple enough. However, when you look at the same thing from
+the viewpoint of the devices, you have the reverse, and the physical memory
+address 0 actually shows up as address 2 GB for any IO master.
+
+So when the CPU wants any bus master to write to physical memory 0, it 
+has to give the master address 0x80000000 as the memory address.
+
+So, for example, depending on how the kernel is actually mapped on the 
+PPC, you can end up with a setup like this::
+
+ physical address:	0
+ virtual address:	0xC0000000
+ bus address:		0x80000000
+
+where all the addresses actually point to the same thing.  It's just seen 
+through different translations..
+
+Similarly, on the Alpha, the normal translation is::
+
+ physical address:	0
+ virtual address:	0xfffffc0000000000
+ bus address:		0x40000000
+
+(but there are also Alphas where the physical address and the bus address
+are the same). 
+
+Anyway, the way to look up all these translations, you do::
+
+	#include <asm/io.h>
+
+	phys_addr = virt_to_phys(virt_addr);
+	virt_addr = phys_to_virt(phys_addr);
+	 bus_addr = virt_to_bus(virt_addr);
+	virt_addr = bus_to_virt(bus_addr);
+
+Now, when do you need these?
+
+You want the **virtual** address when you are actually going to access that
+pointer from the kernel. So you can have something like this::
+
+	/*
+	 * this is the hardware "mailbox" we use to communicate with
+	 * the controller. The controller sees this directly.
+	 */
+	struct mailbox {
+		__u32 status;
+		__u32 bufstart;
+		__u32 buflen;
+		..
+	} mbox;
+
+		unsigned char * retbuffer;
+
+		/* get the address from the controller */
+		retbuffer = bus_to_virt(mbox.bufstart);
+		switch (retbuffer[0]) {
+			case STATUS_OK:
+				...
+
+on the other hand, you want the bus address when you have a buffer that 
+you want to give to the controller::
+
+	/* ask the controller to read the sense status into "sense_buffer" */
+	mbox.bufstart = virt_to_bus(&sense_buffer);
+	mbox.buflen = sizeof(sense_buffer);
+	mbox.status = 0;
+	notify_controller(&mbox);
+
+And you generally **never** want to use the physical address, because you can't
+use that from the CPU (the CPU only uses translated virtual addresses), and
+you can't use it from the bus master. 
+
+So why do we care about the physical address at all? We do need the physical
+address in some cases, it's just not very often in normal code.  The physical
+address is needed if you use memory mappings, for example, because the
+"remap_pfn_range()" mm function wants the physical address of the memory to
+be remapped as measured in units of pages, a.k.a. the pfn (the memory
+management layer doesn't know about devices outside the CPU, so it
+shouldn't need to know about "bus addresses" etc).
+
+.. note::
+
+	The above is only one part of the whole equation. The above
+	only talks about "real memory", that is, CPU memory (RAM).
+
+There is a completely different type of memory too, and that's the "shared
+memory" on the PCI or ISA bus. That's generally not RAM (although in the case
+of a video graphics card it can be normal DRAM that is just used for a frame
+buffer), but can be things like a packet buffer in a network card etc. 
+
+This memory is called "PCI memory" or "shared memory" or "IO memory" or
+whatever, and there is only one way to access it: the readb/writeb and
+related functions. You should never take the address of such memory, because
+there is really nothing you can do with such an address: it's not
+conceptually in the same memory space as "real memory" at all, so you cannot
+just dereference a pointer. (Sadly, on x86 it **is** in the same memory space,
+so on x86 it actually works to just dereference a pointer, but it's not
+portable). 
+
+For such memory, you can do things like:
+
+ - reading::
+
+	/*
+	 * read first 32 bits from ISA memory at 0xC0000, aka
+	 * C000:0000 in DOS terms
+	 */
+	unsigned int signature = isa_readl(0xC0000);
+
+ - remapping and writing::
+
+	/*
+	 * remap framebuffer PCI memory area at 0xFC000000,
+	 * size 1MB, so that we can access it: We can directly
+	 * access only the 640k-1MB area, so anything else
+	 * has to be remapped.
+	 */
+	void __iomem *baseptr = ioremap(0xFC000000, 1024*1024);
+
+	/* write a 'A' to the offset 10 of the area */
+	writeb('A',baseptr+10);
+
+	/* unmap when we unload the driver */
+	iounmap(baseptr);
+
+ - copying and clearing::
+
+	/* get the 6-byte Ethernet address at ISA address E000:0040 */
+	memcpy_fromio(kernel_buffer, 0xE0040, 6);
+	/* write a packet to the driver */
+	memcpy_toio(0xE1000, skb->data, skb->len);
+	/* clear the frame buffer */
+	memset_io(0xA0000, 0, 0x10000);
+
+OK, that just about covers the basics of accessing IO portably.  Questions?
+Comments? You may think that all the above is overly complex, but one day you
+might find yourself with a 500 MHz Alpha in front of you, and then you'll be
+happy that your driver works ;)
+
+Note that kernel versions 2.0.x (and earlier) mistakenly called the
+ioremap() function "vremap()".  ioremap() is the proper name, but I
+didn't think straight when I wrote it originally.  People who have to
+support both can do something like::
+ 
+	/* support old naming silliness */
+	#if LINUX_VERSION_CODE < 0x020100
+	#define ioremap vremap
+	#define iounmap vfree                                                     
+	#endif
+ 
+at the top of their source files, and then they can use the right names
+even on 2.0.x systems. 
+
+And the above sounds worse than it really is.  Most real drivers really
+don't do all that complex things (or rather: the complexity is not so
+much in the actual IO accesses as in error handling and timeouts etc). 
+It's generally not hard to fix drivers, and in many cases the code
+actually looks better afterwards::
+
+	unsigned long signature = *(unsigned int *) 0xC0000;
+		vs
+	unsigned long signature = readl(0xC0000);
+
+I think the second version actually is more readable, no?
diff --git a/Documentation/core-api/cpu_hotplug.rst b/Documentation/core-api/cpu_hotplug.rst
index 4a50ab7..298c9c8 100644
--- a/Documentation/core-api/cpu_hotplug.rst
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -35,8 +35,8 @@
   other CPUs later online.
 
 ``nr_cpus=n``
-  Restrict the total amount CPUs the kernel will support. If the number
-  supplied here is lower than the number of physically available CPUs than
+  Restrict the total amount of CPUs the kernel will support. If the number
+  supplied here is lower than the number of physically available CPUs, then
   those CPUs can not be brought online later.
 
 ``additional_cpus=n``
@@ -50,13 +50,6 @@
 
   This option is limited to the X86 and S390 architecture.
 
-``cede_offline={"off","on"}``
-  Use this option to disable/enable putting offlined processors to an extended
-  ``H_CEDE`` state on supported pseries platforms. If nothing is specified,
-  ``cede_offline`` is set to "on".
-
-  This option is limited to the PowerPC architecture.
-
 ``cpu0_hotplug``
   Allow to shutdown CPU0.
 
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index f416204..3b3abbb 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -5,7 +5,7 @@
 :Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 
 This document describes the DMA API.  For a more gentle introduction
-of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
+of the API (and actual examples), see :doc:`/core-api/dma-api-howto`.
 
 This API is split into two pieces.  Part I describes the basic API.
 Part II describes extensions for supporting non-consistent memory
@@ -479,7 +479,7 @@
 dma_attrs.
 
 The interpretation of DMA attributes is architecture-specific, and
-each attribute should be documented in Documentation/DMA-attributes.txt.
+each attribute should be documented in :doc:`/core-api/dma-attributes`.
 
 If dma_attrs are 0, the semantics of each of these functions
 is identical to those of the corresponding function
@@ -492,7 +492,7 @@
 
 	#include <linux/dma-mapping.h>
 	/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
-	* documented in Documentation/DMA-attributes.txt */
+	* documented in Documentation/core-api/dma-attributes.rst */
 	...
 
 		unsigned long attr;
diff --git a/Documentation/core-api/dma-isa-lpc.rst b/Documentation/core-api/dma-isa-lpc.rst
index b1ec7b1..e59a3d3 100644
--- a/Documentation/core-api/dma-isa-lpc.rst
+++ b/Documentation/core-api/dma-isa-lpc.rst
@@ -17,7 +17,7 @@
 	#include <asm/dma.h>
 
 The first is the generic DMA API used to convert virtual addresses to
-bus addresses (see Documentation/DMA-API.txt for details).
+bus addresses (see :doc:`/core-api/dma-api` for details).
 
 The second contains the routines specific to ISA DMA transfers. Since
 this is not present on all platforms make sure you construct your
diff --git a/Documentation/core-api/idr.rst b/Documentation/core-api/idr.rst
index a273805..2eb5afd 100644
--- a/Documentation/core-api/idr.rst
+++ b/Documentation/core-api/idr.rst
@@ -20,48 +20,48 @@
 IDR usage
 =========
 
-Start by initialising an IDR, either with :c:func:`DEFINE_IDR`
-for statically allocated IDRs or :c:func:`idr_init` for dynamically
+Start by initialising an IDR, either with DEFINE_IDR()
+for statically allocated IDRs or idr_init() for dynamically
 allocated IDRs.
 
-You can call :c:func:`idr_alloc` to allocate an unused ID.  Look up
-the pointer you associated with the ID by calling :c:func:`idr_find`
-and free the ID by calling :c:func:`idr_remove`.
+You can call idr_alloc() to allocate an unused ID.  Look up
+the pointer you associated with the ID by calling idr_find()
+and free the ID by calling idr_remove().
 
 If you need to change the pointer associated with an ID, you can call
-:c:func:`idr_replace`.  One common reason to do this is to reserve an
+idr_replace().  One common reason to do this is to reserve an
 ID by passing a ``NULL`` pointer to the allocation function; initialise the
 object with the reserved ID and finally insert the initialised object
 into the IDR.
 
 Some users need to allocate IDs larger than ``INT_MAX``.  So far all of
 these users have been content with a ``UINT_MAX`` limit, and they use
-:c:func:`idr_alloc_u32`.  If you need IDs that will not fit in a u32,
+idr_alloc_u32().  If you need IDs that will not fit in a u32,
 we will work with you to address your needs.
 
 If you need to allocate IDs sequentially, you can use
-:c:func:`idr_alloc_cyclic`.  The IDR becomes less efficient when dealing
+idr_alloc_cyclic().  The IDR becomes less efficient when dealing
 with larger IDs, so using this function comes at a slight cost.
 
 To perform an action on all pointers used by the IDR, you can
-either use the callback-based :c:func:`idr_for_each` or the
-iterator-style :c:func:`idr_for_each_entry`.  You may need to use
-:c:func:`idr_for_each_entry_continue` to continue an iteration.  You can
-also use :c:func:`idr_get_next` if the iterator doesn't fit your needs.
+either use the callback-based idr_for_each() or the
+iterator-style idr_for_each_entry().  You may need to use
+idr_for_each_entry_continue() to continue an iteration.  You can
+also use idr_get_next() if the iterator doesn't fit your needs.
 
-When you have finished using an IDR, you can call :c:func:`idr_destroy`
+When you have finished using an IDR, you can call idr_destroy()
 to release the memory used by the IDR.  This will not free the objects
 pointed to from the IDR; if you want to do that, use one of the iterators
 to do it.
 
-You can use :c:func:`idr_is_empty` to find out whether there are any
+You can use idr_is_empty() to find out whether there are any
 IDs currently allocated.
 
 If you need to take a lock while allocating a new ID from the IDR,
 you may need to pass a restrictive set of GFP flags, which can lead
 to the IDR being unable to allocate memory.  To work around this,
-you can call :c:func:`idr_preload` before taking the lock, and then
-:c:func:`idr_preload_end` after the allocation.
+you can call idr_preload() before taking the lock, and then
+idr_preload_end() after the allocation.
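+
+As a brief, hedged sketch of the calls above (the object type, lock, and
+function names are made up for illustration)::
+
+	#include <linux/idr.h>
+	#include <linux/spinlock.h>
+
+	struct my_obj {
+		int id;
+		/* ... */
+	};
+
+	static DEFINE_IDR(my_idr);		/* hypothetical IDR instance */
+	static DEFINE_SPINLOCK(my_lock);
+
+	int my_obj_register(struct my_obj *obj)
+	{
+		int id;
+
+		idr_preload(GFP_KERNEL);
+		spin_lock(&my_lock);
+		/* allocate any unused ID >= 0 and associate it with obj */
+		id = idr_alloc(&my_idr, obj, 0, 0, GFP_NOWAIT);
+		spin_unlock(&my_lock);
+		idr_preload_end();
+
+		if (id < 0)
+			return id;
+		obj->id = id;
+		return 0;
+	}
+
+	void my_obj_unregister(struct my_obj *obj)
+	{
+		spin_lock(&my_lock);
+		idr_remove(&my_idr, obj->id);
+		spin_unlock(&my_lock);
+	}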
 
 .. kernel-doc:: include/linux/idr.h
    :doc: idr sync
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 15ab861..69171b1 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -39,6 +39,8 @@
    rbtree
    generic-radix-tree
    packing
+   bus-virt-phys-mapping
+   this_cpu_ops
    timekeeping
    errseq
 
@@ -82,6 +84,7 @@
    :maxdepth: 1
 
    memory-allocation
+   unaligned-memory-access
    dma-api
    dma-api-howto
    dma-attributes
diff --git a/Documentation/core-api/kobject.rst b/Documentation/core-api/kobject.rst
index e93dc8c..2739f8b 100644
--- a/Documentation/core-api/kobject.rst
+++ b/Documentation/core-api/kobject.rst
@@ -6,7 +6,7 @@
 :Last updated: December 19, 2007
 
 Based on an original article by Jon Corbet for lwn.net written October 1,
-2003 and located at http://lwn.net/Articles/51437/
+2003 and located at https://lwn.net/Articles/51437/
 
 Part of the difficulty in understanding the driver model - and the kobject
 abstraction upon which it is built - is that there is no obvious starting
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 4aa82dd..4446a1a 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -84,6 +84,50 @@
 And even with hardware with restrictions it is preferable to use
 `dma_alloc*` APIs.
 
+GFP flags and reclaim behavior
+------------------------------
+Memory allocations may trigger direct or background reclaim and it is
+useful to understand how hard the page allocator will try to satisfy a
+given request.
+
+  * ``GFP_KERNEL & ~__GFP_RECLAIM`` - optimistic allocation without _any_
+    attempt to free memory at all. The lightest weight mode, which doesn't
+    even kick the background reclaim. Should be used carefully because it
+    might deplete the memory and the next user might hit the more aggressive
+    reclaim.
+
+  * ``GFP_KERNEL & ~__GFP_DIRECT_RECLAIM`` (or ``GFP_NOWAIT``) - optimistic
+    allocation without any attempt to free memory from the current
+    context but can wake kswapd to reclaim memory if the zone is below
+    the low watermark. Can be used from either atomic contexts or when
+    the request is a performance optimization and there is another
+    fallback for a slow path (see the sketch after this list).
+
+  * ``(GFP_KERNEL|__GFP_HIGH) & ~__GFP_DIRECT_RECLAIM`` (aka ``GFP_ATOMIC``) -
+    non-sleeping allocation with an expensive fallback so it can access
+    some portion of memory reserves. Usually used from interrupt/bottom-half
+    context where the slow-path fallback would be expensive.
+
+  * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
+    **default** page allocator behavior is used. That means that non-costly
+    allocation requests are basically no-fail, but there is no guarantee of
+    that behavior, so failures have to be checked properly by callers
+    (e.g. an OOM killer victim is currently allowed to fail).
+
+  * ``GFP_KERNEL | __GFP_NORETRY`` - overrides the default allocator behavior
+    and all allocation requests fail early rather than cause disruptive
+    reclaim (one round of reclaim in this implementation). The OOM killer
+    is not invoked.
+
+  * ``GFP_KERNEL | __GFP_RETRY_MAYFAIL`` - overrides the default allocator
+    behavior and all allocation requests try really hard. The request
+    will fail if the reclaim cannot make any progress. The OOM killer
+    won't be triggered.
+
+  * ``GFP_KERNEL | __GFP_NOFAIL`` - overrides the default allocator behavior
+    and all allocation requests will loop endlessly until they succeed.
+    This might be really dangerous especially for larger orders.
+
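+As an illustration of the optimistic attempt with a slow-path fallback
+described for ``GFP_NOWAIT`` above, a minimal sketch (variable names are
+hypothetical) might look like::
+
+	buf = kmalloc(size, GFP_NOWAIT);	/* no direct reclaim, kswapd may run */
+	if (!buf) {
+		/* Slow path: default behavior, may sleep and reclaim. */
+		buf = kmalloc(size, GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+	}
+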
 Selecting memory allocator
 ==========================
 
diff --git a/Documentation/core-api/padata.rst b/Documentation/core-api/padata.rst
index 0830e5b..3517571 100644
--- a/Documentation/core-api/padata.rst
+++ b/Documentation/core-api/padata.rst
@@ -27,22 +27,11 @@
 
     #include <linux/padata.h>
 
-    struct padata_instance *padata_alloc_possible(const char *name);
+    struct padata_instance *padata_alloc(const char *name);
 
 'name' simply identifies the instance.
 
-There are functions for enabling and disabling the instance::
-
-    int padata_start(struct padata_instance *pinst);
-    void padata_stop(struct padata_instance *pinst);
-
-These functions are setting or clearing the "PADATA_INIT" flag; if that flag is
-not set, other functions will refuse to work.  padata_start() returns zero on
-success (flag set) or -EINVAL if the padata cpumask contains no active CPU
-(flag not set).  padata_stop() clears the flag and blocks until the padata
-instance is unused.
-
-Finally, complete padata initialization by allocating a padata_shell::
+Then, complete padata initialization by allocating a padata_shell::
 
    struct padata_shell *padata_alloc_shell(struct padata_instance *pinst);
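+
+Putting the allocation calls together, a minimal setup sequence might look
+like this sketch (the instance name and error handling are illustrative
+only)::
+
+	struct padata_instance *pinst;
+	struct padata_shell *ps;
+
+	pinst = padata_alloc("my_instance");
+	if (!pinst)
+		return -ENOMEM;
+
+	ps = padata_alloc_shell(pinst);
+	if (!ps) {
+		padata_free(pinst);
+		return -ENOMEM;
+	}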
 
@@ -155,11 +144,10 @@
 Destroying
 ----------
 
-Cleaning up a padata instance predictably involves calling the three free
+Cleaning up a padata instance predictably involves calling the two free
 functions that correspond to the allocation in reverse::
 
     void padata_free_shell(struct padata_shell *ps);
-    void padata_stop(struct padata_instance *pinst);
     void padata_free(struct padata_instance *pinst);
 
 It is the user's responsibility to ensure all outstanding jobs are complete
diff --git a/Documentation/core-api/printk-basics.rst b/Documentation/core-api/printk-basics.rst
index 563a9ce..965e428 100644
--- a/Documentation/core-api/printk-basics.rst
+++ b/Documentation/core-api/printk-basics.rst
@@ -69,7 +69,7 @@
 The result shows the *current*, *default*, *minimum* and *boot-time-default* log
 levels.
 
-To change the current console_loglevel simply write the the desired level to
+To change the current console_loglevel simply write the desired level to
 ``/proc/sys/kernel/printk``. For example, to print all messages to the console::
 
   # echo 8 > /proc/sys/kernel/printk
diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
index 8c9aba2..6d26c5c 100644
--- a/Documentation/core-api/printk-formats.rst
+++ b/Documentation/core-api/printk-formats.rst
@@ -317,7 +317,7 @@
 
 The additional ``c`` specifier can be used with the ``I`` specifier to
 print a compressed IPv6 address as described by
-http://tools.ietf.org/html/rfc5952
+https://tools.ietf.org/html/rfc5952
 
 Passed by reference.
 
@@ -341,7 +341,7 @@
 flowinfo a ``/`` and scope a ``%``, each followed by the actual value.
 
 In case of an IPv6 address the compressed IPv6 address as described by
-http://tools.ietf.org/html/rfc5952 is being used if the additional
+https://tools.ietf.org/html/rfc5952 is being used if the additional
 specifier ``c`` is given. The IPv6 address is surrounded by ``[``, ``]`` in
 case of additional specifiers ``p``, ``f`` or ``s`` as suggested by
 https://tools.ietf.org/html/draft-ietf-6man-text-addr-representation-07
@@ -494,9 +494,11 @@
 	%pt[RT]t		HH:MM:SS
 	%pt[RT][dt][r]
 
-For printing date and time as represented by
+For printing date and time as represented by::
+
 	R  struct rtc_time structure
 	T  time64_t type
+
 in human readable format.
 
 By default year will be incremented by 1900 and month by 1.
diff --git a/Documentation/this_cpu_ops.txt b/Documentation/core-api/this_cpu_ops.rst
similarity index 100%
rename from Documentation/this_cpu_ops.txt
rename to Documentation/core-api/this_cpu_ops.rst
diff --git a/Documentation/process/unaligned-memory-access.rst b/Documentation/core-api/unaligned-memory-access.rst
similarity index 100%
rename from Documentation/process/unaligned-memory-access.rst
rename to Documentation/core-api/unaligned-memory-access.rst
diff --git a/Documentation/crypto/api-intro.rst b/Documentation/crypto/api-intro.rst
new file mode 100644
index 0000000..15201be
--- /dev/null
+++ b/Documentation/crypto/api-intro.rst
@@ -0,0 +1,262 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============================
+Scatterlist Cryptographic API
+=============================
+
+Introduction
+============
+
+The Scatterlist Crypto API takes page vectors (scatterlists) as
+arguments, and works directly on pages.  In some cases (e.g. ECB
+mode ciphers), this will allow for pages to be encrypted in-place
+with no copying.
+
+One of the initial goals of this design was to readily support IPsec,
+so that processing can be applied to paged skb's without the need
+for linearization.
+
+
+Details
+=======
+
+At the lowest level are algorithms, which register dynamically with the
+API.
+
+'Transforms' are user-instantiated objects, which maintain state, handle all
+of the implementation logic (e.g. manipulating page vectors) and provide an
+abstraction to the underlying algorithms.  However, at the user
+level they are very simple.
+
+Conceptually, the API layering looks like this::
+
+  [transform api]  (user interface)
+  [transform ops]  (per-type logic glue e.g. cipher.c, compress.c)
+  [algorithm api]  (for registering algorithms)
+
+The idea is to make the user interface and algorithm registration API
+very simple, while hiding the core logic from both.  Many good ideas
+from existing APIs such as Cryptoapi and Nettle have been adapted for this.
+
+The API currently supports five main types of transforms: AEAD (Authenticated
+Encryption with Associated Data), Block Ciphers, Ciphers, Compressors and
+Hashes.
+
+Please note that Block Ciphers is somewhat of a misnomer.  It is in fact
+meant to support all ciphers including stream ciphers.  The difference
+between Block Ciphers and Ciphers is that the latter operates on exactly
+one block while the former can operate on an arbitrary amount of data,
+subject to block size requirements (i.e., non-stream ciphers can only
+process multiples of blocks).
+
+Here's an example of how to use the API::
+
+	#include <crypto/hash.h>
+	#include <linux/err.h>
+	#include <linux/scatterlist.h>
+
+	struct scatterlist sg[2];
+	char result[128];
+	struct crypto_ahash *tfm;
+	struct ahash_request *req;
+
+	tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm))
+		fail();
+
+	/* ... set up the scatterlists ... */
+
+	req = ahash_request_alloc(tfm, GFP_ATOMIC);
+	if (!req)
+		fail();
+
+	ahash_request_set_callback(req, 0, NULL, NULL);
+	ahash_request_set_crypt(req, sg, result, 2);
+
+	if (crypto_ahash_digest(req))
+		fail();
+
+	ahash_request_free(req);
+	crypto_free_ahash(tfm);
+
+
+Many real examples are available in the regression test module (tcrypt.c).
+
+
+Developer Notes
+===============
+
+Transforms may only be allocated in user context, and cryptographic
+methods may only be called from softirq and user contexts.  For
+transforms with a setkey method, it too should only be called from
+user context.
+
+When using the API for ciphers, performance will be optimal if each
+scatterlist contains data which is a multiple of the cipher's block
+size (typically 8 bytes).  This prevents having to do any copying
+across non-aligned page fragment boundaries.
+
+
+Adding New Algorithms
+=====================
+
+When submitting a new algorithm for inclusion, a mandatory requirement
+is that at least a few test vectors from known sources (preferably
+standards) be included.
+
+Converting existing well known code is preferred, as it is more likely
+to have been reviewed and widely tested.  If submitting code from LGPL
+sources, please consider changing the license to GPL (see section 3 of
+the LGPL).
+
+Algorithms submitted must also be generally patent-free (e.g. IDEA
+will not be included in the mainline until around 2011), and be based
+on a recognized standard and/or have been subjected to appropriate
+peer review.
+
+Also check for any RFCs which may relate to the use of specific algorithms,
+as well as general application notes such as RFC2451 ("The ESP CBC-Mode
+Cipher Algorithms").
+
+It's a good idea to avoid using lots of macros and use inlined functions
+instead, as gcc does a good job with inlining, while excessive use of
+macros can cause compilation problems on some platforms.
+
+Also check the TODO list at the web site listed below to see what people
+might already be working on.
+
+
+Bugs
+====
+
+Send bug reports to:
+    linux-crypto@vger.kernel.org
+
+Cc:
+    Herbert Xu <herbert@gondor.apana.org.au>,
+    David S. Miller <davem@redhat.com>
+
+
+Further Information
+===================
+
+For further patches and various updates, including the current TODO
+list, see:
+http://gondor.apana.org.au/~herbert/crypto/
+
+
+Authors
+=======
+
+- James Morris
+- David S. Miller
+- Herbert Xu
+
+
+Credits
+=======
+
+The following people provided invaluable feedback during the development
+of the API:
+
+  - Alexey Kuznetzov
+  - Rusty Russell
+  - Herbert Valerio Riedel
+  - Jeff Garzik
+  - Michael Richardson
+  - Andrew Morton
+  - Ingo Oeser
+  - Christoph Hellwig
+
+Portions of this API were derived from the following projects:
+
+  Kerneli Cryptoapi (http://www.kerneli.org/)
+   - Alexander Kjeldaas
+   - Herbert Valerio Riedel
+   - Kyle McMartin
+   - Jean-Luc Cooke
+   - David Bryson
+   - Clemens Fruhwirth
+   - Tobias Ringstrom
+   - Harald Welte
+
+and;
+
+  Nettle (https://www.lysator.liu.se/~nisse/nettle/)
+   - Niels Möller
+
+Original developers of the crypto algorithms:
+
+  - Dana L. How (DES)
+  - Andrew Tridgell and Steve French (MD4)
+  - Colin Plumb (MD5)
+  - Steve Reid (SHA1)
+  - Jean-Luc Cooke (SHA256, SHA384, SHA512)
+  - Kazunori Miyazawa / USAGI (HMAC)
+  - Matthew Skala (Twofish)
+  - Dag Arne Osvik (Serpent)
+  - Brian Gladman (AES)
+  - Kartikey Mahendra Bhatt (CAST6)
+  - Jon Oberheide (ARC4)
+  - Jouni Malinen (Michael MIC)
+  - NTT (Nippon Telegraph and Telephone Corporation) (Camellia)
+
+SHA1 algorithm contributors:
+  - Jean-Francois Dive
+
+DES algorithm contributors:
+  - Raimar Falke
+  - Gisle Sælensminde
+  - Niels Möller
+
+Blowfish algorithm contributors:
+  - Herbert Valerio Riedel
+  - Kyle McMartin
+
+Twofish algorithm contributors:
+  - Werner Koch
+  - Marc Mutz
+
+SHA256/384/512 algorithm contributors:
+  - Andrew McDonald
+  - Kyle McMartin
+  - Herbert Valerio Riedel
+
+AES algorithm contributors:
+  - Alexander Kjeldaas
+  - Herbert Valerio Riedel
+  - Kyle McMartin
+  - Adam J. Richter
+  - Fruhwirth Clemens (i586)
+  - Linus Torvalds (i586)
+
+CAST5 algorithm contributors:
+  - Kartikey Mahendra Bhatt (original developers unknown, FSF copyright).
+
+TEA/XTEA algorithm contributors:
+  - Aaron Grothe
+  - Michael Ringe
+
+Khazad algorithm contributors:
+  - Aaron Grothe
+
+Whirlpool algorithm contributors:
+  - Aaron Grothe
+  - Jean-Luc Cooke
+
+Anubis algorithm contributors:
+  - Aaron Grothe
+
+Tiger algorithm contributors:
+  - Aaron Grothe
+
+VIA PadLock contributors:
+  - Michal Ludvig
+
+Camellia algorithm contributors:
+  - NTT (Nippon Telegraph and Telephone Corporation) (Camellia)
+
+Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>
+
+Please send any credits updates or corrections to:
+Herbert Xu <herbert@gondor.apana.org.au>
diff --git a/Documentation/crypto/api-intro.txt b/Documentation/crypto/api-intro.txt
deleted file mode 100644
index 45d943f..0000000
--- a/Documentation/crypto/api-intro.txt
+++ /dev/null
@@ -1,250 +0,0 @@
-
-                    Scatterlist Cryptographic API
-                   
-INTRODUCTION
-
-The Scatterlist Crypto API takes page vectors (scatterlists) as
-arguments, and works directly on pages.  In some cases (e.g. ECB
-mode ciphers), this will allow for pages to be encrypted in-place
-with no copying.
-
-One of the initial goals of this design was to readily support IPsec,
-so that processing can be applied to paged skb's without the need
-for linearization.
-
-
-DETAILS
-
-At the lowest level are algorithms, which register dynamically with the
-API.
-
-'Transforms' are user-instantiated objects, which maintain state, handle all
-of the implementation logic (e.g. manipulating page vectors) and provide an 
-abstraction to the underlying algorithms.  However, at the user 
-level they are very simple.
-
-Conceptually, the API layering looks like this:
-
-  [transform api]  (user interface)
-  [transform ops]  (per-type logic glue e.g. cipher.c, compress.c)
-  [algorithm api]  (for registering algorithms)
-  
-The idea is to make the user interface and algorithm registration API
-very simple, while hiding the core logic from both.  Many good ideas
-from existing APIs such as Cryptoapi and Nettle have been adapted for this.
-
-The API currently supports five main types of transforms: AEAD (Authenticated
-Encryption with Associated Data), Block Ciphers, Ciphers, Compressors and
-Hashes.
-
-Please note that Block Ciphers is somewhat of a misnomer.  It is in fact
-meant to support all ciphers including stream ciphers.  The difference
-between Block Ciphers and Ciphers is that the latter operates on exactly
-one block while the former can operate on an arbitrary amount of data,
-subject to block size requirements (i.e., non-stream ciphers can only
-process multiples of blocks).
-
-Here's an example of how to use the API:
-
-	#include <crypto/hash.h>
-	#include <linux/err.h>
-	#include <linux/scatterlist.h>
-	
-	struct scatterlist sg[2];
-	char result[128];
-	struct crypto_ahash *tfm;
-	struct ahash_request *req;
-	
-	tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
-	if (IS_ERR(tfm))
-		fail();
-		
-	/* ... set up the scatterlists ... */
-
-	req = ahash_request_alloc(tfm, GFP_ATOMIC);
-	if (!req)
-		fail();
-
-	ahash_request_set_callback(req, 0, NULL, NULL);
-	ahash_request_set_crypt(req, sg, result, 2);
-	
-	if (crypto_ahash_digest(req))
-		fail();
-
-	ahash_request_free(req);
-	crypto_free_ahash(tfm);
-
-    
-Many real examples are available in the regression test module (tcrypt.c).
-
-
-DEVELOPER NOTES
-
-Transforms may only be allocated in user context, and cryptographic
-methods may only be called from softirq and user contexts.  For
-transforms with a setkey method it too should only be called from
-user context.
-
-When using the API for ciphers, performance will be optimal if each
-scatterlist contains data which is a multiple of the cipher's block
-size (typically 8 bytes).  This prevents having to do any copying
-across non-aligned page fragment boundaries.
-
-
-ADDING NEW ALGORITHMS
-
-When submitting a new algorithm for inclusion, a mandatory requirement
-is that at least a few test vectors from known sources (preferably
-standards) be included.
-
-Converting existing well known code is preferred, as it is more likely
-to have been reviewed and widely tested.  If submitting code from LGPL
-sources, please consider changing the license to GPL (see section 3 of
-the LGPL).
-
-Algorithms submitted must also be generally patent-free (e.g. IDEA
-will not be included in the mainline until around 2011), and be based
-on a recognized standard and/or have been subjected to appropriate
-peer review.
-
-Also check for any RFCs which may relate to the use of specific algorithms,
-as well as general application notes such as RFC2451 ("The ESP CBC-Mode
-Cipher Algorithms").
-
-It's a good idea to avoid using lots of macros and use inlined functions
-instead, as gcc does a good job with inlining, while excessive use of
-macros can cause compilation problems on some platforms.
-
-Also check the TODO list at the web site listed below to see what people
-might already be working on.
-
-
-BUGS
-
-Send bug reports to:
-linux-crypto@vger.kernel.org
-Cc: Herbert Xu <herbert@gondor.apana.org.au>,
-    David S. Miller <davem@redhat.com>
-
-
-FURTHER INFORMATION
-
-For further patches and various updates, including the current TODO
-list, see:
-http://gondor.apana.org.au/~herbert/crypto/
-
-
-AUTHORS
-
-James Morris
-David S. Miller
-Herbert Xu
-
-
-CREDITS
-
-The following people provided invaluable feedback during the development
-of the API:
-
-  Alexey Kuznetzov
-  Rusty Russell
-  Herbert Valerio Riedel
-  Jeff Garzik
-  Michael Richardson
-  Andrew Morton
-  Ingo Oeser
-  Christoph Hellwig
-
-Portions of this API were derived from the following projects:
-  
-  Kerneli Cryptoapi (http://www.kerneli.org/)
-    Alexander Kjeldaas
-    Herbert Valerio Riedel
-    Kyle McMartin
-    Jean-Luc Cooke
-    David Bryson
-    Clemens Fruhwirth
-    Tobias Ringstrom
-    Harald Welte
-
-and;
-  
-  Nettle (http://www.lysator.liu.se/~nisse/nettle/)
-    Niels Möller
-
-Original developers of the crypto algorithms:
-
-  Dana L. How (DES)
-  Andrew Tridgell and Steve French (MD4)
-  Colin Plumb (MD5)
-  Steve Reid (SHA1)
-  Jean-Luc Cooke (SHA256, SHA384, SHA512)
-  Kazunori Miyazawa / USAGI (HMAC)
-  Matthew Skala (Twofish)
-  Dag Arne Osvik (Serpent)
-  Brian Gladman (AES)
-  Kartikey Mahendra Bhatt (CAST6)
-  Jon Oberheide (ARC4)
-  Jouni Malinen (Michael MIC)
-  NTT(Nippon Telegraph and Telephone Corporation) (Camellia)
-
-SHA1 algorithm contributors:
-  Jean-Francois Dive
-  
-DES algorithm contributors:
-  Raimar Falke
-  Gisle Sælensminde
-  Niels Möller
-
-Blowfish algorithm contributors:
-  Herbert Valerio Riedel
-  Kyle McMartin
-
-Twofish algorithm contributors:
-  Werner Koch
-  Marc Mutz
-
-SHA256/384/512 algorithm contributors:
-  Andrew McDonald
-  Kyle McMartin
-  Herbert Valerio Riedel
-  
-AES algorithm contributors:
-  Alexander Kjeldaas
-  Herbert Valerio Riedel
-  Kyle McMartin
-  Adam J. Richter
-  Fruhwirth Clemens (i586)
-  Linus Torvalds (i586)
-
-CAST5 algorithm contributors:
-  Kartikey Mahendra Bhatt (original developers unknown, FSF copyright).
-
-TEA/XTEA algorithm contributors:
-  Aaron Grothe
-  Michael Ringe
-
-Khazad algorithm contributors:
-  Aaron Grothe
-
-Whirlpool algorithm contributors:
-  Aaron Grothe
-  Jean-Luc Cooke
-
-Anubis algorithm contributors:
-  Aaron Grothe
-
-Tiger algorithm contributors:
-  Aaron Grothe
-
-VIA PadLock contributors:
-  Michal Ludvig
-
-Camellia algorithm contributors:
-  NTT(Nippon Telegraph and Telephone Corporation) (Camellia)
-
-Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>
-
-Please send any credits updates or corrections to:
-Herbert Xu <herbert@gondor.apana.org.au>
-
diff --git a/Documentation/crypto/asymmetric-keys.rst b/Documentation/crypto/asymmetric-keys.rst
new file mode 100644
index 0000000..349f44a
--- /dev/null
+++ b/Documentation/crypto/asymmetric-keys.rst
@@ -0,0 +1,424 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============================================
+Asymmetric / Public-key Cryptography Key Type
+=============================================
+
+.. Contents:
+
+  - Overview.
+  - Key identification.
+  - Accessing asymmetric keys.
+    - Signature verification.
+  - Asymmetric key subtypes.
+  - Instantiation data parsers.
+  - Keyring link restrictions.
+
+
+Overview
+========
+
+The "asymmetric" key type is designed to be a container for the keys used in
+public-key cryptography, without imposing any particular restrictions on the
+form or mechanism of the cryptography or form of the key.
+
+The asymmetric key is given a subtype that defines what sort of data is
+associated with the key and provides operations to describe and destroy it.
+However, no requirement is made that the key data actually be stored in the
+key.
+
+A completely in-kernel key retention and operation subtype can be defined, but
+it would also be possible to provide access to cryptographic hardware (such as
+a TPM) that might be used to both retain the relevant key and perform
+operations using that key.  In such a case, the asymmetric key would then
+merely be an interface to the TPM driver.
+
+Also provided is the concept of a data parser.  Data parsers are responsible
+for extracting information from the blobs of data passed to the instantiation
+function.  The first data parser that recognises the blob gets to set the
+subtype of the key and define the operations that can be done on that key.
+
+A data parser may interpret the data blob as containing the bits representing a
+key, or it may interpret it as a reference to a key held somewhere else in the
+system (for example, a TPM).
+
+
+Key Identification
+==================
+
+If a key is added with an empty name, the instantiation data parsers are given
+the opportunity to pre-parse a key and to determine the description the key
+should be given from the content of the key.
+
+This can then be used to refer to the key, either by complete match or by
+partial match.  The key type may also use other criteria to refer to a key.
+
+The asymmetric key type's match function can then perform a wider range of
+comparisons than just the straightforward comparison of the description with
+the criterion string:
+
+  1) If the criterion string is of the form "id:<hexdigits>" then the match
+     function will examine a key's fingerprint to see if the hex digits given
+     after the "id:" match the tail.  For instance::
+
+	keyctl search @s asymmetric id:5acc2142
+
+     will match a key with fingerprint::
+
+	1A00 2040 7601 7889 DE11  882C 3823 04AD 5ACC 2142
+
+  2) If the criterion string is of the form "<subtype>:<hexdigits>" then the
+     match will match the ID as in (1), but with the added restriction that
+     only keys of the specified subtype (e.g. tpm) will be matched.  For
+     instance::
+
+	keyctl search @s asymmetric tpm:5acc2142
+
+Looking in /proc/keys, the last 8 hex digits of the key fingerprint are
+displayed, along with the subtype::
+
+	1a39e171 I-----     1 perm 3f010000     0     0 asymmetric modsign.0: DSA 5acc2142 []
+
+
+Accessing Asymmetric Keys
+=========================
+
+For general access to asymmetric keys from within the kernel, the following
+inclusion is required::
+
+	#include <crypto/public_key.h>
+
+This gives access to functions for dealing with asymmetric / public keys.
+Three enums are defined there for representing public-key cryptography
+algorithms::
+
+	enum pkey_algo
+
+digest algorithms used by those::
+
+	enum pkey_hash_algo
+
+and key identifier representations::
+
+	enum pkey_id_type
+
+Note that the key type representation types are required because key
+identifiers from different standards aren't necessarily compatible.  For
+instance, PGP generates key identifiers by hashing the key data plus some
+PGP-specific metadata, whereas X.509 has arbitrary certificate identifiers.
+
+The operations defined upon a key are:
+
+  1) Signature verification.
+
+Other operations are possible (such as encryption) with the same key data
+required for verification, but not currently supported, and others
+(eg. decryption and signature generation) require extra key data.
+
+
+Signature Verification
+----------------------
+
+An operation is provided to perform cryptographic signature verification, using
+an asymmetric key to provide, or to provide access to, the public key::
+
+	int verify_signature(const struct key *key,
+			     const struct public_key_signature *sig);
+
+The caller must have already obtained the key from some source and can then use
+it to check the signature.  The caller must have parsed the signature and
+transferred the relevant bits to the structure pointed to by sig::
+
+	struct public_key_signature {
+		u8 *digest;
+		u8 digest_size;
+		enum pkey_hash_algo pkey_hash_algo : 8;
+		u8 nr_mpi;
+		union {
+			MPI mpi[2];
+			...
+		};
+	};
+
+The algorithm used must be noted in sig->pkey_hash_algo, and all the MPIs that
+make up the actual signature must be stored in sig->mpi[] and the count of MPIs
+placed in sig->nr_mpi.
+
+In addition, the data must have been digested by the caller and the resulting
+hash must be pointed to by sig->digest and the size of the hash be placed in
+sig->digest_size.
+
+The function will return 0 upon success or -EKEYREJECTED if the signature
+doesn't match.
+
+The function may also return -ENOTSUPP if an unsupported public-key algorithm
+or public-key/hash algorithm combination is specified or the key doesn't
+support the operation; -EBADMSG or -ERANGE if some of the parameters have weird
+data; or -ENOMEM if an allocation can't be performed.  -EINVAL can be returned
+if the key argument is the wrong type or is incompletely set up.
+
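+A minimal sketch of a caller, assuming the key has already been obtained and
+the data digested (``my_key``, ``digest``, ``digest_size``, ``hash_algo`` and
+``signature_mpi`` are illustrative placeholders)::
+
+	struct public_key_signature sig = {};
+	int ret;
+
+	sig.digest = digest;		/* hash of the signed data */
+	sig.digest_size = digest_size;
+	sig.pkey_hash_algo = hash_algo;	/* one of enum pkey_hash_algo */
+	sig.nr_mpi = 1;
+	sig.mpi[0] = signature_mpi;	/* MPI parsed from the signature */
+
+	ret = verify_signature(my_key, &sig);
+	if (ret == -EKEYREJECTED)
+		pr_warn("signature does not match\n");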
+
+Asymmetric Key Subtypes
+=======================
+
+Asymmetric keys have a subtype that defines the set of operations that can be
+performed on that key and that determines what data is attached as the key
+payload.  The payload format is entirely at the whim of the subtype.
+
+The subtype is selected by the key data parser and the parser must initialise
+the data required for it.  The asymmetric key retains a reference on the
+subtype module.
+
+The subtype definition structure can be found in::
+
+	#include <keys/asymmetric-subtype.h>
+
+and looks like the following::
+
+	struct asymmetric_key_subtype {
+		struct module		*owner;
+		const char		*name;
+
+		void (*describe)(const struct key *key, struct seq_file *m);
+		void (*destroy)(void *payload);
+		int (*query)(const struct kernel_pkey_params *params,
+			     struct kernel_pkey_query *info);
+		int (*eds_op)(struct kernel_pkey_params *params,
+			      const void *in, void *out);
+		int (*verify_signature)(const struct key *key,
+					const struct public_key_signature *sig);
+	};
+
+Asymmetric keys point to this with their payload[asym_subtype] member.
+
+The owner and name fields should be set to the owning module and the name of
+the subtype.  Currently, the name is only used for print statements.
+
+There are a number of operations defined by the subtype:
+
+  1) describe().
+
+     Mandatory.  This allows the subtype to display something in /proc/keys
+     against the key.  For instance the name of the public key algorithm type
+     could be displayed.  The key type will display the tail of the key
+     identity string after this.
+
+  2) destroy().
+
+     Mandatory.  This should free the memory associated with the key.  The
+     asymmetric key will look after freeing the fingerprint and releasing the
+     reference on the subtype module.
+
+  3) query().
+
+     Mandatory.  This is a function for querying the capabilities of a key.
+
+  4) eds_op().
+
+     Optional.  This is the entry point for the encryption, decryption and
+     signature creation operations (which are distinguished by the operation ID
+     in the parameter struct).  The subtype may do anything it likes to
+     implement an operation, including offloading to hardware.
+
+  5) verify_signature().
+
+     Optional.  This is the entry point for signature verification.  The
+     subtype may do anything it likes to implement an operation, including
+     offloading to hardware.
+
+Instantiation Data Parsers
+==========================
+
+The asymmetric key type doesn't generally want to store or to deal with a raw
+blob of data that holds the key data.  It would have to parse it and error
+check it each time it wanted to use it.  Further, the contents of the blob may
+have various checks that can be performed on it (eg. self-signatures, validity
+dates) and may contain useful data about the key (identifiers, capabilities).
+
+Also, the blob may represent a pointer to some hardware containing the key
+rather than the key itself.
+
+Examples of blob formats for which parsers could be implemented include:
+
+ - OpenPGP packet stream [RFC 4880].
+ - X.509 ASN.1 stream.
+ - Pointer to TPM key.
+ - Pointer to UEFI key.
+ - PKCS#8 private key [RFC 5208].
+ - PKCS#5 encrypted private key [RFC 2898].
+
+During key instantiation each parser in the list is tried until one doesn't
+return -EBADMSG.
+
+The parser definition structure can be found in::
+
+	#include <keys/asymmetric-parser.h>
+
+and looks like the following::
+
+	struct asymmetric_key_parser {
+		struct module	*owner;
+		const char	*name;
+
+		int (*parse)(struct key_preparsed_payload *prep);
+	};
+
+The owner and name fields should be set to the owning module and the name of
+the parser.
+
+There is currently only a single operation defined by the parser, and it is
+mandatory:
+
+  1) parse().
+
+     This is called to preparse the key from the key creation and update paths.
+     In particular, it is called during the key creation _before_ a key is
+     allocated, and as such, is permitted to provide the key's description in
+     the case that the caller declines to do so.
+
+     The caller passes a pointer to the following struct with all of the fields
+     cleared, except for data, datalen and quotalen [see
+     Documentation/security/keys/core.rst]::
+
+	struct key_preparsed_payload {
+		char		*description;
+		void		*payload[4];
+		const void	*data;
+		size_t		datalen;
+		size_t		quotalen;
+	};
+
+     The instantiation data is in a blob pointed to by data and is datalen in
+     size.  The parse() function is not permitted to change these two values at
+     all, and shouldn't change any of the other values _unless_ it
+     recognises the blob format and will not return -EBADMSG to indicate
+     that the blob is not one it handles.
+
+     If the parser is happy with the blob, it should propose a description for
+     the key and attach it to ->description, ->payload[asym_subtype] should be
+     set to point to the subtype to be used, ->payload[asym_crypto] should be
+     set to point to the initialised data for that subtype,
+     ->payload[asym_key_ids] should point to one or more hex fingerprints and
+     quotalen should be updated to indicate how much quota this key should
+     account for.
+
+     When clearing up, the data attached to ->payload[asym_key_ids] and
+     ->description will be kfree()'d and the data attached to
+     ->payload[asm_crypto] will be passed to the subtype's ->destroy() method
+     to be disposed of.  A module reference for the subtype pointed to by
+     ->payload[asym_subtype] will be put.
+
+
+     If the data format is not recognised, -EBADMSG should be returned.  If it
+     is recognised, but the key cannot for some reason be set up, some other
+     negative error code should be returned.  On success, 0 should be returned.
+
+     The key's fingerprint string may be partially matched upon.  For a
+     public-key algorithm such as RSA and DSA this will likely be a printable
+     hex version of the key's fingerprint.
+
+Functions are provided to register and unregister parsers::
+
+	int register_asymmetric_key_parser(struct asymmetric_key_parser *parser);
+	void unregister_asymmetric_key_parser(struct asymmetric_key_parser *subtype);
+
+Parsers may not have the same name.  The names are otherwise only used for
+displaying in debugging messages.
+
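+For example, a hypothetical parser module might register itself as follows
+(``my_format_parse`` and the "my-format" name are illustrative only)::
+
+	#include <keys/asymmetric-parser.h>
+	#include <linux/module.h>
+
+	static int my_format_parse(struct key_preparsed_payload *prep)
+	{
+		/* Recognise the blob in prep->data or return -EBADMSG. */
+		return -EBADMSG;
+	}
+
+	static struct asymmetric_key_parser my_format_parser = {
+		.owner	= THIS_MODULE,
+		.name	= "my-format",
+		.parse	= my_format_parse,
+	};
+
+	static int __init my_format_init(void)
+	{
+		return register_asymmetric_key_parser(&my_format_parser);
+	}
+
+	static void __exit my_format_exit(void)
+	{
+		unregister_asymmetric_key_parser(&my_format_parser);
+	}
+
+	module_init(my_format_init);
+	module_exit(my_format_exit);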
+
+Keyring Link Restrictions
+=========================
+
+Keyrings created from userspace using add_key can be configured to check the
+signature of the key being linked.  Keys without a valid signature are not
+allowed to link.
+
+Several restriction methods are available:
+
+  1) Restrict using the kernel builtin trusted keyring
+
+     - Option string used with KEYCTL_RESTRICT_KEYRING:
+       - "builtin_trusted"
+
+     The kernel builtin trusted keyring will be searched for the signing key.
+     If the builtin trusted keyring is not configured, all links will be
+     rejected.  The ca_keys kernel parameter also affects which keys are used
+     for signature verification.
+
+  2) Restrict using the kernel builtin and secondary trusted keyrings
+
+     - Option string used with KEYCTL_RESTRICT_KEYRING:
+       - "builtin_and_secondary_trusted"
+
+     The kernel builtin and secondary trusted keyrings will be searched for the
+     signing key.  If the secondary trusted keyring is not configured, this
+     restriction will behave like the "builtin_trusted" option.  The ca_keys
+     kernel parameter also affects which keys are used for signature
+     verification.
+
+  3) Restrict using a separate key or keyring
+
+     - Option string used with KEYCTL_RESTRICT_KEYRING:
+       - "key_or_keyring:<key or keyring serial number>[:chain]"
+
+     Whenever a key link is requested, the link will only succeed if the key
+     being linked is signed by one of the designated keys.  This key may be
+     specified directly by providing a serial number for one asymmetric key, or
+     a group of keys may be searched for the signing key by providing the
+     serial number for a keyring.
+
+     When the "chain" option is provided at the end of the string, the keys
+     within the destination keyring will also be searched for signing keys.
+     This allows for verification of certificate chains by adding each
+     certificate in order (starting closest to the root) to a keyring.  For
+     instance, one keyring can be populated with links to a set of root
+     certificates, with a separate, restricted keyring set up for each
+     certificate chain to be validated::
+
+	# Create and populate a keyring for root certificates
+	root_id=`keyctl add keyring root-certs "" @s`
+	keyctl padd asymmetric "" $root_id < root1.cert
+	keyctl padd asymmetric "" $root_id < root2.cert
+
+	# Create and restrict a keyring for the certificate chain
+	chain_id=`keyctl add keyring chain "" @s`
+	keyctl restrict_keyring $chain_id asymmetric key_or_keyring:$root_id:chain
+
+	# Attempt to add each certificate in the chain, starting with the
+	# certificate closest to the root.
+	keyctl padd asymmetric "" $chain_id < intermediateA.cert
+	keyctl padd asymmetric "" $chain_id < intermediateB.cert
+	keyctl padd asymmetric "" $chain_id < end-entity.cert
+
+     If the final end-entity certificate is successfully added to the "chain"
+     keyring, we can be certain that it has a valid signing chain going back to
+     one of the root certificates.
+
+     A single keyring can be used to verify a chain of signatures by
+     restricting the keyring after linking the root certificate::
+
+	# Create a keyring for the certificate chain and add the root
+	chain2_id=`keyctl add keyring chain2 "" @s`
+	keyctl padd asymmetric "" $chain2_id < root1.cert
+
+	# Restrict the keyring that already has root1.cert linked.  The cert
+	# will remain linked by the keyring.
+	keyctl restrict_keyring $chain2_id asymmetric key_or_keyring:0:chain
+
+	# Attempt to add each certificate in the chain, starting with the
+	# certificate closest to the root.
+	keyctl padd asymmetric "" $chain2_id < intermediateA.cert
+	keyctl padd asymmetric "" $chain2_id < intermediateB.cert
+	keyctl padd asymmetric "" $chain2_id < end-entity.cert
+
+     If the final end-entity certificate is successfully added to the "chain2"
+     keyring, we can be certain that there is a valid signing chain going back
+     to the root certificate that was added before the keyring was restricted.
+
+
+In all of these cases, if the signing key is found the signature of the key to
+be linked will be verified using the signing key.  The requested key is added
+to the keyring only if the signature is successfully verified.  -ENOKEY is
+returned if the parent certificate could not be found, or -EKEYREJECTED is
+returned if the signature check fails or the key is blacklisted.  Other errors
+may be returned if the signature check could not be performed.
diff --git a/Documentation/crypto/asymmetric-keys.txt b/Documentation/crypto/asymmetric-keys.txt
deleted file mode 100644
index 8763866..0000000
--- a/Documentation/crypto/asymmetric-keys.txt
+++ /dev/null
@@ -1,429 +0,0 @@
-		=============================================
-		ASYMMETRIC / PUBLIC-KEY CRYPTOGRAPHY KEY TYPE
-		=============================================
-
-Contents:
-
-  - Overview.
-  - Key identification.
-  - Accessing asymmetric keys.
-    - Signature verification.
-  - Asymmetric key subtypes.
-  - Instantiation data parsers.
-  - Keyring link restrictions.
-
-
-========
-OVERVIEW
-========
-
-The "asymmetric" key type is designed to be a container for the keys used in
-public-key cryptography, without imposing any particular restrictions on the
-form or mechanism of the cryptography or form of the key.
-
-The asymmetric key is given a subtype that defines what sort of data is
-associated with the key and provides operations to describe and destroy it.
-However, no requirement is made that the key data actually be stored in the
-key.
-
-A completely in-kernel key retention and operation subtype can be defined, but
-it would also be possible to provide access to cryptographic hardware (such as
-a TPM) that might be used to both retain the relevant key and perform
-operations using that key.  In such a case, the asymmetric key would then
-merely be an interface to the TPM driver.
-
-Also provided is the concept of a data parser.  Data parsers are responsible
-for extracting information from the blobs of data passed to the instantiation
-function.  The first data parser that recognises the blob gets to set the
-subtype of the key and define the operations that can be done on that key.
-
-A data parser may interpret the data blob as containing the bits representing a
-key, or it may interpret it as a reference to a key held somewhere else in the
-system (for example, a TPM).
-
-
-==================
-KEY IDENTIFICATION
-==================
-
-If a key is added with an empty name, the instantiation data parsers are given
-the opportunity to pre-parse a key and to determine the description the key
-should be given from the content of the key.
-
-This can then be used to refer to the key, either by complete match or by
-partial match.  The key type may also use other criteria to refer to a key.
-
-The asymmetric key type's match function can then perform a wider range of
-comparisons than just the straightforward comparison of the description with
-the criterion string:
-
- (1) If the criterion string is of the form "id:<hexdigits>" then the match
-     function will examine a key's fingerprint to see if the hex digits given
-     after the "id:" match the tail.  For instance:
-
-	keyctl search @s asymmetric id:5acc2142
-
-     will match a key with fingerprint:
-
-	1A00 2040 7601 7889 DE11  882C 3823 04AD 5ACC 2142
-
- (2) If the criterion string is of the form "<subtype>:<hexdigits>" then the
-     match will match the ID as in (1), but with the added restriction that
-     only keys of the specified subtype (e.g. tpm) will be matched.  For
-     instance:
-
-	keyctl search @s asymmetric tpm:5acc2142
-
-Looking in /proc/keys, the last 8 hex digits of the key fingerprint are
-displayed, along with the subtype:
-
-	1a39e171 I-----     1 perm 3f010000     0     0 asymmetric modsign.0: DSA 5acc2142 []
-
-
-=========================
-ACCESSING ASYMMETRIC KEYS
-=========================
-
-For general access to asymmetric keys from within the kernel, the following
-inclusion is required:
-
-	#include <crypto/public_key.h>
-
-This gives access to functions for dealing with asymmetric / public keys.
-Three enums are defined there for representing public-key cryptography
-algorithms:
-
-	enum pkey_algo
-
-digest algorithms used by those:
-
-	enum pkey_hash_algo
-
-and key identifier representations:
-
-	enum pkey_id_type
-
-Note that the key type representation types are required because key
-identifiers from different standards aren't necessarily compatible.  For
-instance, PGP generates key identifiers by hashing the key data plus some
-PGP-specific metadata, whereas X.509 has arbitrary certificate identifiers.
-
-The operations defined upon a key are:
-
- (1) Signature verification.
-
-Other operations are possible (such as encryption) with the same key data
-required for verification, but not currently supported, and others
-(eg. decryption and signature generation) require extra key data.
-
-
-SIGNATURE VERIFICATION
-----------------------
-
-An operation is provided to perform cryptographic signature verification, using
-an asymmetric key to provide or to provide access to the public key.
-
-	int verify_signature(const struct key *key,
-			     const struct public_key_signature *sig);
-
-The caller must have already obtained the key from some source and can then use
-it to check the signature.  The caller must have parsed the signature and
-transferred the relevant bits to the structure pointed to by sig.
-
-	struct public_key_signature {
-		u8 *digest;
-		u8 digest_size;
-		enum pkey_hash_algo pkey_hash_algo : 8;
-		u8 nr_mpi;
-		union {
-			MPI mpi[2];
-			...
-		};
-	};
-
-The algorithm used must be noted in sig->pkey_hash_algo, and all the MPIs that
-make up the actual signature must be stored in sig->mpi[] and the count of MPIs
-placed in sig->nr_mpi.
-
-In addition, the data must have been digested by the caller and the resulting
-hash must be pointed to by sig->digest and the size of the hash be placed in
-sig->digest_size.
-
-The function will return 0 upon success or -EKEYREJECTED if the signature
-doesn't match.
-
-The function may also return -ENOTSUPP if an unsupported public-key algorithm
-or public-key/hash algorithm combination is specified or the key doesn't
-support the operation; -EBADMSG or -ERANGE if some of the parameters have weird
-data; or -ENOMEM if an allocation can't be performed.  -EINVAL can be returned
-if the key argument is the wrong type or is incompletely set up.
-
-
-=======================
-ASYMMETRIC KEY SUBTYPES
-=======================
-
-Asymmetric keys have a subtype that defines the set of operations that can be
-performed on that key and that determines what data is attached as the key
-payload.  The payload format is entirely at the whim of the subtype.
-
-The subtype is selected by the key data parser and the parser must initialise
-the data required for it.  The asymmetric key retains a reference on the
-subtype module.
-
-The subtype definition structure can be found in:
-
-	#include <keys/asymmetric-subtype.h>
-
-and looks like the following:
-
-	struct asymmetric_key_subtype {
-		struct module		*owner;
-		const char		*name;
-
-		void (*describe)(const struct key *key, struct seq_file *m);
-		void (*destroy)(void *payload);
-		int (*query)(const struct kernel_pkey_params *params,
-			     struct kernel_pkey_query *info);
-		int (*eds_op)(struct kernel_pkey_params *params,
-			      const void *in, void *out);
-		int (*verify_signature)(const struct key *key,
-					const struct public_key_signature *sig);
-	};
-
-Asymmetric keys point to this with their payload[asym_subtype] member.
-
-The owner and name fields should be set to the owning module and the name of
-the subtype.  Currently, the name is only used for print statements.
-
-There are a number of operations defined by the subtype:
-
- (1) describe().
-
-     Mandatory.  This allows the subtype to display something in /proc/keys
-     against the key.  For instance the name of the public key algorithm type
-     could be displayed.  The key type will display the tail of the key
-     identity string after this.
-
- (2) destroy().
-
-     Mandatory.  This should free the memory associated with the key.  The
-     asymmetric key will look after freeing the fingerprint and releasing the
-     reference on the subtype module.
-
- (3) query().
-
-     Mandatory.  This is a function for querying the capabilities of a key.
-
- (4) eds_op().
-
-     Optional.  This is the entry point for the encryption, decryption and
-     signature creation operations (which are distinguished by the operation ID
-     in the parameter struct).  The subtype may do anything it likes to
-     implement an operation, including offloading to hardware.
-
- (5) verify_signature().
-
-     Optional.  This is the entry point for signature verification.  The
-     subtype may do anything it likes to implement an operation, including
-     offloading to hardware.
-
-
-==========================
-INSTANTIATION DATA PARSERS
-==========================
-
-The asymmetric key type doesn't generally want to store or to deal with a raw
-blob of data that holds the key data.  It would have to parse it and error
-check it each time it wanted to use it.  Further, the contents of the blob may
-have various checks that can be performed on it (eg. self-signatures, validity
-dates) and may contain useful data about the key (identifiers, capabilities).
-
-Also, the blob may represent a pointer to some hardware containing the key
-rather than the key itself.
-
-Examples of blob formats for which parsers could be implemented include:
-
- - OpenPGP packet stream [RFC 4880].
- - X.509 ASN.1 stream.
- - Pointer to TPM key.
- - Pointer to UEFI key.
- - PKCS#8 private key [RFC 5208].
- - PKCS#5 encrypted private key [RFC 2898].
-
-During key instantiation each parser in the list is tried until one doesn't
-return -EBADMSG.
-
-The parser definition structure can be found in:
-
-	#include <keys/asymmetric-parser.h>
-
-and looks like the following:
-
-	struct asymmetric_key_parser {
-		struct module	*owner;
-		const char	*name;
-
-		int (*parse)(struct key_preparsed_payload *prep);
-	};
-
-The owner and name fields should be set to the owning module and the name of
-the parser.
-
-There is currently only a single operation defined by the parser, and it is
-mandatory:
-
- (1) parse().
-
-     This is called to preparse the key from the key creation and update paths.
-     In particular, it is called during the key creation _before_ a key is
-     allocated, and as such, is permitted to provide the key's description in
-     the case that the caller declines to do so.
-
-     The caller passes a pointer to the following struct with all of the fields
-     cleared, except for data, datalen and quotalen [see
-     Documentation/security/keys/core.rst].
-
-	struct key_preparsed_payload {
-		char		*description;
-		void		*payload[4];
-		const void	*data;
-		size_t		datalen;
-		size_t		quotalen;
-	};
-
-     The instantiation data is in a blob pointed to by data and is datalen in
-     size.  The parse() function is not permitted to change these two values at
-     all, and shouldn't change any of the other values _unless_ they are
-     recognise the blob format and will not return -EBADMSG to indicate it is
-     not theirs.
-
-     If the parser is happy with the blob, it should propose a description for
-     the key and attach it to ->description, ->payload[asym_subtype] should be
-     set to point to the subtype to be used, ->payload[asym_crypto] should be
-     set to point to the initialised data for that subtype,
-     ->payload[asym_key_ids] should point to one or more hex fingerprints and
-     quotalen should be updated to indicate how much quota this key should
-     account for.
-
-     When clearing up, the data attached to ->payload[asym_key_ids] and
-     ->description will be kfree()'d and the data attached to
-     ->payload[asm_crypto] will be passed to the subtype's ->destroy() method
-     to be disposed of.  A module reference for the subtype pointed to by
-     ->payload[asym_subtype] will be put.
-
-
-     If the data format is not recognised, -EBADMSG should be returned.  If it
-     is recognised, but the key cannot for some reason be set up, some other
-     negative error code should be returned.  On success, 0 should be returned.
-
-     The key's fingerprint string may be partially matched upon.  For a
-     public-key algorithm such as RSA and DSA this will likely be a printable
-     hex version of the key's fingerprint.
-
-Functions are provided to register and unregister parsers:
-
-	int register_asymmetric_key_parser(struct asymmetric_key_parser *parser);
-	void unregister_asymmetric_key_parser(struct asymmetric_key_parser *subtype);
-
-Parsers may not have the same name.  The names are otherwise only used for
-displaying in debugging messages.
-
-
-=========================
-KEYRING LINK RESTRICTIONS
-=========================
-
-Keyrings created from userspace using add_key can be configured to check the
-signature of the key being linked.  Keys without a valid signature are not
-allowed to link.
-
-Several restriction methods are available:
-
- (1) Restrict using the kernel builtin trusted keyring
-
-     - Option string used with KEYCTL_RESTRICT_KEYRING:
-       - "builtin_trusted"
-
-     The kernel builtin trusted keyring will be searched for the signing key.
-     If the builtin trusted keyring is not configured, all links will be
-     rejected.  The ca_keys kernel parameter also affects which keys are used
-     for signature verification.
-
- (2) Restrict using the kernel builtin and secondary trusted keyrings
-
-     - Option string used with KEYCTL_RESTRICT_KEYRING:
-       - "builtin_and_secondary_trusted"
-
-     The kernel builtin and secondary trusted keyrings will be searched for the
-     signing key.  If the secondary trusted keyring is not configured, this
-     restriction will behave like the "builtin_trusted" option.  The ca_keys
-     kernel parameter also affects which keys are used for signature
-     verification.
-
- (3) Restrict using a separate key or keyring
-
-     - Option string used with KEYCTL_RESTRICT_KEYRING:
-       - "key_or_keyring:<key or keyring serial number>[:chain]"
-
-     Whenever a key link is requested, the link will only succeed if the key
-     being linked is signed by one of the designated keys.  This key may be
-     specified directly by providing a serial number for one asymmetric key, or
-     a group of keys may be searched for the signing key by providing the
-     serial number for a keyring.
-
-     When the "chain" option is provided at the end of the string, the keys
-     within the destination keyring will also be searched for signing keys.
-     This allows for verification of certificate chains by adding each
-     certificate in order (starting closest to the root) to a keyring.  For
-     instance, one keyring can be populated with links to a set of root
-     certificates, with a separate, restricted keyring set up for each
-     certificate chain to be validated:
-
-	# Create and populate a keyring for root certificates
-	root_id=`keyctl add keyring root-certs "" @s`
-	keyctl padd asymmetric "" $root_id < root1.cert
-	keyctl padd asymmetric "" $root_id < root2.cert
-
-	# Create and restrict a keyring for the certificate chain
-	chain_id=`keyctl add keyring chain "" @s`
-	keyctl restrict_keyring $chain_id asymmetric key_or_keyring:$root_id:chain
-
-	# Attempt to add each certificate in the chain, starting with the
-	# certificate closest to the root.
-	keyctl padd asymmetric "" $chain_id < intermediateA.cert
-	keyctl padd asymmetric "" $chain_id < intermediateB.cert
-	keyctl padd asymmetric "" $chain_id < end-entity.cert
-
-     If the final end-entity certificate is successfully added to the "chain"
-     keyring, we can be certain that it has a valid signing chain going back to
-     one of the root certificates.
-
-     A single keyring can be used to verify a chain of signatures by
-     restricting the keyring after linking the root certificate:
-
-	# Create a keyring for the certificate chain and add the root
-	chain2_id=`keyctl add keyring chain2 "" @s`
-	keyctl padd asymmetric "" $chain2_id < root1.cert
-
-	# Restrict the keyring that already has root1.cert linked.  The cert
-	# will remain linked by the keyring.
-	keyctl restrict_keyring $chain2_id asymmetric key_or_keyring:0:chain
-
-	# Attempt to add each certificate in the chain, starting with the
-	# certificate closest to the root.
-	keyctl padd asymmetric "" $chain2_id < intermediateA.cert
-	keyctl padd asymmetric "" $chain2_id < intermediateB.cert
-	keyctl padd asymmetric "" $chain2_id < end-entity.cert
-
-     If the final end-entity certificate is successfully added to the "chain2"
-     keyring, we can be certain that there is a valid signing chain going back
-     to the root certificate that was added before the keyring was restricted.
-
-
-In all of these cases, if the signing key is found the signature of the key to
-be linked will be verified using the signing key.  The requested key is added
-to the keyring only if the signature is successfully verified.  -ENOKEY is
-returned if the parent certificate could not be found, or -EKEYREJECTED is
-returned if the signature check fails or the key is blacklisted.  Other errors
-may be returned if the signature check could not be performed.
diff --git a/Documentation/crypto/async-tx-api.rst b/Documentation/crypto/async-tx-api.rst
new file mode 100644
index 0000000..bfc7739
--- /dev/null
+++ b/Documentation/crypto/async-tx-api.rst
@@ -0,0 +1,270 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================================
+Asynchronous Transfers/Transforms API
+=====================================
+
+.. Contents
+
+  1. INTRODUCTION
+
+  2 GENEALOGY
+
+  3 USAGE
+  3.1 General format of the API
+  3.2 Supported operations
+  3.3 Descriptor management
+  3.4 When does the operation execute?
+  3.5 When does the operation complete?
+  3.6 Constraints
+  3.7 Example
+
+  4 DMAENGINE DRIVER DEVELOPER NOTES
+  4.1 Conformance points
+  4.2 "My application needs exclusive control of hardware channels"
+
+  5 SOURCE
+
+1. Introduction
+===============
+
+The async_tx API provides methods for describing a chain of asynchronous
+bulk memory transfers/transforms with support for inter-transactional
+dependencies.  It is implemented as a dmaengine client that smooths over
+the details of different hardware offload engine implementations.  Code
+that is written to the API can optimize for asynchronous operation and
+the API will fit the chain of operations to the available offload
+resources.
+
+2. Genealogy
+===========
+
+The API was initially designed to offload the memory copy and
+xor-parity-calculations of the md-raid5 driver using the offload engines
+present in the Intel(R) Xscale series of I/O processors.  It also built
+on the 'dmaengine' layer developed for offloading memory copies in the
+network stack using Intel(R) I/OAT engines.  The following design
+features surfaced as a result:
+
+1. implicit synchronous path: users of the API do not need to know if
+   the platform they are running on has offload capabilities.  The
+   operation will be offloaded when an engine is available and carried out
+   in software otherwise.
+2. cross channel dependency chains: the API allows a chain of dependent
+   operations to be submitted, like xor->copy->xor in the raid5 case.  The
+   API automatically handles cases where the transition from one operation
+   to another implies a hardware channel switch.
+3. dmaengine extensions to support multiple clients and operation types
+   beyond 'memcpy'
+
+3. Usage
+========
+
+3.1 General format of the API
+-----------------------------
+
+::
+
+  struct dma_async_tx_descriptor *
+  async_<operation>(<op specific parameters>, struct async_submit_ctl *submit)
+
+3.2 Supported operations
+------------------------
+
+========  ====================================================================
+memcpy    memory copy between a source and a destination buffer
+memset    fill a destination buffer with a byte value
+xor       xor a series of source buffers and write the result to a
+	  destination buffer
+xor_val   xor a series of source buffers and set a flag if the
+	  result is zero.  The implementation attempts to prevent
+	  writes to memory
+pq	  generate the p+q (raid6 syndrome) from a series of source buffers
+pq_val    validate that a p and/or q buffer is in sync with a given series of
+	  sources
+datap	  (raid6_datap_recov) recover a raid6 data block and the p block
+	  from the given sources
+2data	  (raid6_2data_recov) recover 2 raid6 data blocks from the given
+	  sources
+========  ====================================================================
+
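+For example, using the general format from section 3.1, the xor entry in the
+table above maps to a call of the following shape (the buffer and count names
+here are hypothetical)::
+
+  struct dma_async_tx_descriptor *tx;
+
+  tx = async_xor(dest_page, src_pages, 0, src_cnt, len, &submit);
+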
+3.3 Descriptor management
+-------------------------
+
+The return value is non-NULL and points to a 'descriptor' when the operation
+has been queued to execute asynchronously.  Descriptors are recycled
+resources, under control of the offload engine driver, to be reused as
+operations complete.  When an application needs to submit a chain of
+operations it must guarantee that the descriptor is not automatically recycled
+before the dependency is submitted.  This requires that all descriptors be
+acknowledged by the application before the offload engine driver is allowed to
+recycle (or free) the descriptor.  A descriptor can be acked by one of the
+following methods:
+
+1. setting the ASYNC_TX_ACK flag if no child operations are to be submitted
+2. submitting an unacknowledged descriptor as a dependency to another
+   async_tx call will implicitly set the acknowledged state.
+3. calling async_tx_ack() on the descriptor.
+
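+As a minimal illustration of method 3, assuming ``submit`` has already been
+initialized with init_async_submit() and using hypothetical page and length
+names, a caller that will not chain further operations can explicitly
+acknowledge the descriptor it gets back::
+
+  struct dma_async_tx_descriptor *tx;
+
+  /* single copy; no dependent operations will be submitted */
+  tx = async_memcpy(dest_page, src_page, 0, 0, len, &submit);
+  if (tx)
+          async_tx_ack(tx);   /* let the driver recycle the descriptor */
+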
+3.4 When does the operation execute?
+------------------------------------
+
+Operations do not immediately issue after return from the
+async_<operation> call.  Offload engine drivers batch operations to
+improve performance by reducing the number of mmio cycles needed to
+manage the channel.  Once a driver-specific threshold is met the driver
+automatically issues pending operations.  An application can force this
+event by calling async_tx_issue_pending_all().  This operates on all
+channels since the application has no knowledge of channel to operation
+mapping.
+
+3.5 When does the operation complete?
+-------------------------------------
+
+There are two methods for an application to learn about the completion
+of an operation.
+
+1. Call dma_wait_for_async_tx().  This call causes the CPU to spin while
+   it polls for the completion of the operation.  It handles dependency
+   chains and issuing pending operations.
+2. Specify a completion callback.  The callback routine runs in tasklet
+   context if the offload engine driver supports interrupts, or it is
+   called in application context if the operation is carried out
+   synchronously in software.  The callback can be set in the call to
+   async_<operation>, or when the application needs to submit a chain of
+   unknown length it can use the async_trigger_callback() routine to set a
+   completion interrupt/callback at the end of the chain.
+
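+As a brief sketch of method 1 (the polling approach), assuming ``tx`` was
+returned by an earlier async_<operation> call::
+
+  /* spin until the transaction and all of its dependencies complete */
+  if (dma_wait_for_async_tx(tx) != DMA_COMPLETE)
+          pr_err("async transfer failed\n");
+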
+3.6 Constraints
+---------------
+
+1. Calls to async_<operation> are not permitted in IRQ context.  Other
+   contexts are permitted provided constraint #2 is not violated.
+2. Completion callback routines cannot submit new operations.  This
+   results in recursion in the synchronous case and spin_locks being
+   acquired twice in the asynchronous case.
+
+3.7 Example
+-----------
+
+Perform a xor->copy->xor operation where each operation depends on the
+result from the previous operation::
+
+    void callback(void *param)
+    {
+	    struct completion *cmp = param;
+
+	    complete(cmp);
+    }
+
+    void run_xor_copy_xor(struct page **xor_srcs,
+			int xor_src_cnt,
+			struct page *xor_dest,
+			size_t xor_len,
+			struct page *copy_src,
+			struct page *copy_dest,
+			size_t copy_len)
+    {
+	    struct dma_async_tx_descriptor *tx;
+	    addr_conv_t addr_conv[xor_src_cnt];
+	    struct async_submit_ctl submit;
+	    struct completion cmp;
+
+	    init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
+			    addr_conv);
+	    tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);
+
+	    submit.depend_tx = tx;
+	    tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);
+
+	    init_completion(&cmp);
+	    init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
+			    callback, &cmp, addr_conv);
+	    tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);
+
+	    async_tx_issue_pending_all();
+
+	    wait_for_completion(&cmp);
+    }
+
+See include/linux/async_tx.h for more information on the flags.  See the
+ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
+implementation examples.
+
+4. Driver Development Notes
+===========================
+
+4.1 Conformance points
+----------------------
+
+There are a few conformance points required in dmaengine drivers to
+accommodate assumptions made by applications using the async_tx API:
+
+1. Completion callbacks are expected to happen in tasklet context
+2. dma_async_tx_descriptor fields are never manipulated in IRQ context
+3. Use async_tx_run_dependencies() in the descriptor clean up path to
+   handle submission of dependent operations
+
+4.2 "My application needs exclusive control of hardware channels"
+-----------------------------------------------------------------
+
+Primarily this requirement arises from cases where a DMA engine driver
+is being used to support device-to-memory operations.  A channel that is
+performing these operations cannot, for many platform specific reasons,
+be shared.  For these cases the dma_request_channel() interface is
+provided.
+
+The interface is::
+
+  struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
+				       dma_filter_fn filter_fn,
+				       void *filter_param);
+
+Where dma_filter_fn is defined as::
+
+  typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
+
+When the optional 'filter_fn' parameter is set to NULL
+dma_request_channel simply returns the first channel that satisfies the
+capability mask.  Otherwise, when the mask parameter is insufficient for
+specifying the necessary channel, the filter_fn routine can be used to
+select among the available channels in the system.  The filter_fn routine
+is called once for each free channel in the system.  Upon seeing a
+suitable channel, filter_fn returns true, which flags that channel to
+be the return value from dma_request_channel().  A channel allocated via
+this interface is exclusive to the caller, until dma_release_channel()
+is called.
+
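+As a rough sketch of how a consumer might put these pieces together (the
+filter routine, its device parameter and the helper name are hypothetical)::
+
+  static bool my_filter(struct dma_chan *chan, void *filter_param)
+  {
+          /* only accept channels belonging to a known device */
+          return chan->device->dev == filter_param;
+  }
+
+  static struct dma_chan *request_private_memcpy_chan(struct device *dev)
+  {
+          dma_cap_mask_t mask;
+
+          dma_cap_zero(mask);
+          dma_cap_set(DMA_MEMCPY, mask);
+
+          /* NULL is returned if no free channel satisfies mask and filter */
+          return dma_request_channel(mask, my_filter, dev);
+  }
+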
+The DMA_PRIVATE capability flag is used to tag dma devices that should
+not be used by the general-purpose allocator.  It can be set at
+initialization time if it is known that a channel will always be
+private.  Alternatively, it is set when dma_request_channel() finds an
+unused "public" channel.
+
+A couple of caveats to note when implementing a driver and consumer:
+
+1. Once a channel has been privately allocated it will no longer be
+   considered by the general-purpose allocator even after a call to
+   dma_release_channel().
+2. Since capabilities are specified at the device level a dma_device
+   with multiple channels will either have all channels public, or all
+   channels private.
+
+5. Source
+=========
+
+include/linux/dmaengine.h:
+    core header file for DMA drivers and api users
+drivers/dma/dmaengine.c:
+    offload engine channel management routines
+drivers/dma/:
+    location for offload engine drivers
+include/linux/async_tx.h:
+    core header file for the async_tx api
+crypto/async_tx/async_tx.c:
+    async_tx interface to dmaengine and common code
+crypto/async_tx/async_memcpy.c:
+    copy offload
+crypto/async_tx/async_xor.c:
+    xor and xor zero sum offload
diff --git a/Documentation/crypto/async-tx-api.txt b/Documentation/crypto/async-tx-api.txt
deleted file mode 100644
index 7bf1be2..0000000
--- a/Documentation/crypto/async-tx-api.txt
+++ /dev/null
@@ -1,225 +0,0 @@
-		 Asynchronous Transfers/Transforms API
-
-1 INTRODUCTION
-
-2 GENEALOGY
-
-3 USAGE
-3.1 General format of the API
-3.2 Supported operations
-3.3 Descriptor management
-3.4 When does the operation execute?
-3.5 When does the operation complete?
-3.6 Constraints
-3.7 Example
-
-4 DMAENGINE DRIVER DEVELOPER NOTES
-4.1 Conformance points
-4.2 "My application needs exclusive control of hardware channels"
-
-5 SOURCE
-
----
-
-1 INTRODUCTION
-
-The async_tx API provides methods for describing a chain of asynchronous
-bulk memory transfers/transforms with support for inter-transactional
-dependencies.  It is implemented as a dmaengine client that smooths over
-the details of different hardware offload engine implementations.  Code
-that is written to the API can optimize for asynchronous operation and
-the API will fit the chain of operations to the available offload
-resources.
-
-2 GENEALOGY
-
-The API was initially designed to offload the memory copy and
-xor-parity-calculations of the md-raid5 driver using the offload engines
-present in the Intel(R) Xscale series of I/O processors.  It also built
-on the 'dmaengine' layer developed for offloading memory copies in the
-network stack using Intel(R) I/OAT engines.  The following design
-features surfaced as a result:
-1/ implicit synchronous path: users of the API do not need to know if
-   the platform they are running on has offload capabilities.  The
-   operation will be offloaded when an engine is available and carried out
-   in software otherwise.
-2/ cross channel dependency chains: the API allows a chain of dependent
-   operations to be submitted, like xor->copy->xor in the raid5 case.  The
-   API automatically handles cases where the transition from one operation
-   to another implies a hardware channel switch.
-3/ dmaengine extensions to support multiple clients and operation types
-   beyond 'memcpy'
-
-3 USAGE
-
-3.1 General format of the API:
-struct dma_async_tx_descriptor *
-async_<operation>(<op specific parameters>, struct async_submit ctl *submit)
-
-3.2 Supported operations:
-memcpy  - memory copy between a source and a destination buffer
-memset  - fill a destination buffer with a byte value
-xor     - xor a series of source buffers and write the result to a
-	  destination buffer
-xor_val - xor a series of source buffers and set a flag if the
-	  result is zero.  The implementation attempts to prevent
-	  writes to memory
-pq	- generate the p+q (raid6 syndrome) from a series of source buffers
-pq_val  - validate that a p and or q buffer are in sync with a given series of
-	  sources
-datap	- (raid6_datap_recov) recover a raid6 data block and the p block
-	  from the given sources
-2data	- (raid6_2data_recov) recover 2 raid6 data blocks from the given
-	  sources
-
-3.3 Descriptor management:
-The return value is non-NULL and points to a 'descriptor' when the operation
-has been queued to execute asynchronously.  Descriptors are recycled
-resources, under control of the offload engine driver, to be reused as
-operations complete.  When an application needs to submit a chain of
-operations it must guarantee that the descriptor is not automatically recycled
-before the dependency is submitted.  This requires that all descriptors be
-acknowledged by the application before the offload engine driver is allowed to
-recycle (or free) the descriptor.  A descriptor can be acked by one of the
-following methods:
-1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
-2/ submitting an unacknowledged descriptor as a dependency to another
-   async_tx call will implicitly set the acknowledged state.
-3/ calling async_tx_ack() on the descriptor.
-
-3.4 When does the operation execute?
-Operations do not immediately issue after return from the
-async_<operation> call.  Offload engine drivers batch operations to
-improve performance by reducing the number of mmio cycles needed to
-manage the channel.  Once a driver-specific threshold is met the driver
-automatically issues pending operations.  An application can force this
-event by calling async_tx_issue_pending_all().  This operates on all
-channels since the application has no knowledge of channel to operation
-mapping.
-
-3.5 When does the operation complete?
-There are two methods for an application to learn about the completion
-of an operation.
-1/ Call dma_wait_for_async_tx().  This call causes the CPU to spin while
-   it polls for the completion of the operation.  It handles dependency
-   chains and issuing pending operations.
-2/ Specify a completion callback.  The callback routine runs in tasklet
-   context if the offload engine driver supports interrupts, or it is
-   called in application context if the operation is carried out
-   synchronously in software.  The callback can be set in the call to
-   async_<operation>, or when the application needs to submit a chain of
-   unknown length it can use the async_trigger_callback() routine to set a
-   completion interrupt/callback at the end of the chain.
-
-3.6 Constraints:
-1/ Calls to async_<operation> are not permitted in IRQ context.  Other
-   contexts are permitted provided constraint #2 is not violated.
-2/ Completion callback routines cannot submit new operations.  This
-   results in recursion in the synchronous case and spin_locks being
-   acquired twice in the asynchronous case.
-
-3.7 Example:
-Perform a xor->copy->xor operation where each operation depends on the
-result from the previous operation:
-
-void callback(void *param)
-{
-	struct completion *cmp = param;
-
-	complete(cmp);
-}
-
-void run_xor_copy_xor(struct page **xor_srcs,
-		      int xor_src_cnt,
-		      struct page *xor_dest,
-		      size_t xor_len,
-		      struct page *copy_src,
-		      struct page *copy_dest,
-		      size_t copy_len)
-{
-	struct dma_async_tx_descriptor *tx;
-	addr_conv_t addr_conv[xor_src_cnt];
-	struct async_submit_ctl submit;
-	addr_conv_t addr_conv[NDISKS];
-	struct completion cmp;
-
-	init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
-			  addr_conv);
-	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit)
-
-	submit->depend_tx = tx;
-	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);
-
-	init_completion(&cmp);
-	init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
-			  callback, &cmp, addr_conv);
-	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);
-
-	async_tx_issue_pending_all();
-
-	wait_for_completion(&cmp);
-}
-
-See include/linux/async_tx.h for more information on the flags.  See the
-ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
-implementation examples.
-
-4 DRIVER DEVELOPMENT NOTES
-
-4.1 Conformance points:
-There are a few conformance points required in dmaengine drivers to
-accommodate assumptions made by applications using the async_tx API:
-1/ Completion callbacks are expected to happen in tasklet context
-2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
-3/ Use async_tx_run_dependencies() in the descriptor clean up path to
-   handle submission of dependent operations
-
-4.2 "My application needs exclusive control of hardware channels"
-Primarily this requirement arises from cases where a DMA engine driver
-is being used to support device-to-memory operations.  A channel that is
-performing these operations cannot, for many platform specific reasons,
-be shared.  For these cases the dma_request_channel() interface is
-provided.
-
-The interface is:
-struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
-				     dma_filter_fn filter_fn,
-				     void *filter_param);
-
-Where dma_filter_fn is defined as:
-typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);
-
-When the optional 'filter_fn' parameter is set to NULL
-dma_request_channel simply returns the first channel that satisfies the
-capability mask.  Otherwise, when the mask parameter is insufficient for
-specifying the necessary channel, the filter_fn routine can be used to
-disposition the available channels in the system. The filter_fn routine
-is called once for each free channel in the system.  Upon seeing a
-suitable channel filter_fn returns DMA_ACK which flags that channel to
-be the return value from dma_request_channel.  A channel allocated via
-this interface is exclusive to the caller, until dma_release_channel()
-is called.
-
-The DMA_PRIVATE capability flag is used to tag dma devices that should
-not be used by the general-purpose allocator.  It can be set at
-initialization time if it is known that a channel will always be
-private.  Alternatively, it is set when dma_request_channel() finds an
-unused "public" channel.
-
-A couple caveats to note when implementing a driver and consumer:
-1/ Once a channel has been privately allocated it will no longer be
-   considered by the general-purpose allocator even after a call to
-   dma_release_channel().
-2/ Since capabilities are specified at the device level a dma_device
-   with multiple channels will either have all channels public, or all
-   channels private.
-
-5 SOURCE
-
-include/linux/dmaengine.h: core header file for DMA drivers and api users
-drivers/dma/dmaengine.c: offload engine channel management routines
-drivers/dma/: location for offload engine drivers
-include/linux/async_tx.h: core header file for the async_tx api
-crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
-crypto/async_tx/async_memcpy.c: copy offload
-crypto/async_tx/async_xor.c: xor and xor zero sum offload
diff --git a/Documentation/crypto/descore-readme.rst b/Documentation/crypto/descore-readme.rst
new file mode 100644
index 0000000..45bd9c8
--- /dev/null
+++ b/Documentation/crypto/descore-readme.rst
@@ -0,0 +1,414 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. include:: <isonum.txt>
+
+===========================================
+Fast & Portable DES encryption & decryption
+===========================================
+
+.. note::
+
+   Below is the original README file from the descore.shar package,
+   converted to ReST format.
+
+------------------------------------------------------------------------------
+
+des - fast & portable DES encryption & decryption.
+
+Copyright |copy| 1992  Dana L. How
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU Library General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU Library General Public License for more details.
+
+You should have received a copy of the GNU Library General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+Author's address: how@isl.stanford.edu
+
+.. README,v 1.15 1992/05/20 00:25:32 how E
+
+==>> To compile after untarring/unsharring, just ``make`` <<==
+
+This package was designed with the following goals:
+
+1.	Highest possible encryption/decryption PERFORMANCE.
+2.	PORTABILITY to any byte-addressable host with a 32bit unsigned C type
+3.	Plug-compatible replacement for KERBEROS's low-level routines.
+
+This second release includes a number of performance enhancements for
+register-starved machines.  My discussions with Richard Outerbridge,
+71755.204@compuserve.com, sparked a number of these enhancements.
+
+To more rapidly understand the code in this package, inspect desSmallFips.i
+(created by typing ``make``) BEFORE you tackle desCode.h.  The latter is set
+up in a parameterized fashion so it can easily be modified by speed-daemon
+hackers in pursuit of that last microsecond.  You will find it more
+illuminating to inspect one specific implementation,
+and then move on to the common abstract skeleton with this one in mind.
+
+
+performance comparison to other available des code which i could
+compile on a SPARCStation 1 (cc -O4, gcc -O2):
+
+this code (byte-order independent):
+
+  - 30us per encryption (options: 64k tables, no IP/FP)
+  - 33us per encryption (options: 64k tables, FIPS standard bit ordering)
+  - 45us per encryption (options:  2k tables, no IP/FP)
+  - 48us per encryption (options:  2k tables, FIPS standard bit ordering)
+  - 275us to set a new key (uses 1k of key tables)
+
+	this has the quickest encryption/decryption routines i've seen.
+	since i was interested in fast des filters rather than crypt(3)
+	and password cracking, i haven't really bothered yet to speed up
+	the key setting routine. also, i have no interest in re-implementing
+	all the other junk in the mit kerberos des library, so i've just
+	provided my routines with little stub interfaces so they can be
+	used as drop-in replacements with mit's code or any of the mit-
+	compatible packages below. (note that the first two timings above
+	are highly variable because of cache effects).
+
+kerberos des replacement from australia (version 1.95):
+
+  - 53us per encryption (uses 2k of tables)
+  - 96us to set a new key (uses 2.25k of key tables)
+
+	so despite the author's inclusion of some of the performance
+	improvements i had suggested to him, this package's
+	encryption/decryption is still slower on the sparc and 68000.
+	more specifically, 19-40% slower on the 68020 and 11-35% slower
+	on the sparc,  depending on the compiler;
+	in full gory detail (ALT_ECB is a libdes variant):
+
+	=========== ========= ======= ======= ======= =========
+	compiler    machine   desCore libdes  ALT_ECB slower by
+	=========== ========= ======= ======= ======= =========
+	gcc 2.1 -O2 Sun 3/110 304  uS 369.5uS 461.8uS  22%
+	cc      -O1 Sun 3/110 336  uS 436.6uS 399.3uS  19%
+	cc      -O2 Sun 3/110 360  uS 532.4uS 505.1uS  40%
+	cc      -O4 Sun 3/110 365  uS 532.3uS 505.3uS  38%
+	gcc 2.1 -O2 Sun 4/50   48  uS  53.4uS  57.5uS  11%
+	cc      -O2 Sun 4/50   48  uS  64.6uS  64.7uS  35%
+	cc      -O4 Sun 4/50   48  uS  64.7uS  64.9uS  35%
+	=========== ========= ======= ======= ======= =========
+
+	(my time measurements are not as accurate as his).
+
+   the comments in my first release of desCore on version 1.92:
+
+   - 68us per encryption (uses 2k of tables)
+   - 96us to set a new key (uses 2.25k of key tables)
+
+	this is a very nice package which implements the most important
+	of the optimizations which i did in my encryption routines.
+	it's a bit weak on common low-level optimizations which is why
+	it's 39%-106% slower.  because he was interested in fast crypt(3) and
+	password-cracking applications,  he also used the same ideas to
+	speed up the key-setting routines with impressive results.
+	(at some point i may do the same in my package).  he also implements
+	the rest of the mit des library.
+
+	(code from eay@psych.psy.uq.oz.au via comp.sources.misc)
+
+fast crypt(3) package from denmark:
+
+	the des routine here is buried inside a loop to do the
+	crypt function and i didn't feel like ripping it out and measuring
+	performance. his code takes 26 sparc instructions to compute one
+	des iteration; above, Quick (64k) takes 21 and Small (2k) takes 37.
+	he claims to use 280k of tables but the iteration calculation seems
+	to use only 128k.  his tables and code are machine independent.
+
+	(code from glad@daimi.aau.dk via alt.sources or comp.sources.misc)
+
+swedish reimplementation of Kerberos des library
+
+  - 108us per encryption (uses 34k worth of tables)
+  - 134us to set a new key (uses 32k of key tables to get this speed!)
+
+	the tables used seem to be machine-independent;
+	he seems to have included a lot of special case code
+	so that, e.g., ``long`` loads can be used instead of 4 ``char`` loads
+	when the machine's architecture allows it.
+
+	(code obtained from chalmers.se:pub/des)
+
+crack 3.3c package from england:
+
+	as in crypt above, the des routine is buried in a loop. it's
+	also very modified for crypt.  his iteration code uses 16k
+	of tables and appears to be slow.
+
+	(code obtained from aem@aber.ac.uk via alt.sources or comp.sources.misc)
+
+``highly optimized`` and tweaked Kerberos/Athena code (byte-order dependent):
+
+  - 165us per encryption (uses 6k worth of tables)
+  - 478us to set a new key (uses <1k of key tables)
+
+	so despite the comments in this code, it was possible to get
+	faster code AND smaller tables, as well as making the tables
+	machine-independent.
+	(code obtained from prep.ai.mit.edu)
+
+UC Berkeley code (depends on machine-endedness):
+
+  -  226us per encryption
+  - 10848us to set a new key
+
+	table sizes are unclear, but they don't look very small
+	(code obtained from wuarchive.wustl.edu)
+
+
+motivation and history
+======================
+
+a while ago i wanted some des routines and the routines documented on sun's
+man pages either didn't exist or dumped core.  i had heard of kerberos,
+and knew that it used des,  so i figured i'd use its routines.  but once
+i got it and looked at the code,  it really set off a lot of pet peeves -
+it was too convoluted, the code had been written without taking
+advantage of the regular structure of operations such as IP, E, and FP
+(i.e. the author didn't sit down and think before coding),
+it was excessively slow,  the author had attempted to clarify the code
+by adding MORE statements to make the data movement more ``consistent``
+instead of simplifying his implementation and cutting down on all data
+movement (in particular, his use of L1, R1, L2, R2), and it was full of
+idiotic ``tweaks`` for particular machines which failed to deliver significant
+speedups but which did obfuscate everything.  so i took the test data
+from his verification program and rewrote everything else.
+
+a while later i ran across the great crypt(3) package mentioned above.
+the fact that this guy was computing 2 sboxes per table lookup rather
+than one (and using a MUCH larger table in the process) emboldened me to
+do the same - it was a trivial change from which i had been scared away
+by the larger table size.  in his case he didn't realize you don't need to keep
+the working data in TWO forms, one for easy use of half the sboxes in
+indexing, the other for easy use of the other half; instead you can keep
+it in the form for the first half and use a simple rotate to get the other
+half.  this means i have (almost) half the data manipulation and half
+the table size.  in fairness though he might be encoding something particular
+to crypt(3) in his tables - i didn't check.
+
+i'm glad that i implemented it the way i did, because this C version is
+portable (the ifdef's are performance enhancements) and it is faster
+than versions hand-written in assembly for the sparc!
+
+
+porting notes
+=============
+
+one thing i did not want to do was write an enormous mess
+which depended on endedness and other machine quirks,
+and which necessarily produced different code and different lookup tables
+for different machines.  see the kerberos code for an example
+of what i didn't want to do; all their endedness-specific ``optimizations``
+obfuscate the code and in the end were slower than a simpler machine
+independent approach.  however, there are always some portability
+considerations of some kind, and i have included some options
+for varying numbers of register variables.
+perhaps some will still regard the result as a mess!
+
+1) i assume everything is byte addressable, although i don't actually
+   depend on the byte order, and that bytes are 8 bits.
+   i assume word pointers can be freely cast to and from char pointers.
+   note that 99% of C programs make these assumptions.
+   i always use unsigned char's if the high bit could be set.
+2) the typedef ``word`` means a 32 bit unsigned integral type.
+   if ``unsigned long`` is not 32 bits, change the typedef in desCore.h.
+   i assume sizeof(word) == 4 EVERYWHERE.
+
+the (worst-case) cost of my NOT doing endedness-specific optimizations
+in the data loading and storing code surrounding the key iterations
+is less than 12%.  also, there is the added benefit that
+the input and output work areas do not need to be word-aligned.
+
+
+OPTIONAL performance optimizations
+==================================
+
+1) you should define one of ``i386``, ``vax``, ``mc68000``, or ``sparc``,
+   whichever one is closest to the capabilities of your machine.
+   see the start of desCode.h to see exactly what this selection implies.
+   note that if you select the wrong one, the des code will still work;
+   these are just performance tweaks.
+2) for those with functional ``asm`` keywords: you should change the
+   ROR and ROL macros to use machine rotate instructions if you have them.
+   this will save 2 instructions and a temporary per use,
+   or about 32 to 40 instructions per en/decryption.
+
+   note that gcc is smart enough to translate the ROL/R macros into
+   machine rotates!
+
+these optimizations are all rather persnickety, yet with them you should
+be able to get performance equal to assembly-coding, except that:
+
+1) with the lack of a bit rotate operator in C, rotates have to be synthesized
+   from shifts.  so access to ``asm`` will speed things up if your machine
+   has rotates, as explained above in (3) (not necessary if you use gcc).
+2) if your machine has less than 12 32-bit registers i doubt your compiler will
+   generate good code.
+
+   ``i386`` tries to configure the code for a 386 by only declaring 3 registers
+   (it appears that gcc can use ebx, esi and edi to hold register variables).
+   however, if you like assembly coding, the 386 does have 7 32-bit registers,
+   and if you use ALL of them, use ``scaled by 8`` address modes with displacement
+   and other tricks, you can get reasonable routines for DesQuickCore... with
+   about 250 instructions apiece.  For DesSmall... it will help to rearrange
+   des_keymap, i.e., now the sbox # is the high part of the index and
+   the 6 bits of data is the low part; it helps to exchange these.
+
+   since i have no way to conveniently test it i have not provided my
+   shoehorned 386 version.  note that with this release of desCore, gcc is able
+   to put everything in registers(!), and generate about 370 instructions apiece
+   for the DesQuickCore... routines!
+
+coding notes
+============
+
+the en/decryption routines each use 6 necessary register variables,
+with 4 being actively used at once during the inner iterations.
+if you don't have 4 register variables get a new machine.
+up to 8 more registers are used to hold constants in some configurations.
+
+i assume that the use of a constant is more expensive than using a register:
+
+a) additionally, i have tried to put the larger constants in registers.
+   registering priority was by the following:
+
+	- anything more than 12 bits (bad for RISC and CISC)
+	- greater than 127 in value (can't use movq or byte immediate on CISC)
+	- 9-127 (may not be able to use CISC shift immediate or add/sub quick),
+	- 1-8 were never registered, being the cheapest constants.
+
+b) the compiler may be too stupid to realize table and table+256 should
+   be assigned to different constant registers and instead repetitively
+   do the arithmetic, so i assign these to explicit ``m`` register variables
+   when possible and helpful.
+
+i assume that indexing is cheaper or equivalent to auto increment/decrement,
+where the index is 7 bits unsigned or smaller.
+this assumption is reversed for 68k and vax.
+
+i assume that addresses can be cheaply formed from two registers,
+or from a register and a small constant.
+for the 68000, the ``two registers and small offset`` form is used sparingly.
+all index scaling is done explicitly - no hidden shifts by log2(sizeof).
+
+the code is written so that even a dumb compiler
+should never need more than one hidden temporary,
+increasing the chance that everything will fit in the registers.
+KEEP THIS MORE SUBTLE POINT IN MIND IF YOU REWRITE ANYTHING.
+
+(actually, there are some code fragments now which do require two temps,
+but fixing it would either break the structure of the macros or
+require declaring another temporary).
+
+
+special efficient data format
+==============================
+
+bits are manipulated in this arrangement most of the time (S7 S5 S3 S1)::
+
+	003130292827xxxx242322212019xxxx161514131211xxxx080706050403xxxx
+
+(the x bits are still there, i'm just emphasizing where the S boxes are).
+bits are rotated left 4 when computing S6 S4 S2 S0::
+
+	282726252423xxxx201918171615xxxx121110090807xxxx040302010031xxxx
+
+the rightmost two bits are usually cleared so the lower byte can be used
+as an index into an sbox mapping table. the next two x'd bits are set
+to various values to access different parts of the tables.
+
+
+how to use the routines
+=======================
+
+datatypes:
+	pointer to 8 byte area of type DesData
+	used to hold keys and input/output blocks to des.
+
+	pointer to 128 byte area of type DesKeys
+	used to hold full 768-bit key.
+	must be long-aligned.
+
+DesQuickInit()
+	call this before using any other routine with ``Quick`` in its name.
+	it generates the special 64k table these routines need.
+DesQuickDone()
+	frees this table
+
+DesMethod(m, k)
+	m points to a 128byte block, k points to an 8 byte des key
+	which must have odd parity (or -1 is returned) and which must
+	not be a (semi-)weak key (or -2 is returned).
+	normally DesMethod() returns 0.
+
+	m is filled in from k so that when one of the routines below
+	is called with m, the routine will act like standard des
+	en/decryption with the key k. if you use DesMethod,
+	you supply a standard 56bit key; however, if you fill in
+	m yourself, you will get a 768bit key - but then it won't
+	be standard.  it's 768bits not 1024 because the least significant
+	two bits of each byte are not used.  note that these two bits
+	will be set to magic constants which speed up the encryption/decryption
+	on some machines.  and yes, each byte controls
+	a specific sbox during a specific iteration.
+
+	you really shouldn't use the 768bit format directly;  i should
+	provide a routine that converts 128 6-bit bytes (specified in
+	S-box mapping order or something) into the right format for you.
+	this would entail some byte concatenation and rotation.
+
+Des{Small|Quick}{Fips|Core}{Encrypt|Decrypt}(d, m, s)
+	performs des on the 8 bytes at ``s`` into the 8 bytes at ``d``
+	(``d``, ``s``: ``char *``).
+
+	uses m as a 768bit key as explained above.
+
+	the Encrypt|Decrypt choice is obvious.
+
+	Fips|Core determines whether a completely standard FIPS initial
+	and final permutation is done; if not, then the data is loaded
+	and stored in a nonstandard bit order (FIPS w/o IP/FP).
+
+	Fips slows down Quick by 10%, Small by 9%.
+
+	Small|Quick determines whether you use the normal routine
+	or the crazy quick one which gobbles up 64k more of memory.
+	Small is 50% slower then Quick, but Quick needs 32 times as much
+	memory.  Quick is included for programs that do nothing but DES,
+	e.g., encryption filters, etc.
+
+
+Getting it to compile on your machine
+=====================================
+
+there are no machine-dependencies in the code (see porting),
+except perhaps the ``now()`` macro in desTest.c.
+ALL generated tables are machine independent.
+you should edit the Makefile with the appropriate optimization flags
+for your compiler (MAX optimization).
+
+
+Speeding up kerberos (and/or its des library)
+=============================================
+
+note that i have included a kerberos-compatible interface in desUtil.c
+through the functions des_key_sched() and des_ecb_encrypt().
+to use these with kerberos or kerberos-compatible code put desCore.a
+ahead of the kerberos-compatible library on your linker's command line.
+you should not need to #include desCore.h;  just include the header
+file provided with the kerberos library.
+
+Other uses
+==========
+
+the macros in desCode.h would be very useful for putting inline des
+functions in more complicated encryption routines.
diff --git a/Documentation/crypto/descore-readme.txt b/Documentation/crypto/descore-readme.txt
deleted file mode 100644
index 16e9e63..0000000
--- a/Documentation/crypto/descore-readme.txt
+++ /dev/null
@@ -1,352 +0,0 @@
-Below is the original README file from the descore.shar package.
-------------------------------------------------------------------------------
-
-des - fast & portable DES encryption & decryption.
-Copyright (C) 1992  Dana L. How
-
-This program is free software; you can redistribute it and/or modify
-it under the terms of the GNU Library General Public License as published by
-the Free Software Foundation; either version 2 of the License, or
-(at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU Library General Public License for more details.
-
-You should have received a copy of the GNU Library General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-
-Author's address: how@isl.stanford.edu
-
-$Id: README,v 1.15 1992/05/20 00:25:32 how E $
-
-
-==>> To compile after untarring/unsharring, just `make' <<==
-
-
-This package was designed with the following goals:
-1.	Highest possible encryption/decryption PERFORMANCE.
-2.	PORTABILITY to any byte-addressable host with a 32bit unsigned C type
-3.	Plug-compatible replacement for KERBEROS's low-level routines.
-
-This second release includes a number of performance enhancements for
-register-starved machines.  My discussions with Richard Outerbridge,
-71755.204@compuserve.com, sparked a number of these enhancements.
-
-To more rapidly understand the code in this package, inspect desSmallFips.i
-(created by typing `make') BEFORE you tackle desCode.h.  The latter is set
-up in a parameterized fashion so it can easily be modified by speed-daemon
-hackers in pursuit of that last microsecond.  You will find it more
-illuminating to inspect one specific implementation,
-and then move on to the common abstract skeleton with this one in mind.
-
-
-performance comparison to other available des code which i could
-compile on a SPARCStation 1 (cc -O4, gcc -O2):
-
-this code (byte-order independent):
-   30us per encryption (options: 64k tables, no IP/FP)
-   33us per encryption (options: 64k tables, FIPS standard bit ordering)
-   45us per encryption (options:  2k tables, no IP/FP)
-   48us per encryption (options:  2k tables, FIPS standard bit ordering)
-  275us to set a new key (uses 1k of key tables)
-	this has the quickest encryption/decryption routines i've seen.
-	since i was interested in fast des filters rather than crypt(3)
-	and password cracking, i haven't really bothered yet to speed up
-	the key setting routine. also, i have no interest in re-implementing
-	all the other junk in the mit kerberos des library, so i've just
-	provided my routines with little stub interfaces so they can be
-	used as drop-in replacements with mit's code or any of the mit-
-	compatible packages below. (note that the first two timings above
-	are highly variable because of cache effects).
-
-kerberos des replacement from australia (version 1.95):
-   53us per encryption (uses 2k of tables)
-   96us to set a new key (uses 2.25k of key tables)
-	so despite the author's inclusion of some of the performance
-	improvements i had suggested to him, this package's
-	encryption/decryption is still slower on the sparc and 68000.
-	more specifically, 19-40% slower on the 68020 and 11-35% slower
-	on the sparc,  depending on the compiler;
-	in full gory detail (ALT_ECB is a libdes variant):
-	compiler   	machine		desCore	libdes	ALT_ECB	slower by
-	gcc 2.1 -O2	Sun 3/110	304  uS	369.5uS	461.8uS	 22%
-	cc      -O1	Sun 3/110	336  uS	436.6uS	399.3uS	 19%
-	cc      -O2	Sun 3/110	360  uS	532.4uS	505.1uS	 40%
-	cc      -O4	Sun 3/110	365  uS	532.3uS	5