Merge tag 'for-linus-4.9-2' of git://git.code.sf.net/p/openipmi/linux-ipmi
Pull IPMI updates from Corey Minyard:
"A small bug fix and a new driver for acting as an IPMI device.
I was on vacation during the merge window (a long vacation) but this
is a bug fix that should go in and a new driver that shouldn't hurt
anything.
This has been in linux-next for a month or so"
* tag 'for-linus-4.9-2' of git://git.code.sf.net/p/openipmi/linux-ipmi:
ipmi: fix crash on reading version from proc after unregisted bmc
ipmi/bt-bmc: remove redundant return value check of platform_get_resource()
ipmi/bt-bmc: add a dependency on ARCH_ASPEED
ipmi: Fix ioremap error handling in bt-bmc
ipmi: add an Aspeed BT IPMI BMC driver
diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..89c411b
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,2 @@
+*.c diff=cpp
+*.h diff=cpp
diff --git a/.mailmap b/.mailmap
index de22dae..02d2614 100644
--- a/.mailmap
+++ b/.mailmap
@@ -69,11 +69,14 @@
James Bottomley <jejb@titanic.il.steeleye.com>
James E Wilson <wilson@specifix.com>
James Ketrenos <jketreno@io.(none)>
+Javi Merino <javi.merino@kernel.org> <javi.merino@arm.com>
<javier@osg.samsung.com> <javier.martinez@collabora.co.uk>
Jean Tourrilhes <jt@hpl.hp.com>
Jeff Garzik <jgarzik@pretzel.yyz.us>
Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
+Johan Hovold <johan@kernel.org> <jhovold@gmail.com>
+Johan Hovold <johan@kernel.org> <johan@hovoldconsulting.com>
John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
John Stultz <johnstul@us.ibm.com>
<josh@joshtriplett.org> <josh@freedesktop.org>
@@ -124,6 +127,7 @@
Peter Oruba <peter.oruba@amd.com>
Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
Praveen BP <praveenbp@ti.com>
+Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
Rajesh Shah <rajesh.shah@intel.com>
Ralf Baechle <ralf@linux-mips.org>
Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
@@ -159,6 +163,7 @@
Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com>
Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com>
Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com>
+Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@virtuozzo.com>
Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
Takashi YOSHII <takashi.yoshii.zj@renesas.com>
diff --git a/CREDITS b/CREDITS
index 2a3fbcd..513aaa3 100644
--- a/CREDITS
+++ b/CREDITS
@@ -1090,6 +1090,10 @@
S: Pleasanton, CA 94588
S: USA
+N: Dmitry Eremin-Solenikov
+E: dbaryshkov@gmail.com
+D: Power Supply Maintainer from v3.14 - v3.15
+
N: Doug Evans
E: dje@cygnus.com
D: Wrote Xenix FS (part of standard kernel since 0.99.15)
@@ -1944,6 +1948,11 @@
E: kraxel@suse.de
D: video4linux, bttv, vesafb, some scsi, misc fixes
+N: Hans J. Koch
+D: USERSPACE I/O, MAX6650
+D: Hans passed away in June 2016, and will be greatly missed.
+W: https://lwn.net/Articles/691000/
+
N: Harald Koenig
E: koenig@tat.physik.uni-tuebingen.de
D: XFree86 (S3), DCF77, some kernel hacks and fixes
@@ -2287,11 +2296,11 @@
N: Pavel Machek
E: pavel@ucw.cz
-D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd
+P: 4096R/92DFCE96 4FA7 9EEF FCD4 C44F C585 B8C7 C060 2241 92DF CE96
+D: Softcursor for vga, hypertech cdrom support, vcsa bugfix, nbd,
D: sun4/330 port, capabilities for elf, speedup for rm on ext2, USB,
-D: work on suspend-to-ram/disk, killing duplicates from ioctl32
-S: Volkova 1131
-S: 198 00 Praha 9
+D: work on suspend-to-ram/disk, killing duplicates from ioctl32,
+D: Altera SoCFPGA and Nokia N900 support.
S: Czech Republic
N: Paul Mackerras
@@ -3518,6 +3527,10 @@
S: Northborough, MA 01532
S: USA
+N: Doug Thompson
+E: dougthompson@xmission.com
+D: EDAC
+
N: Tommy Thorn
E: Tommy.Thorn@irisa.fr
W: http://www.irisa.fr/prive/thorn/index.html
@@ -3654,6 +3667,10 @@
S: 97078 Wuerzburg
S: Germany
+N: Jason Uhlenkott
+E: juhlenko@akamai.com
+D: I3000 EDAC driver
+
N: Greg Ungerer
E: gerg@snapgear.com
D: uClinux kernel hacker
@@ -3691,7 +3708,7 @@
N: Geert Uytterhoeven
E: geert@linux-m68k.org
W: http://users.telenet.be/geertu/
-P: 1024/862678A6 C51D 361C 0BD1 4C90 B275 C553 6EEA 11BA 8626 78A6
+P: 4096R/4804B4BC3F55EEFB 750D 82B0 A781 5431 5E25 925B 4804 B4BC 3F55 EEFB
D: m68k/Amiga and PPC/CHRP Longtrail coordinator
D: Frame buffer device and XF68_FBDev maintainer
D: m68k IDE maintainer
diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index cb9a6c6..3acc4f1 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -46,7 +46,8 @@
Intel-IOMMU.txt
- basic info on the Intel IOMMU virtualization support.
Makefile
- - some files in Documentation dir are actually sample code to build
+ - This file does nothing. Removing it breaks make htmldocs and
+ make distclean.
ManagementStyle
- how to (attempt to) manage kernel hackers.
RCU/
diff --git a/Documentation/80211/cfg80211.rst b/Documentation/80211/cfg80211.rst
new file mode 100644
index 0000000..b1e149e
--- /dev/null
+++ b/Documentation/80211/cfg80211.rst
@@ -0,0 +1,345 @@
+==================
+cfg80211 subsystem
+==================
+
+Device registration
+===================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Device registration
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_channel_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_channel
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_rate_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_rate
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_sta_ht_cap
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_supported_band
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_signal_type
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_params_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wireless_dev
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_new
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_register
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_unregister
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_free
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_name
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_dev
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_priv
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: priv_to_wiphy
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: set_wiphy_dev
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wdev_priv
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_iface_limit
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_iface_combination
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_check_combinations
+
+Actions and configuration
+=========================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Actions and configuration
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ops
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: vif_params
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: key_params
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: survey_info_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: survey_info
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_beacon_data
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ap_settings
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: station_parameters
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: rate_info_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: rate_info
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: station_info
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: monitor_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: mpath_info_flags
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: mpath_info
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: bss_parameters
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_txq_params
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_crypto_settings
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_auth_request
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_assoc_request
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_deauth_request
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_disassoc_request
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ibss_params
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_connect_params
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_pmksa
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_rx_mlme_mgmt
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_auth_timeout
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_rx_assoc_resp
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_assoc_timeout
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_tx_mlme_mgmt
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ibss_joined
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_connect_result
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_connect_bss
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_connect_timeout
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_roamed
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_disconnected
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ready_on_channel
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_remain_on_channel_expired
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_new_sta
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_rx_mgmt
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_mgmt_tx_status
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_cqm_rssi_notify
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_cqm_pktloss_notify
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_michael_mic_failure
+
+Scanning and BSS list handling
+==============================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Scanning and BSS list handling
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_ssid
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_scan_request
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_scan_done
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_bss
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_inform_bss
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_inform_bss_frame_data
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_inform_bss_data
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_unlink_bss
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_find_ie
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_bss_get_ie
+
+Utility functions
+=================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Utility functions
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_channel_to_frequency
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_frequency_to_channel
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_get_channel
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_get_response_rate
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_hdrlen
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_get_hdrlen_from_skb
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_radiotap_iterator
+
+Data path helpers
+=================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Data path helpers
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_data_to_8023
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_data_from_8023
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: ieee80211_amsdu_to_8023s
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_classify8021d
+
+Regulatory enforcement infrastructure
+=====================================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Regulatory enforcement infrastructure
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: regulatory_hint
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_apply_custom_regulatory
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: freq_reg_info
+
+RFkill integration
+==================
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: RFkill integration
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_rfkill_set_hw_state
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_rfkill_start_polling
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: wiphy_rfkill_stop_polling
+
+Test mode
+=========
+
+.. kernel-doc:: include/net/cfg80211.h
+ :doc: Test mode
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_testmode_alloc_reply_skb
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_testmode_reply
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_testmode_alloc_event_skb
+
+.. kernel-doc:: include/net/cfg80211.h
+ :functions: cfg80211_testmode_event
diff --git a/Documentation/80211/conf.py b/Documentation/80211/conf.py
new file mode 100644
index 0000000..20c7c27
--- /dev/null
+++ b/Documentation/80211/conf.py
@@ -0,0 +1,5 @@
+# -*- coding: utf-8; mode: python -*-
+
+project = "Linux 802.11 Driver Developer's Guide"
+
+tags.add("subproject")
diff --git a/Documentation/80211/index.rst b/Documentation/80211/index.rst
new file mode 100644
index 0000000..90bba47
--- /dev/null
+++ b/Documentation/80211/index.rst
@@ -0,0 +1,17 @@
+=====================================
+Linux 802.11 Driver Developer's Guide
+=====================================
+
+.. toctree::
+
+ introduction
+ cfg80211
+ mac80211
+ mac80211-advanced
+
+.. only:: subproject
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Documentation/80211/introduction.rst b/Documentation/80211/introduction.rst
new file mode 100644
index 0000000..4938fa8
--- /dev/null
+++ b/Documentation/80211/introduction.rst
@@ -0,0 +1,17 @@
+============
+Introduction
+============
+
+Explaining wireless 802.11 networking in the Linux kernel
+
+Copyright 2007-2009 Johannes Berg
+
+These books attempt to give a description of the various subsystems
+that play a role in 802.11 wireless networking in Linux. Since these
+books are for kernel developers they attempt to document the
+structures and functions used in the kernel as well as giving a
+higher-level overview.
+
+The reader is expected to be familiar with the 802.11 standard as
+published by the IEEE in 802.11-2007 (or possibly later versions).
+References to this standard will be given as "802.11-2007 8.1.5".
diff --git a/Documentation/80211/mac80211-advanced.rst b/Documentation/80211/mac80211-advanced.rst
new file mode 100644
index 0000000..70a89b2
--- /dev/null
+++ b/Documentation/80211/mac80211-advanced.rst
@@ -0,0 +1,295 @@
+=============================
+mac80211 subsystem (advanced)
+=============================
+
+Information contained within this part of the book is of interest only
+for advanced interaction of mac80211 with drivers to exploit more
+hardware capabilities and improve performance.
+
+LED support
+===========
+
+Mac80211 supports various ways of blinking LEDs. Wherever possible,
+device LEDs should be exposed as LED class devices and hooked up to the
+appropriate trigger, which will then be triggered appropriately by
+mac80211.
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_tx_led_name
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_rx_led_name
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_assoc_led_name
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_radio_led_name
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tpt_blink
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tpt_led_trigger_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_create_tpt_led_trigger
+
+Hardware crypto acceleration
+============================
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Hardware crypto acceleration
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: set_key_cmd
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_key_conf
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_key_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_tkip_p1k
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_tkip_p1k_iv
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_tkip_p2k
+
+Powersave support
+=================
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Powersave support
+
+Beacon filter support
+=====================
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Beacon filter support
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_beacon_loss
+
+Multiple queues and QoS support
+===============================
+
+TBD
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_queue_params
+
+Access point mode support
+=========================
+
+TBD
+
+Some parts of the if_conf should be discussed here instead
+
+Insert notes about VLAN interfaces with hw crypto here or in the hw
+crypto chapter.
+
+support for powersaving clients
+-------------------------------
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: AP support for powersaving clients
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_get_buffered_bc
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_beacon_get
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta_eosp
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_frame_release_type
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta_ps_transition
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta_ps_transition_ni
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta_set_buffered
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta_block_awake
+
+Supporting multiple virtual interfaces
+======================================
+
+TBD
+
+Note: WDS with identical MAC address should almost always be OK
+
+Insert notes about having multiple virtual interfaces with different MAC
+addresses here, note which configurations are supported by mac80211, add
+notes about supporting hw crypto with it.
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_iterate_active_interfaces
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_iterate_active_interfaces_atomic
+
+Station handling
+================
+
+TODO
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_sta
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: sta_notify_cmd
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_find_sta
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_find_sta_by_ifaddr
+
+Hardware scan offload
+=====================
+
+TBD
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_scan_completed
+
+Aggregation
+===========
+
+TX A-MPDU aggregation
+---------------------
+
+.. kernel-doc:: net/mac80211/agg-tx.c
+ :doc: TX A-MPDU aggregation
+
+.. WARNING: DOCPROC directive not supported: !Cnet/mac80211/agg-tx.c
+
+RX A-MPDU aggregation
+---------------------
+
+.. kernel-doc:: net/mac80211/agg-rx.c
+ :doc: RX A-MPDU aggregation
+
+.. WARNING: DOCPROC directive not supported: !Cnet/mac80211/agg-rx.c
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_ampdu_mlme_action
+
+Spatial Multiplexing Powersave (SMPS)
+=====================================
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Spatial multiplexing power save
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_request_smps
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_smps_mode
+
+TBD
+
+This part of the book describes the rate control algorithm interface and
+how it relates to mac80211 and drivers.
+
+Rate Control API
+================
+
+TBD
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_start_tx_ba_session
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_start_tx_ba_cb_irqsafe
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_stop_tx_ba_session
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_stop_tx_ba_cb_irqsafe
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rate_control_changed
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_rate_control
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: rate_control_send_low
+
+TBD
+
+This part of the book describes mac80211 internals.
+
+Key handling
+============
+
+Key handling basics
+-------------------
+
+.. kernel-doc:: net/mac80211/key.c
+ :doc: Key handling basics
+
+MORE TBD
+--------
+
+TBD
+
+Receive processing
+==================
+
+TBD
+
+Transmit processing
+===================
+
+TBD
+
+Station info handling
+=====================
+
+Programming information
+-----------------------
+
+.. kernel-doc:: net/mac80211/sta_info.h
+ :functions: sta_info
+
+.. kernel-doc:: net/mac80211/sta_info.h
+ :functions: ieee80211_sta_info_flags
+
+STA information lifetime rules
+------------------------------
+
+.. kernel-doc:: net/mac80211/sta_info.c
+ :doc: STA information lifetime rules
+
+Aggregation
+===========
+
+.. kernel-doc:: net/mac80211/sta_info.h
+ :functions: sta_ampdu_mlme
+
+.. kernel-doc:: net/mac80211/sta_info.h
+ :functions: tid_ampdu_tx
+
+.. kernel-doc:: net/mac80211/sta_info.h
+ :functions: tid_ampdu_rx
+
+Synchronisation
+===============
+
+TBD
+
+Locking, lots of RCU
diff --git a/Documentation/80211/mac80211.rst b/Documentation/80211/mac80211.rst
new file mode 100644
index 0000000..85a8335
--- /dev/null
+++ b/Documentation/80211/mac80211.rst
@@ -0,0 +1,216 @@
+===========================
+mac80211 subsystem (basics)
+===========================
+
+You should read and understand the information contained within this
+part of the book while implementing a mac80211 driver. In some chapters,
+advanced usage is noted; those chapters may be skipped if this isn't needed.
+
+This part of the book only covers station and monitor mode
+functionality; additional information required to implement the other
+modes is covered in the second part of the book.
+
+Basic hardware handling
+=======================
+
+TBD
+
+This chapter shall contain information on getting a hw struct allocated
+and registered with mac80211.
+
+Since it is required to allocate rates/modes before registering a hw
+struct, this chapter shall also contain information on setting up the
+rate/mode structs.
+
+Additionally, some discussion about the callbacks and the general
+programming model should be in here, including the definition of
+ieee80211_ops which will be referred to a lot.
+
+Finally, a discussion of hardware capabilities should be done with
+references to other parts of the book.
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_hw
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_hw_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: SET_IEEE80211_DEV
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: SET_IEEE80211_PERM_ADDR
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_ops
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_alloc_hw
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_register_hw
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_unregister_hw
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_free_hw
+
+PHY configuration
+=================
+
+TBD
+
+This chapter should describe PHY handling including start/stop callbacks
+and the various structures used.
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_conf
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_conf_flags
+
+Virtual interfaces
+==================
+
+TBD
+
+This chapter should describe virtual interface basics that are relevant
+to the driver (VLANs, MGMT etc are not.) It should explain the use of
+the add_iface/remove_iface callbacks as well as the interface
+configuration callbacks.
+
+Things related to AP mode should be discussed there.
+
+Things related to supporting multiple interfaces should be in the
+appropriate chapter, a BIG FAT note should be here about this though and
+the recommendation to allow only a single interface in STA mode at
+first!
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_vif
+
+Receive and transmit processing
+===============================
+
+what should be here
+-------------------
+
+TBD
+
+This should describe the receive and transmit paths in mac80211/the
+drivers as well as transmit status handling.
+
+Frame format
+------------
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Frame format
+
+Packet alignment
+----------------
+
+.. kernel-doc:: net/mac80211/rx.c
+ :doc: Packet alignment
+
+Calling into mac80211 from interrupts
+-------------------------------------
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Calling mac80211 from interrupts
+
+functions/definitions
+---------------------
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rx_status
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: mac80211_rx_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: mac80211_tx_info_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: mac80211_tx_control_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: mac80211_rate_control_flags
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_rate
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_info
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_info_clear_status
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rx
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rx_ni
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rx_irqsafe
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_status
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_status_ni
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_tx_status_irqsafe
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rts_get
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_rts_duration
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_ctstoself_get
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_ctstoself_duration
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_generic_frame_duration
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_wake_queue
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_stop_queue
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_wake_queues
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_stop_queues
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_queue_stopped
+
+Frame filtering
+===============
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: Frame filtering
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_filter_flags
+
+The mac80211 workqueue
+======================
+
+.. kernel-doc:: include/net/mac80211.h
+ :doc: mac80211 workqueue
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_queue_work
+
+.. kernel-doc:: include/net/mac80211.h
+ :functions: ieee80211_queue_delayed_work
diff --git a/Documentation/ABI/testing/sysfs-bus-rbd b/Documentation/ABI/testing/sysfs-bus-rbd
index 2ddd680..f208ac5 100644
--- a/Documentation/ABI/testing/sysfs-bus-rbd
+++ b/Documentation/ABI/testing/sysfs-bus-rbd
@@ -6,7 +6,7 @@
Being used for adding and removing rbd block devices.
-Usage: <mon ip addr> <options> <pool name> <rbd image name> [snap name]
+Usage: <mon ip addr> <options> <pool name> <rbd image name> [<snap name>]
$ echo "192.168.0.1 name=admin rbd foo" > /sys/bus/rbd/add
@@ -14,9 +14,13 @@
will be assigned for any registered block device. If snapshot is used, it will
be mapped read-only.
-Removal of a device:
+Usage: <dev-id> [force]
- $ echo <dev-id> > /sys/bus/rbd/remove
+ $ echo 2 > /sys/bus/rbd/remove
+
+Optional "force" argument which when passed will wait for running requests and
+then unmap the image. Requests sent to the driver after initiating the removal
+will be failed. (August 2016, since 4.9.)
What: /sys/bus/rbd/add_single_major
Date: December 2013
@@ -43,10 +47,25 @@
Entries under /sys/bus/rbd/devices/<dev-id>/
--------------------------------------------
+client_addr
+
+ The ceph unique client entity_addr_t (address + nonce).
+ The format is <address>:<port>/<nonce>: '1.2.3.4:1234/5678' or
+ '[1:2:3:4:5:6:7:8]:1234/5678'. (August 2016, since 4.9.)
+
client_id
The ceph unique client id that was assigned for this specific session.
+cluster_fsid
+
+ The ceph cluster UUID. (August 2016, since 4.9.)
+
+config_info
+
+ The string written into /sys/bus/rbd/add{,_single_major}. (August
+ 2016, since 4.9.)
+
features
A hexadecimal encoding of the feature bits for this image.
@@ -92,6 +111,10 @@
The current snapshot for which the device is mapped.
+snap_id
+
+ The current snapshot's id. (August 2016, since 4.9.)
+
parent
Information identifying the chain of parent images in a layered rbd
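For illustration, the updated rbd interface above might be exercised roughly as follows; this is a sketch assuming device id 2 (as in the removal example) and a placeholder image name:

    echo "192.168.0.1 name=admin rbd foo" > /sys/bus/rbd/add
    cat /sys/bus/rbd/devices/2/client_addr     # address:port/nonce of this client
    cat /sys/bus/rbd/devices/2/cluster_fsid    # ceph cluster UUID
    cat /sys/bus/rbd/devices/2/config_info     # string originally written to add
    cat /sys/bus/rbd/devices/2/snap_id         # current snapshot's id
    echo "2 force" > /sys/bus/rbd/remove       # wait for running requests, then unmap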
diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
index 4ba0a2a..640f65e 100644
--- a/Documentation/ABI/testing/sysfs-class-cxl
+++ b/Documentation/ABI/testing/sysfs-class-cxl
@@ -220,8 +220,11 @@
Date: October 2014
Contact: linuxppc-dev@lists.ozlabs.org
Description: write only
- Writing 1 will issue a PERST to card which may cause the card
- to reload the FPGA depending on load_image_on_perst.
+ Writing 1 will issue a PERST to the card provided there are no
+ contexts active on any one of the card AFUs. This may cause
+ the card to reload the FPGA depending on load_image_on_perst.
+ Writing -1 will force a PERST irrespective of any active
+ contexts on the card AFUs.
Users: https://github.com/ibm-capi/libcxl
What: /sys/class/cxl/<card>/perst_reloads_same_image (not in a guest)
diff --git a/Documentation/ABI/testing/sysfs-class-led b/Documentation/ABI/testing/sysfs-class-led
index 3646ec8..86ace28 100644
--- a/Documentation/ABI/testing/sysfs-class-led
+++ b/Documentation/ABI/testing/sysfs-class-led
@@ -24,7 +24,8 @@
of led events.
You can change triggers in a similar manner to the way an IO
scheduler is chosen. Trigger specific parameters can appear in
- /sys/class/leds/<led> once a given trigger is selected.
+ /sys/class/leds/<led> once a given trigger is selected. For
+ their documentation see sysfs-class-led-trigger-*.
What: /sys/class/leds/<led>/inverted
Date: January 2011
diff --git a/Documentation/ABI/testing/sysfs-class-led-trigger-oneshot b/Documentation/ABI/testing/sysfs-class-led-trigger-oneshot
new file mode 100644
index 0000000..378a3a4
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-led-trigger-oneshot
@@ -0,0 +1,36 @@
+What: /sys/class/leds/<led>/delay_on
+Date: Jun 2012
+KernelVersion: 3.6
+Contact: linux-leds@vger.kernel.org
+Description:
+ Specifies for how many milliseconds the LED has to stay at
+ LED_FULL brightness after it has been armed.
+ Defaults to 100 ms.
+
+What: /sys/class/leds/<led>/delay_off
+Date: Jun 2012
+KernelVersion: 3.6
+Contact: linux-leds@vger.kernel.org
+Description:
+ Specifies for how many milliseconds the LED has to stay at
+ LED_OFF brightness after it has been armed.
+ Defaults to 100 ms.
+
+What: /sys/class/leds/<led>/invert
+Date: Jun 2012
+KernelVersion: 3.6
+Contact: linux-leds@vger.kernel.org
+Description:
+ Reverse the blink logic. If set to 0 (default) blink on for
+ delay_on ms, then blink off for delay_off ms, leaving the LED
+ normally off. If set to 1, blink off for delay_off ms, then
+ blink on for delay_on ms, leaving the LED normally on.
+ Setting this value also immediately changes the LED state.
+
+What: /sys/class/leds/<led>/shot
+Date: Jun 2012
+KernelVersion: 3.6
+Contact: linux-leds@vger.kernel.org
+Description:
+ Write any non-empty string to signal an event; this starts a
+ blink sequence if one is not already running.
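A minimal usage sketch for the oneshot trigger described above, assuming a LED exposed as <led> and that the trigger registers under the name "oneshot"; the delay values are arbitrary:

    echo oneshot > /sys/class/leds/<led>/trigger   # select the trigger
    echo 500 > /sys/class/leds/<led>/delay_on      # ms at LED_FULL per shot
    echo 200 > /sys/class/leds/<led>/delay_off     # ms at LED_OFF per shot
    echo 1 > /sys/class/leds/<led>/shot            # arm and fire one blink sequence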
diff --git a/Documentation/ABI/testing/sysfs-class-led-trigger-usbport b/Documentation/ABI/testing/sysfs-class-led-trigger-usbport
new file mode 100644
index 0000000..f440e69
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-led-trigger-usbport
@@ -0,0 +1,12 @@
+What: /sys/class/leds/<led>/ports/<port>
+Date: September 2016
+KernelVersion: 4.9
+Contact: linux-leds@vger.kernel.org
+ linux-usb@vger.kernel.org
+Description:
+ Every dir entry represents a single USB port that can be
+ selected for the USB port trigger. Selecting a port makes the
+ trigger observe it for any connected devices and light the LED
+ if any are present.
+ Echoing "1" selects the USB port; echoing "0" unselects it.
+ The current state can also be read.
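A rough sketch of driving the usbport trigger described above; the trigger name "usbport" and the port name "usb1-port1" are assumptions, standing in for whatever ports the system actually exposes:

    echo usbport > /sys/class/leds/<led>/trigger      # select the USB port trigger
    echo 1 > /sys/class/leds/<led>/ports/usb1-port1   # observe this port
    cat /sys/class/leds/<led>/ports/usb1-port1        # read back the current selection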
diff --git a/Documentation/ABI/testing/sysfs-class-mic.txt b/Documentation/ABI/testing/sysfs-class-mic.txt
index d45eed2..6ef6826 100644
--- a/Documentation/ABI/testing/sysfs-class-mic.txt
+++ b/Documentation/ABI/testing/sysfs-class-mic.txt
@@ -153,7 +153,7 @@
What: /sys/class/mic/mic(x)/heartbeat_enable
Date: March 2015
-KernelVersion: 3.20
+KernelVersion: 4.4
Contact: Ashutosh Dixit <ashutosh.dixit@intel.com>
Description:
The MIC drivers detect and inform user space about card crashes
diff --git a/Documentation/ABI/testing/sysfs-class-power b/Documentation/ABI/testing/sysfs-class-power
index fa05719..f85ce9e 100644
--- a/Documentation/ABI/testing/sysfs-class-power
+++ b/Documentation/ABI/testing/sysfs-class-power
@@ -22,7 +22,7 @@
What: /sys/class/power_supply/max14577-charger/device/fast_charge_timer
Date: October 2014
KernelVersion: 3.18.0
-Contact: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Contact: Krzysztof Kozlowski <krzk@kernel.org>
Description:
This entry shows and sets the maximum time the max14577
charger operates in fast-charge mode. When the timer expires
@@ -36,7 +36,7 @@
What: /sys/class/power_supply/max77693-charger/device/fast_charge_timer
Date: January 2015
KernelVersion: 3.19.0
-Contact: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Contact: Krzysztof Kozlowski <krzk@kernel.org>
Description:
This entry shows and sets the maximum time the max77693
charger operates in fast-charge mode. When the timer expires
@@ -50,7 +50,7 @@
What: /sys/class/power_supply/max77693-charger/device/top_off_threshold_current
Date: January 2015
KernelVersion: 3.19.0
-Contact: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Contact: Krzysztof Kozlowski <krzk@kernel.org>
Description:
This entry shows and sets the charging current threshold for
entering top-off charging mode. When charging current in fast
@@ -65,7 +65,7 @@
What: /sys/class/power_supply/max77693-charger/device/top_off_timer
Date: January 2015
KernelVersion: 3.19.0
-Contact: Krzysztof Kozlowski <k.kozlowski@samsung.com>
+Contact: Krzysztof Kozlowski <krzk@kernel.org>
Description:
This entry shows and sets the maximum time the max77693
charger operates in top-off charge mode. When the timer expires
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
index db197a8..305dffd 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
+++ b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
@@ -35,6 +35,12 @@
DF-EX <*--------> G25 <-> G27
DF-EX <*----------------> G27
+ G29:
+ DF-EX <*> DFP <-> G25 <-> G27 <-> G29
+ DF-EX <*--------> G25 <-> G27 <-> G29
+ DF-EX <*----------------> G27 <-> G29
+ DF-EX <*------------------------> G29
+
DFGT:
DF-EX <*> DFP <-> DFGT
DF-EX <*--------> DFGT
@@ -50,3 +56,12 @@
alternate mode the wheel might be switched to.
It is a read-only value.
This entry is not created for devices that have only one mode.
+
+What: /sys/bus/hid/drivers/logitech/<dev>/combine_pedals
+Date: Sep 2016
+KernelVersion: 4.9
+Contact: Simon Wood <simon@mungewell.org>
+Description: Controls whether a combined value of accelerator and brake is
+ reported on the Y axis of the controller. Useful for older games
+ which do not work with a separate accelerator/brake axis.
+ Off ('0') by default, enabled by setting '1'.
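A short sketch of toggling the attribute described above; <dev> is a placeholder for the wheel's HID device directory:

    echo 1 > /sys/bus/hid/drivers/logitech/<dev>/combine_pedals   # report combined value on Y axis
    echo 0 > /sys/bus/hid/drivers/logitech/<dev>/combine_pedals   # back to separate axes (default)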
diff --git a/Documentation/ABI/testing/sysfs-driver-wacom b/Documentation/ABI/testing/sysfs-driver-wacom
index dca4293..2aa5503 100644
--- a/Documentation/ABI/testing/sysfs-driver-wacom
+++ b/Documentation/ABI/testing/sysfs-driver-wacom
@@ -24,6 +24,7 @@
Date: August 2014
Contact: linux-input@vger.kernel.org
Description:
+ <obsoleted by the LED class API now exported by the driver>
Writing to this file sets the status LED luminance (1..127)
when the stylus does not touch the tablet surface, and no
button is pressed on the stylus. This luminance level is
@@ -33,6 +34,7 @@
Date: August 2014
Contact: linux-input@vger.kernel.org
Description:
+ <obsoleted by the LED class API now exported by the driver>
Writing to this file sets the status LED luminance (1..127)
when the stylus touches the tablet surface, or any button is
pressed on the stylus.
@@ -41,6 +43,7 @@
Date: August 2014
Contact: linux-input@vger.kernel.org
Description:
+ <obsoleted by the LED class API now exported by the driver>
Writing to this file sets which one of the four (for Intuos 4
and Intuos 5) or of the right four (for Cintiq 21UX2 and Cintiq
24HD) status LEDs is active (0..3). The other three LEDs on the
@@ -50,6 +53,7 @@
Date: August 2014
Contact: linux-input@vger.kernel.org
Description:
+ <obsoleted by the LED class API now exported by the driver>
Writing to this file sets which one of the left four (for Cintiq 21UX2
and Cintiq 24HD) status LEDs is active (0..3). The other three LEDs on
the left are always inactive.
@@ -91,6 +95,7 @@
Date: July 2015
Contact: linux-input@vger.kernel.org
Description:
+ <obsoleted by the LED class API now exported by the driver>
Reading from this file reports the mode status of the
remote as indicated by the LED lights on the device. If no
reports have been received from the paired device, reading
diff --git a/Documentation/ABI/testing/sysfs-i2c-bmp085 b/Documentation/ABI/testing/sysfs-i2c-bmp085
deleted file mode 100644
index 585962a..0000000
--- a/Documentation/ABI/testing/sysfs-i2c-bmp085
+++ /dev/null
@@ -1,31 +0,0 @@
-What: /sys/bus/i2c/devices/<busnum>-<devaddr>/pressure0_input
-Date: June 2010
-Contact: Christoph Mair <christoph.mair@gmail.com>
-Description: Start a pressure measurement and read the result. Values
- represent the ambient air pressure in pascal (0.01 millibar).
-
- Reading: returns the current air pressure.
-
-
-What: /sys/bus/i2c/devices/<busnum>-<devaddr>/temp0_input
-Date: June 2010
-Contact: Christoph Mair <christoph.mair@gmail.com>
-Description: Measure the ambient temperature. The returned value represents
- the ambient temperature in units of 0.1 degree celsius.
-
- Reading: returns the current temperature.
-
-
-What: /sys/bus/i2c/devices/<busnum>-<devaddr>/oversampling
-Date: June 2010
-Contact: Christoph Mair <christoph.mair@gmail.com>
-Description: Tell the bmp085 to use more samples to calculate a pressure
- value. When writing to this file the chip will use 2^x samples
- to calculate the next pressure value with x being the value
- written. Using this feature will decrease RMS noise and
- increase the measurement time.
-
- Reading: returns the current oversampling setting.
-
- Writing: sets a new oversampling setting.
- Accepted values: 0..3.
diff --git a/Documentation/ABI/testing/sysfs-kernel-irq b/Documentation/ABI/testing/sysfs-kernel-irq
new file mode 100644
index 0000000..eb074b1
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-irq
@@ -0,0 +1,53 @@
+What: /sys/kernel/irq
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: Directory containing information about the system's IRQs.
+ Specifically, data from the associated struct irq_desc.
+ The information here is similar to that in /proc/interrupts
+ but in a more machine-friendly format. This directory contains
+ one subdirectory for each Linux IRQ number.
+
+What: /sys/kernel/irq/<irq>/actions
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: The IRQ action chain. A comma-separated list of zero or more
+ device names associated with this interrupt.
+
+What: /sys/kernel/irq/<irq>/chip_name
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: Human-readable chip name supplied by the associated device
+ driver.
+
+What: /sys/kernel/irq/<irq>/hwirq
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: When interrupt translation domains are used, this file contains
+ the underlying hardware IRQ number used for this Linux IRQ.
+
+What: /sys/kernel/irq/<irq>/name
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: Human-readable flow handler name as defined by the irq chip
+ driver.
+
+What: /sys/kernel/irq/<irq>/per_cpu_count
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: The number of times the interrupt has fired since boot. This
+ is a comma-separated list of counters; one per CPU in CPU id
+ order. NOTE: This file consistently shows counters for all
+ CPU ids. This differs from the behavior of /proc/interrupts
+ which only shows counters for online CPUs.
+
+What: /sys/kernel/irq/<irq>/type
+Date: September 2016
+KernelVersion: 4.9
+Contact: Craig Gallek <kraig@google.com>
+Description: The type of the interrupt. Either the string 'level' or 'edge'.
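For illustration, the per-IRQ entries described above could be inspected like this; the IRQ number 18 is a placeholder for any Linux IRQ present on the system:

    cat /sys/kernel/irq/18/actions          # device name(s) on this IRQ
    cat /sys/kernel/irq/18/chip_name        # irq chip name from the driver
    cat /sys/kernel/irq/18/hwirq            # underlying hardware IRQ number
    cat /sys/kernel/irq/18/type             # 'level' or 'edge'
    cat /sys/kernel/irq/18/per_cpu_count    # one counter per CPU id, comma-separated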
diff --git a/Documentation/Changes b/Documentation/Changes
index ec97b77..22797a1 100644
--- a/Documentation/Changes
+++ b/Documentation/Changes
@@ -1,8 +1,13 @@
+.. _changes:
+
+Minimal requirements to compile the Kernel
+++++++++++++++++++++++++++++++++++++++++++
+
Intro
=====
This document is designed to provide a list of the minimum levels of
-software necessary to run the 3.0 kernels.
+software necessary to run the 4.x kernels.
This document is originally based on my "Changes" file for 2.0.x kernels
and therefore owes credit to the same people as that file (Jared Mauch,
@@ -10,9 +15,9 @@
'net).
Current Minimal Requirements
-============================
+****************************
-Upgrade to at *least* these software revisions before thinking you've
+Upgrade to at **least** these software revisions before thinking you've
encountered a bug! If you're unsure what version you're currently
running, the suggested command should tell you.
@@ -21,34 +26,40 @@
systems; obviously, if you don't have any ISDN hardware, for example,
you probably needn't concern yourself with isdn4k-utils.
-o GNU C 3.2 # gcc --version
-o GNU make 3.80 # make --version
-o binutils 2.12 # ld -v
-o util-linux 2.10o # fdformat --version
-o module-init-tools 0.9.10 # depmod -V
-o e2fsprogs 1.41.4 # e2fsck -V
-o jfsutils 1.1.3 # fsck.jfs -V
-o reiserfsprogs 3.6.3 # reiserfsck -V
-o xfsprogs 2.6.0 # xfs_db -V
-o squashfs-tools 4.0 # mksquashfs -version
-o btrfs-progs 0.18 # btrfsck
-o pcmciautils 004 # pccardctl -V
-o quota-tools 3.09 # quota -V
-o PPP 2.4.0 # pppd --version
-o isdn4k-utils 3.1pre1 # isdnctrl 2>&1|grep version
-o nfs-utils 1.0.5 # showmount --version
-o procps 3.2.0 # ps --version
-o oprofile 0.9 # oprofiled --version
-o udev 081 # udevd --version
-o grub 0.93 # grub --version || grub-install --version
-o mcelog 0.6 # mcelog --version
-o iptables 1.4.2 # iptables -V
-o openssl & libcrypto 1.0.0 # openssl version
-o bc 1.06.95 # bc --version
+====================== =============== ========================================
+ Program Minimal version Command to check the version
+====================== =============== ========================================
+GNU C 3.2 gcc --version
+GNU make 3.80 make --version
+binutils 2.12 ld -v
+util-linux 2.10o fdformat --version
+module-init-tools 0.9.10 depmod -V
+e2fsprogs 1.41.4 e2fsck -V
+jfsutils 1.1.3 fsck.jfs -V
+reiserfsprogs 3.6.3 reiserfsck -V
+xfsprogs 2.6.0 xfs_db -V
+squashfs-tools 4.0 mksquashfs -version
+btrfs-progs 0.18 btrfsck
+pcmciautils 004 pccardctl -V
+quota-tools 3.09 quota -V
+PPP 2.4.0 pppd --version
+isdn4k-utils 3.1pre1 isdnctrl 2>&1|grep version
+nfs-utils 1.0.5 showmount --version
+procps 3.2.0 ps --version
+oprofile 0.9 oprofiled --version
+udev 081 udevd --version
+grub 0.93 grub --version || grub-install --version
+mcelog 0.6 mcelog --version
+iptables 1.4.2 iptables -V
+openssl & libcrypto 1.0.0 openssl version
+bc 1.06.95 bc --version
+Sphinx\ [#f1]_ 1.2 sphinx-build --version
+====================== =============== ========================================
+.. [#f1] Sphinx is needed only to build the Kernel documentation
Kernel compilation
-==================
+******************
GCC
---
@@ -64,16 +75,16 @@
Binutils
--------
-Linux on IA-32 has recently switched from using as86 to using gas for
-assembling the 16-bit boot code, removing the need for as86 to compile
+Linux on IA-32 has recently switched from using ``as86`` to using ``gas`` for
+assembling the 16-bit boot code, removing the need for ``as86`` to compile
your kernel. This change does, however, mean that you need a recent
release of binutils.
Perl
----
-You will need perl 5 and the following modules: Getopt::Long, Getopt::Std,
-File::Basename, and File::Find to build the kernel.
+You will need perl 5 and the following modules: ``Getopt::Long``,
+``Getopt::Std``, ``File::Basename``, and ``File::Find`` to build the kernel.
BC
--
@@ -93,7 +104,7 @@
System utilities
-================
+****************
Architectural changes
---------------------
@@ -115,7 +126,7 @@
Util-linux
----------
-New versions of util-linux provide *fdisk support for larger disks,
+New versions of util-linux provide ``fdisk`` support for larger disks,
support new options to mount, recognize more supported partition
types, have a fdformat which works with 2.4 kernels, and similar goodies.
You'll probably want to upgrade.
@@ -125,54 +136,57 @@
If the unthinkable happens and your kernel oopses, you may need the
ksymoops tool to decode it, but in most cases you don't.
-It is generally preferred to build the kernel with CONFIG_KALLSYMS so
+It is generally preferred to build the kernel with ``CONFIG_KALLSYMS`` so
that it produces readable dumps that can be used as-is (this also
produces better output than ksymoops). If for some reason your kernel
-is not build with CONFIG_KALLSYMS and you have no way to rebuild and
+is not built with ``CONFIG_KALLSYMS`` and you have no way to rebuild and
reproduce the Oops with that option, then you can still decode that Oops
with ksymoops.
Module-Init-Tools
-----------------
-A new module loader is now in the kernel that requires module-init-tools
+A new module loader is now in the kernel that requires ``module-init-tools``
to use. It is backward compatible with the 2.4.x series kernels.
Mkinitrd
--------
-These changes to the /lib/modules file tree layout also require that
+These changes to the ``/lib/modules`` file tree layout also require that
mkinitrd be upgraded.
E2fsprogs
---------
-The latest version of e2fsprogs fixes several bugs in fsck and
+The latest version of ``e2fsprogs`` fixes several bugs in fsck and
debugfs. Obviously, it's a good idea to upgrade.
JFSutils
--------
-The jfsutils package contains the utilities for the file system.
+The ``jfsutils`` package contains the utilities for the file system.
The following utilities are available:
-o fsck.jfs - initiate replay of the transaction log, and check
+
+- ``fsck.jfs`` - initiate replay of the transaction log, and check
and repair a JFS formatted partition.
-o mkfs.jfs - create a JFS formatted partition.
-o other file system utilities are also available in this package.
+
+- ``mkfs.jfs`` - create a JFS formatted partition.
+
+- other file system utilities are also available in this package.
Reiserfsprogs
-------------
The reiserfsprogs package should be used for reiserfs-3.6.x
(Linux kernels 2.4.x). It is a combined package and contains working
-versions of mkreiserfs, resize_reiserfs, debugreiserfs and
-reiserfsck. These utils work on both i386 and alpha platforms.
+versions of ``mkreiserfs``, ``resize_reiserfs``, ``debugreiserfs`` and
+``reiserfsck``. These utils work on both i386 and alpha platforms.
Xfsprogs
--------
-The latest version of xfsprogs contains mkfs.xfs, xfs_db, and the
-xfs_repair utilities, among others, for the XFS filesystem. It is
+The latest version of ``xfsprogs`` contains ``mkfs.xfs``, ``xfs_db``, and the
+``xfs_repair`` utilities, among others, for the XFS filesystem. It is
architecture independent and any version from 2.0.0 onward should
work correctly with this version of the XFS kernel code (2.6.0 or
later is recommended, due to some significant improvements).
@@ -180,7 +194,7 @@
PCMCIAutils
-----------
-PCMCIAutils replaces pcmcia-cs. It properly sets up
+PCMCIAutils replaces ``pcmcia-cs``. It properly sets up
PCMCIA sockets at system startup and loads the appropriate modules
for 16-bit PCMCIA devices if the kernel is modularized and the hotplug
subsystem is used.
@@ -198,19 +212,20 @@
A driver has been added to allow updating of Intel IA32 microcode,
accessible as a normal (misc) character device. If you are not using
-udev you may need to:
+udev you may need to::
-mkdir /dev/cpu
-mknod /dev/cpu/microcode c 10 184
-chmod 0644 /dev/cpu/microcode
+ mkdir /dev/cpu
+ mknod /dev/cpu/microcode c 10 184
+ chmod 0644 /dev/cpu/microcode
as root before you can use this. You'll probably also want to
get the user-space microcode_ctl utility to use with this.
udev
----
-udev is a userspace application for populating /dev dynamically with
-only entries for devices actually present. udev replaces the basic
+
+``udev`` is a userspace application for populating ``/dev`` dynamically with
+only entries for devices actually present. ``udev`` replaces the basic
functionality of devfs, while allowing persistent device naming for
devices.
@@ -218,10 +233,10 @@
----
Needs libfuse 2.4.0 or later. Absolute minimum is 2.3.0 but mount
-options 'direct_io' and 'kernel_cache' won't work.
+options ``direct_io`` and ``kernel_cache`` won't work.
Networking
-==========
+**********
General changes
---------------
@@ -243,9 +258,9 @@
upgrade pppd to at least 2.4.0.
If you are not using udev, you must have the device file /dev/ppp
-which can be made by:
+which can be made by::
-mknod /dev/ppp c 108 0
+ mknod /dev/ppp c 108 0
as root.
@@ -260,22 +275,22 @@
In ancient (2.4 and earlier) kernels, the nfs server needed to know
about any client that expected to be able to access files via NFS. This
-information would be given to the kernel by "mountd" when the client
-mounted the filesystem, or by "exportfs" at system startup. exportfs
-would take information about active clients from /var/lib/nfs/rmtab.
+information would be given to the kernel by ``mountd`` when the client
+mounted the filesystem, or by ``exportfs`` at system startup. exportfs
+would take information about active clients from ``/var/lib/nfs/rmtab``.
This approach is quite fragile as it depends on rmtab being correct
which is not always easy, particularly when trying to implement
-fail-over. Even when the system is working well, rmtab suffers from
+fail-over. Even when the system is working well, ``rmtab`` suffers from
getting lots of old entries that never get removed.
With modern kernels we have the option of having the kernel tell mountd
when it gets a request from an unknown host, and mountd can give
appropriate export information to the kernel. This removes the
-dependency on rmtab and means that the kernel only needs to know about
+dependency on ``rmtab`` and means that the kernel only needs to know about
currently active clients.
-To enable this new functionality, you need to:
+To enable this new functionality, you need to::
mount -t nfsd nfsd /proc/fs/nfsd
@@ -287,8 +302,32 @@
------
On x86 kernels the mcelog utility is needed to process and log machine check
-events when CONFIG_X86_MCE is enabled. Machine check events are errors reported
-by the CPU. Processing them is strongly encouraged.
+events when ``CONFIG_X86_MCE`` is enabled. Machine check events are errors
+reported by the CPU. Processing them is strongly encouraged.
+
+Kernel documentation
+********************
+
+Sphinx
+------
+
+The ReST markups currently used by the Documentation/ files are meant to be
+built with ``Sphinx`` version 1.2 or later. If you want to build
+PDF outputs, it is recommended to use version 1.4.6.
+
+.. note::
+
+ Please note that, for PDF and LaTeX output, you'll also need ``XeLaTeX``
+ version 3.14159265. Depending on the distribution, you may also need
+ to install a series of ``texlive`` packages that provide the minimal
+ set of functionalities required for ``XeLaTex`` to work.
+
+Other tools
+-----------
+
+In order to produce documentation from DocBook, you'll also need ``xmlto``.
+Please note, however, that we're currently migrating all documents to use
+``Sphinx``.
Getting updated software
========================
@@ -298,114 +337,149 @@
gcc
---
-o <ftp://ftp.gnu.org/gnu/gcc/>
+
+- <ftp://ftp.gnu.org/gnu/gcc/>
Make
----
-o <ftp://ftp.gnu.org/gnu/make/>
+
+- <ftp://ftp.gnu.org/gnu/make/>
Binutils
--------
-o <ftp://ftp.kernel.org/pub/linux/devel/binutils/>
+
+- <ftp://ftp.kernel.org/pub/linux/devel/binutils/>
OpenSSL
-------
-o <https://www.openssl.org/>
+
+- <https://www.openssl.org/>
System utilities
****************
Util-linux
----------
-o <ftp://ftp.kernel.org/pub/linux/utils/util-linux/>
+
+- <ftp://ftp.kernel.org/pub/linux/utils/util-linux/>
Ksymoops
--------
-o <ftp://ftp.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
+
+- <ftp://ftp.kernel.org/pub/linux/utils/kernel/ksymoops/v2.4/>
Module-Init-Tools
-----------------
-o <ftp://ftp.kernel.org/pub/linux/kernel/people/rusty/modules/>
+
+- <ftp://ftp.kernel.org/pub/linux/kernel/people/rusty/modules/>
Mkinitrd
--------
-o <https://code.launchpad.net/initrd-tools/main>
+
+- <https://code.launchpad.net/initrd-tools/main>
E2fsprogs
---------
-o <http://prdownloads.sourceforge.net/e2fsprogs/e2fsprogs-1.29.tar.gz>
+
+- <http://prdownloads.sourceforge.net/e2fsprogs/e2fsprogs-1.29.tar.gz>
JFSutils
--------
-o <http://jfs.sourceforge.net/>
+
+- <http://jfs.sourceforge.net/>
Reiserfsprogs
-------------
-o <http://www.kernel.org/pub/linux/utils/fs/reiserfs/>
+
+- <http://www.kernel.org/pub/linux/utils/fs/reiserfs/>
Xfsprogs
--------
-o <ftp://oss.sgi.com/projects/xfs/>
+
+- <ftp://oss.sgi.com/projects/xfs/>
Pcmciautils
-----------
-o <ftp://ftp.kernel.org/pub/linux/utils/kernel/pcmcia/>
+
+- <ftp://ftp.kernel.org/pub/linux/utils/kernel/pcmcia/>
Quota-tools
-----------
-o <http://sourceforge.net/projects/linuxquota/>
+
+
+- <http://sourceforge.net/projects/linuxquota/>
DocBook Stylesheets
-------------------
-o <http://sourceforge.net/projects/docbook/files/docbook-dsssl/>
+
+- <http://sourceforge.net/projects/docbook/files/docbook-dsssl/>
XMLTO XSLT Frontend
-------------------
-o <http://cyberelk.net/tim/xmlto/>
+
+- <http://cyberelk.net/tim/xmlto/>
Intel P6 microcode
------------------
-o <https://downloadcenter.intel.com/>
+
+- <https://downloadcenter.intel.com/>
udev
----
-o <http://www.freedesktop.org/software/systemd/man/udev.html>
+
+- <http://www.freedesktop.org/software/systemd/man/udev.html>
FUSE
----
-o <http://sourceforge.net/projects/fuse>
+
+- <http://sourceforge.net/projects/fuse>
mcelog
------
-o <http://www.mcelog.org/>
+
+- <http://www.mcelog.org/>
Networking
**********
PPP
---
-o <ftp://ftp.samba.org/pub/ppp/>
+
+- <ftp://ftp.samba.org/pub/ppp/>
Isdn4k-utils
------------
-o <ftp://ftp.isdn4linux.de/pub/isdn4linux/utils/>
+
+- <ftp://ftp.isdn4linux.de/pub/isdn4linux/utils/>
NFS-utils
---------
-o <http://sourceforge.net/project/showfiles.php?group_id=14>
+
+- <http://sourceforge.net/project/showfiles.php?group_id=14>
Iptables
--------
-o <http://www.iptables.org/downloads.html>
+
+- <http://www.iptables.org/downloads.html>
Ip-route2
---------
-o <https://www.kernel.org/pub/linux/utils/net/iproute2/>
+
+- <https://www.kernel.org/pub/linux/utils/net/iproute2/>
OProfile
--------
-o <http://oprofile.sf.net/download/>
+
+- <http://oprofile.sf.net/download/>
NFS-Utils
---------
-o <http://nfs.sourceforge.net/>
+
+- <http://nfs.sourceforge.net/>
+
+Kernel documentation
+********************
+
+Sphinx
+------
+
+- <http://www.sphinx-doc.org/>
diff --git a/Documentation/CodeOfConflict b/Documentation/CodeOfConflict
index 1684d0b..49a8ecc 100644
--- a/Documentation/CodeOfConflict
+++ b/Documentation/CodeOfConflict
@@ -19,7 +19,7 @@
will work to resolve the issue to the best of their ability. For more
information on who is on the Technical Advisory Board and what their
role is, please see:
- http://www.linuxfoundation.org/programs/advisory-councils/tab
+ http://www.linuxfoundation.org/projects/linux/tab
As a reviewer of code, please strive to keep things civil and focused on
the technical issues involved. We are all humans, and frustrations can
diff --git a/Documentation/CodingStyle b/Documentation/CodingStyle
index a096836..9c61c03 100644
--- a/Documentation/CodingStyle
+++ b/Documentation/CodingStyle
@@ -1,8 +1,10 @@
+.. _codingstyle:
- Linux kernel coding style
+Linux kernel coding style
+=========================
This is a short document describing the preferred coding style for the
-linux kernel. Coding style is very personal, and I won't _force_ my
+linux kernel. Coding style is very personal, and I won't **force** my
views on anybody, but this is what goes for anything that I have to be
able to maintain, and I'd prefer it for most other things too. Please
at least consider the points made here.
@@ -13,7 +15,8 @@
Anyway, here goes:
- Chapter 1: Indentation
+1) Indentation
+--------------
Tabs are 8 characters, and thus indentations are also 8 characters.
There are heretic movements that try to make indentations 4 (or even 2!)
@@ -36,8 +39,10 @@
Heed that warning.
The preferred way to ease multiple indentation levels in a switch statement is
-to align the "switch" and its subordinate "case" labels in the same column
-instead of "double-indenting" the "case" labels. E.g.:
+to align the ``switch`` and its subordinate ``case`` labels in the same column
+instead of ``double-indenting`` the ``case`` labels. E.g.:
+
+.. code-block:: c
switch (suffix) {
case 'G':
@@ -59,6 +64,8 @@
Don't put multiple statements on a single line unless you have
something to hide:
+.. code-block:: c
+
if (condition) do_this;
do_something_everytime;
@@ -71,7 +78,8 @@
Get a decent editor and don't leave whitespace at the end of lines.
- Chapter 2: Breaking long lines and strings
+2) Breaking long lines and strings
+----------------------------------
Coding style is all about readability and maintainability using commonly
available tools.
@@ -87,7 +95,8 @@
printk messages, because that breaks the ability to grep for them.
- Chapter 3: Placing Braces and Spaces
+3) Placing Braces and Spaces
+----------------------------
The other issue that always comes up in C styling is the placement of
braces. Unlike the indent size, there are few technical reasons to
@@ -95,6 +104,8 @@
shown to us by the prophets Kernighan and Ritchie, is to put the opening
brace last on the line, and put the closing brace first, thusly:
+.. code-block:: c
+
if (x is true) {
we do y
}
@@ -102,6 +113,8 @@
This applies to all non-function statement blocks (if, switch, for,
while, do). E.g.:
+.. code-block:: c
+
switch (action) {
case KOBJ_ADD:
return "add";
@@ -116,6 +129,8 @@
However, there is one special case, namely functions: they have the
opening brace at the beginning of the next line, thus:
+.. code-block:: c
+
int function(int x)
{
body of function
@@ -123,20 +138,24 @@
Heretic people all over the world have claimed that this inconsistency
is ... well ... inconsistent, but all right-thinking people know that
-(a) K&R are _right_ and (b) K&R are right. Besides, functions are
+(a) K&R are **right** and (b) K&R are right. Besides, functions are
special anyway (you can't nest them in C).
-Note that the closing brace is empty on a line of its own, _except_ in
+Note that the closing brace is empty on a line of its own, **except** in
the cases where it is followed by a continuation of the same statement,
-ie a "while" in a do-statement or an "else" in an if-statement, like
+ie a ``while`` in a do-statement or an ``else`` in an if-statement, like
this:
+.. code-block:: c
+
do {
body of do-loop
} while (condition);
and
+.. code-block:: c
+
if (x == y) {
..
} else if (x > y) {
@@ -155,11 +174,15 @@
Do not unnecessarily use braces where a single statement will do.
+.. code-block:: c
+
if (condition)
action();
and
+.. code-block:: none
+
if (condition)
do_this();
else
@@ -168,6 +191,8 @@
This does not apply if only one branch of a conditional statement is a single
statement; in the latter case use braces in both branches:
+.. code-block:: c
+
if (condition) {
do_this();
do_that();
@@ -175,57 +200,67 @@
otherwise();
}
- 3.1: Spaces
+3.1) Spaces
+***********
Linux kernel style for use of spaces depends (mostly) on
function-versus-keyword usage. Use a space after (most) keywords. The
notable exceptions are sizeof, typeof, alignof, and __attribute__, which look
somewhat like functions (and are usually used with parentheses in Linux,
-although they are not required in the language, as in: "sizeof info" after
-"struct fileinfo info;" is declared).
+although they are not required in the language, as in: ``sizeof info`` after
+``struct fileinfo info;`` is declared).
-So use a space after these keywords:
+So use a space after these keywords::
if, switch, case, for, do, while
but not with sizeof, typeof, alignof, or __attribute__. E.g.,
+.. code-block:: c
+
+
s = sizeof(struct file);
Do not add spaces around (inside) parenthesized expressions. This example is
-*bad*:
+**bad**:
+
+.. code-block:: c
+
s = sizeof( struct file );
When declaring pointer data or a function that returns a pointer type, the
-preferred use of '*' is adjacent to the data name or function name and not
+preferred use of ``*`` is adjacent to the data name or function name and not
adjacent to the type name. Examples:
+.. code-block:: c
+
+
char *linux_banner;
unsigned long long memparse(char *ptr, char **retptr);
char *match_strdup(substring_t *s);
Use one space around (on each side of) most binary and ternary operators,
-such as any of these:
+such as any of these::
= + - < > * / % | & ^ <= >= == != ? :
-but no space after unary operators:
+but no space after unary operators::
& * + - ~ ! sizeof typeof alignof __attribute__ defined
-no space before the postfix increment & decrement unary operators:
+no space before the postfix increment & decrement unary operators::
++ --
-no space after the prefix increment & decrement unary operators:
+no space after the prefix increment & decrement unary operators::
++ --
-and no space around the '.' and "->" structure member operators.
+and no space around the ``.`` and ``->`` structure member operators.
Do not leave trailing whitespace at the ends of lines. Some editors with
-"smart" indentation will insert whitespace at the beginning of new lines as
+``smart`` indentation will insert whitespace at the beginning of new lines as
appropriate, so you can start typing the next line of code right away.
However, some such editors do not remove the whitespace if you end up not
putting a line of code there, such as if you leave a blank line. As a result,
@@ -237,22 +272,23 @@
context lines.
- Chapter 4: Naming
+4) Naming
+---------
C is a Spartan language, and so should your naming be. Unlike Modula-2
and Pascal programmers, C programmers do not use cute names like
ThisVariableIsATemporaryCounter. A C programmer would call that
-variable "tmp", which is much easier to write, and not the least more
+variable ``tmp``, which is much easier to write, and not the least more
difficult to understand.
HOWEVER, while mixed-case names are frowned upon, descriptive names for
-global variables are a must. To call a global function "foo" is a
+global variables are a must. To call a global function ``foo`` is a
shooting offense.
-GLOBAL variables (to be used only if you _really_ need them) need to
+GLOBAL variables (to be used only if you **really** need them) need to
have descriptive names, as do global functions. If you have a function
that counts the number of active users, you should call that
-"count_active_users()" or similar, you should _not_ call it "cntusr()".
+``count_active_users()`` or similar, you should **not** call it ``cntusr()``.
Encoding the type of a function into the name (so-called Hungarian
notation) is brain damaged - the compiler knows the types anyway and can
@@ -260,9 +296,9 @@
makes buggy programs.
LOCAL variable names should be short, and to the point. If you have
-some random integer loop counter, it should probably be called "i".
-Calling it "loop_counter" is non-productive, if there is no chance of it
-being mis-understood. Similarly, "tmp" can be just about any type of
+some random integer loop counter, it should probably be called ``i``.
+Calling it ``loop_counter`` is non-productive, if there is no chance of it
+being mis-understood. Similarly, ``tmp`` can be just about any type of
variable that is used to hold a temporary value.
If you are afraid to mix up your local variable names, you have another
@@ -270,59 +306,69 @@
See chapter 6 (Functions).
- Chapter 5: Typedefs
+5) Typedefs
+-----------
-Please don't use things like "vps_t".
-It's a _mistake_ to use typedef for structures and pointers. When you see a
+Please don't use things like ``vps_t``.
+It's a **mistake** to use typedef for structures and pointers. When you see a
+
+.. code-block:: c
+
vps_t a;
in the source, what does it mean?
In contrast, if it says
+.. code-block:: c
+
struct virtual_container *a;
-you can actually tell what "a" is.
+you can actually tell what ``a`` is.
-Lots of people think that typedefs "help readability". Not so. They are
+Lots of people think that typedefs ``help readability``. Not so. They are
useful only for:
- (a) totally opaque objects (where the typedef is actively used to _hide_
+ (a) totally opaque objects (where the typedef is actively used to **hide**
what the object is).
- Example: "pte_t" etc. opaque objects that you can only access using
+ Example: ``pte_t`` etc. opaque objects that you can only access using
the proper accessor functions.
- NOTE! Opaqueness and "accessor functions" are not good in themselves.
- The reason we have them for things like pte_t etc. is that there
- really is absolutely _zero_ portably accessible information there.
+ .. note::
- (b) Clear integer types, where the abstraction _helps_ avoid confusion
- whether it is "int" or "long".
+ Opaqueness and ``accessor functions`` are not good in themselves.
+ The reason we have them for things like pte_t etc. is that there
+ really is absolutely **zero** portably accessible information there.
+
+ (b) Clear integer types, where the abstraction **helps** avoid confusion
+ whether it is ``int`` or ``long``.
u8/u16/u32 are perfectly fine typedefs, although they fit into
category (d) better than here.
- NOTE! Again - there needs to be a _reason_ for this. If something is
- "unsigned long", then there's no reason to do
+ .. note::
+
+ Again - there needs to be a **reason** for this. If something is
+ ``unsigned long``, then there's no reason to do
typedef unsigned long myflags_t;
but if there is a clear reason for why it under certain circumstances
- might be an "unsigned int" and under other configurations might be
- "unsigned long", then by all means go ahead and use a typedef.
+ might be an ``unsigned int`` and under other configurations might be
+ ``unsigned long``, then by all means go ahead and use a typedef.
- (c) when you use sparse to literally create a _new_ type for
+ (c) when you use sparse to literally create a **new** type for
type-checking.
(d) New types which are identical to standard C99 types, in certain
exceptional circumstances.
Although it would only take a short amount of time for the eyes and
- brain to become accustomed to the standard types like 'uint32_t',
+ brain to become accustomed to the standard types like ``uint32_t``,
some people object to their use anyway.
- Therefore, the Linux-specific 'u8/u16/u32/u64' types and their
+ Therefore, the Linux-specific ``u8/u16/u32/u64`` types and their
signed equivalents which are identical to standard types are
permitted -- although they are not mandatory in new code of your
own.
@@ -333,7 +379,7 @@
(e) Types safe for use in userspace.
In certain structures which are visible to userspace, we cannot
- require C99 types and cannot use the 'u32' form above. Thus, we
+ require C99 types and cannot use the ``u32`` form above. Thus, we
use __u32 and similar types in all structures which are shared
with userspace.
@@ -341,10 +387,11 @@
EVER use a typedef unless you can clearly match one of those rules.
In general, a pointer, or a struct that has elements that can reasonably
-be directly accessed should _never_ be a typedef.
+be directly accessed should **never** be a typedef.
- Chapter 6: Functions
+6) Functions
+------------
Functions should be short and sweet, and do just one thing. They should
fit on one or two screenfuls of text (the ISO/ANSI screen size is 80x24,
@@ -372,8 +419,10 @@
to understand what you did 2 weeks from now.
In source files, separate functions with one blank line. If the function is
-exported, the EXPORT* macro for it should follow immediately after the closing
-function brace line. E.g.:
+exported, the **EXPORT** macro for it should follow immediately after the
+closing function brace line. E.g.:
+
+.. code-block:: c
int system_is_up(void)
{
@@ -386,7 +435,8 @@
because it is a simple way to add valuable information for the reader.
- Chapter 7: Centralized exiting of functions
+7) Centralized exiting of functions
+-----------------------------------
Albeit deprecated by some people, the equivalent of the goto statement is
used frequently by compilers in form of the unconditional jump instruction.
@@ -396,18 +446,21 @@
cleanup needed then just return directly.
Choose label names which say what the goto does or why the goto exists. An
-example of a good name could be "out_buffer:" if the goto frees "buffer". Avoid
-using GW-BASIC names like "err1:" and "err2:". Also don't name them after the
-goto location like "err_kmalloc_failed:"
+example of a good name could be ``out_free_buffer:`` if the goto frees ``buffer``.
+Avoid using GW-BASIC names like ``err1:`` and ``err2:``, as you would have to
+renumber them if you ever add or remove exit paths, and they make correctness
+difficult to verify anyway.
The rationale for using gotos is:
- unconditional statements are easier to understand and follow
- nesting is reduced
- errors by not updating individual exit points when making
- modifications are prevented
+ modifications are prevented
- saves the compiler work to optimize redundant code away ;)
+.. code-block:: c
+
int fun(int a)
{
int result = 0;
@@ -425,27 +478,41 @@
goto out_buffer;
}
...
- out_buffer:
+ out_free_buffer:
kfree(buffer);
return result;
}
-A common type of bug to be aware of is "one err bugs" which look like this:
+A common type of bug to be aware of is ``one err bugs`` which look like this:
+
+.. code-block:: c
err:
kfree(foo->bar);
kfree(foo);
return ret;
-The bug in this code is that on some exit paths "foo" is NULL. Normally the
-fix for this is to split it up into two error labels "err_bar:" and "err_foo:".
+The bug in this code is that on some exit paths ``foo`` is NULL. Normally the
+fix for this is to split it up into two error labels ``err_free_bar:`` and
+``err_free_foo:``:
+
+.. code-block:: c
+
+ err_free_bar:
+ kfree(foo->bar);
+ err_free_foo:
+ kfree(foo);
+ return ret;
+
+Ideally you should simulate errors to test all exit paths.
- Chapter 8: Commenting
+8) Commenting
+-------------
Comments are good, but there is also a danger of over-commenting. NEVER
try to explain HOW your code works in a comment: it's much better to
-write the code so that the _working_ is obvious, and it's a waste of
+write the code so that the **working** is obvious, and it's a waste of
time to explain badly written code.
Generally, you want your comments to tell WHAT your code does, not HOW.
@@ -461,11 +528,10 @@
See the files Documentation/kernel-documentation.rst and scripts/kernel-doc
for details.
-Linux style for comments is the C89 "/* ... */" style.
-Don't use C99-style "// ..." comments.
-
The preferred style for long (multi-line) comments is:
+.. code-block:: c
+
/*
* This is the preferred style for multi-line
* comments in the Linux kernel source code.
@@ -478,6 +544,8 @@
For files in net/ and drivers/net/ the preferred style for long (multi-line)
comments is a little different.
+.. code-block:: c
+
/* The preferred comment style for files in net/ and drivers/net
* looks like this.
*
@@ -491,10 +559,11 @@
item, explaining its use.
- Chapter 9: You've made a mess of it
+9) You've made a mess of it
+---------------------------
That's OK, we all do. You've probably been told by your long-time Unix
-user helper that "GNU emacs" automatically formats the C sources for
+user helper that ``GNU emacs`` automatically formats the C sources for
you, and you've noticed that yes, it does do that, but the defaults it
uses are less than desirable (in fact, they are worse than random
typing - an infinite number of monkeys typing into GNU emacs would never
@@ -503,63 +572,66 @@
So, you can either get rid of GNU emacs, or change it to use saner
values. To do the latter, you can stick the following in your .emacs file:
-(defun c-lineup-arglist-tabs-only (ignored)
- "Line up argument lists by tabs, not spaces"
- (let* ((anchor (c-langelem-pos c-syntactic-element))
- (column (c-langelem-2nd-pos c-syntactic-element))
- (offset (- (1+ column) anchor))
- (steps (floor offset c-basic-offset)))
- (* (max steps 1)
- c-basic-offset)))
+.. code-block:: none
-(add-hook 'c-mode-common-hook
- (lambda ()
- ;; Add kernel style
- (c-add-style
- "linux-tabs-only"
- '("linux" (c-offsets-alist
- (arglist-cont-nonempty
- c-lineup-gcc-asm-reg
- c-lineup-arglist-tabs-only))))))
+ (defun c-lineup-arglist-tabs-only (ignored)
+ "Line up argument lists by tabs, not spaces"
+ (let* ((anchor (c-langelem-pos c-syntactic-element))
+ (column (c-langelem-2nd-pos c-syntactic-element))
+ (offset (- (1+ column) anchor))
+ (steps (floor offset c-basic-offset)))
+ (* (max steps 1)
+ c-basic-offset)))
-(add-hook 'c-mode-hook
- (lambda ()
- (let ((filename (buffer-file-name)))
- ;; Enable kernel mode for the appropriate files
- (when (and filename
- (string-match (expand-file-name "~/src/linux-trees")
- filename))
- (setq indent-tabs-mode t)
- (setq show-trailing-whitespace t)
- (c-set-style "linux-tabs-only")))))
+ (add-hook 'c-mode-common-hook
+ (lambda ()
+ ;; Add kernel style
+ (c-add-style
+ "linux-tabs-only"
+ '("linux" (c-offsets-alist
+ (arglist-cont-nonempty
+ c-lineup-gcc-asm-reg
+ c-lineup-arglist-tabs-only))))))
+
+ (add-hook 'c-mode-hook
+ (lambda ()
+ (let ((filename (buffer-file-name)))
+ ;; Enable kernel mode for the appropriate files
+ (when (and filename
+ (string-match (expand-file-name "~/src/linux-trees")
+ filename))
+ (setq indent-tabs-mode t)
+ (setq show-trailing-whitespace t)
+ (c-set-style "linux-tabs-only")))))
This will make emacs go better with the kernel coding style for C
-files below ~/src/linux-trees.
+files below ``~/src/linux-trees``.
But even if you fail in getting emacs to do sane formatting, not
-everything is lost: use "indent".
+everything is lost: use ``indent``.
Now, again, GNU indent has the same brain-dead settings that GNU emacs
has, which is why you need to give it a few command line options.
However, that's not too bad, because even the makers of GNU indent
recognize the authority of K&R (the GNU people aren't evil, they are
just severely misguided in this matter), so you just give indent the
-options "-kr -i8" (stands for "K&R, 8 character indents"), or use
-"scripts/Lindent", which indents in the latest style.
+options ``-kr -i8`` (stands for ``K&R, 8 character indents``), or use
+``scripts/Lindent``, which indents in the latest style.
-"indent" has a lot of options, and especially when it comes to comment
+``indent`` has a lot of options, and especially when it comes to comment
re-formatting you may want to take a look at the man page. But
-remember: "indent" is not a fix for bad programming.
+remember: ``indent`` is not a fix for bad programming.
- Chapter 10: Kconfig configuration files
+10) Kconfig configuration files
+-------------------------------
For all of the Kconfig* configuration files throughout the source tree,
-the indentation is somewhat different. Lines under a "config" definition
+the indentation is somewhat different. Lines under a ``config`` definition
are indented with one tab, while help text is indented an additional two
-spaces. Example:
+spaces. Example::
-config AUDIT
+ config AUDIT
bool "Auditing support"
depends on NET
help
@@ -569,9 +641,9 @@
auditing without CONFIG_AUDITSYSCALL.
Seriously dangerous features (such as write support for certain
-filesystems) should advertise this prominently in their prompt string:
+filesystems) should advertise this prominently in their prompt string::
-config ADFS_FS_RW
+ config ADFS_FS_RW
bool "ADFS write support (DANGEROUS)"
depends on ADFS_FS
...
@@ -580,41 +652,45 @@
Documentation/kbuild/kconfig-language.txt.
- Chapter 11: Data structures
+11) Data structures
+-------------------
Data structures that have visibility outside the single-threaded
environment they are created and destroyed in should always have
reference counts. In the kernel, garbage collection doesn't exist (and
outside the kernel garbage collection is slow and inefficient), which
-means that you absolutely _have_ to reference count all your uses.
+means that you absolutely **have** to reference count all your uses.
Reference counting means that you can avoid locking, and allows multiple
users to have access to the data structure in parallel - and not having
to worry about the structure suddenly going away from under them just
because they slept or did something else for a while.
-Note that locking is _not_ a replacement for reference counting.
+Note that locking is **not** a replacement for reference counting.
Locking is used to keep data structures coherent, while reference
counting is a memory management technique. Usually both are needed, and
they are not to be confused with each other.
Many data structures can indeed have two levels of reference counting,
-when there are users of different "classes". The subclass count counts
+when there are users of different ``classes``. The subclass count counts
the number of subclass users, and decrements the global count just once
when the subclass count goes to zero.
-Examples of this kind of "multi-level-reference-counting" can be found in
-memory management ("struct mm_struct": mm_users and mm_count), and in
-filesystem code ("struct super_block": s_count and s_active).
+Examples of this kind of ``multi-level-reference-counting`` can be found in
+memory management (``struct mm_struct``: mm_users and mm_count), and in
+filesystem code (``struct super_block``: s_count and s_active).
Remember: if another thread can find your data structure, and you don't
have a reference count on it, you almost certainly have a bug.
- Chapter 12: Macros, Enums and RTL
+12) Macros, Enums and RTL
+-------------------------
Names of macros defining constants and labels in enums are capitalized.
+.. code-block:: c
+
#define CONSTANT 0x12345
Enums are preferred when defining several related constants.
@@ -626,7 +702,9 @@
Macros with multiple statements should be enclosed in a do - while block:
- #define macrofun(a, b, c) \
+.. code-block:: c
+
+ #define macrofun(a, b, c) \
do { \
if (a == 5) \
do_this(b, c); \
@@ -636,17 +714,21 @@
1) macros that affect control flow:
+.. code-block:: c
+
#define FOO(x) \
do { \
if (blah(x) < 0) \
return -EBUGGERED; \
} while (0)
-is a _very_ bad idea. It looks like a function call but exits the "calling"
+is a **very** bad idea. It looks like a function call but exits the ``calling``
function; don't break the internal parsers of those who will read the code.
2) macros that depend on having a local variable with a magic name:
+.. code-block:: c
+
#define FOO(val) bar(index, val)
might look like a good thing, but it's confusing as hell when one reads the
@@ -659,18 +741,22 @@
must enclose the expression in parentheses. Beware of similar issues with
macros using parameters.
+.. code-block:: c
+
#define CONSTANT 0x4000
#define CONSTEXP (CONSTANT | 3)
5) namespace collisions when defining local variables in macros resembling
functions:
-#define FOO(x) \
-({ \
- typeof(x) ret; \
- ret = calc_ret(x); \
- (ret); \
-})
+.. code-block:: c
+
+ #define FOO(x) \
+ ({ \
+ typeof(x) ret; \
+ ret = calc_ret(x); \
+ (ret); \
+ })
ret is a common name for a local variable - __foo_ret is less likely
to collide with an existing variable.
@@ -679,11 +765,12 @@
covers RTL which is used frequently with assembly language in the kernel.
- Chapter 13: Printing kernel messages
+13) Printing kernel messages
+----------------------------
Kernel developers like to be seen as literate. Do mind the spelling
of kernel messages to make a good impression. Do not use crippled
-words like "dont"; use "do not" or "don't" instead. Make the messages
+words like ``dont``; use ``do not`` or ``don't`` instead. Make the messages
concise, clear, and unambiguous.
Kernel messages do not have to be terminated with a period.
@@ -713,7 +800,8 @@
used.
- Chapter 14: Allocating memory
+14) Allocating memory
+---------------------
The kernel provides the following general purpose memory allocators:
kmalloc(), kzalloc(), kmalloc_array(), kcalloc(), vmalloc(), and
@@ -722,6 +810,8 @@
The preferred form for passing a size of a struct is the following:
+.. code-block:: c
+
p = kmalloc(sizeof(*p), ...);
The alternative form where struct name is spelled out hurts readability and
@@ -734,20 +824,25 @@
The preferred form for allocating an array is the following:
+.. code-block:: c
+
p = kmalloc_array(n, sizeof(...), ...);
The preferred form for allocating a zeroed array is the following:
+.. code-block:: c
+
p = kcalloc(n, sizeof(...), ...);
Both forms check for overflow on the allocation size n * sizeof(...),
and return NULL if that occurred.
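
A minimal sketch of the above, assuming a hypothetical driver-private
structure and element count (both invented for illustration):

.. code-block:: c

	struct foo *array;
	size_t n = 16;			/* hypothetical element count */

	array = kmalloc_array(n, sizeof(*array), GFP_KERNEL);
	if (!array)
		return -ENOMEM;		/* overflow or allocation failure */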
- Chapter 15: The inline disease
+15) The inline disease
+----------------------
There appears to be a common misperception that gcc has a magic "make me
-faster" speedup option called "inline". While the use of inlines can be
+faster" speedup option called ``inline``. While the use of inlines can be
appropriate (for example as a means of replacing macros, see Chapter 12), it
very often is not. Abundant use of the inline keyword leads to a much bigger
kernel, which in turn slows the system as a whole down, due to a bigger
@@ -771,26 +866,27 @@
something it would have done anyway.
- Chapter 16: Function return values and names
+16) Function return values and names
+------------------------------------
Functions can return values of many different kinds, and one of the
most common is a value indicating whether the function succeeded or
failed. Such a value can be represented as an error-code integer
-(-Exxx = failure, 0 = success) or a "succeeded" boolean (0 = failure,
+(-Exxx = failure, 0 = success) or a ``succeeded`` boolean (0 = failure,
non-zero = success).
Mixing up these two sorts of representations is a fertile source of
difficult-to-find bugs. If the C language included a strong distinction
between integers and booleans then the compiler would find these mistakes
for us... but it doesn't. To help prevent such bugs, always follow this
-convention:
+convention::
If the name of a function is an action or an imperative command,
the function should return an error-code integer. If the name
is a predicate, the function should return a "succeeded" boolean.
-For example, "add work" is a command, and the add_work() function returns 0
-for success or -EBUSY for failure. In the same way, "PCI device present" is
+For example, ``add work`` is a command, and the add_work() function returns 0
+for success or -EBUSY for failure. In the same way, ``PCI device present`` is
a predicate, and the pci_dev_present() function returns 1 if it succeeds in
finding a matching device or 0 if it doesn't.
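
As a brief sketch of this convention (the structure and function names below
are invented for illustration):

.. code-block:: c

	struct foo_device {			/* hypothetical device state */
		bool busy;
	};

	/* command: returns 0 on success or a negative error code */
	static int foo_start_transfer(struct foo_device *foo)
	{
		if (foo->busy)
			return -EBUSY;
		foo->busy = true;
		return 0;
	}

	/* predicate: returns a "succeeded" boolean */
	static bool foo_transfer_done(const struct foo_device *foo)
	{
		return !foo->busy;
	}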
@@ -805,17 +901,22 @@
NULL or the ERR_PTR mechanism to report failure.
- Chapter 17: Don't re-invent the kernel macros
+17) Don't re-invent the kernel macros
+-------------------------------------
The header file include/linux/kernel.h contains a number of macros that
you should use, rather than explicitly coding some variant of them yourself.
For example, if you need to calculate the length of an array, take advantage
of the macro
+.. code-block:: c
+
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
Similarly, if you need to calculate the size of some structure member, use
+.. code-block:: c
+
#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
There are also min() and max() macros that do strict type checking if you
@@ -823,16 +924,21 @@
defined that you shouldn't reproduce in your code.
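
A short sketch of these helpers in use (the array name and contents are
invented for illustration):

.. code-block:: c

	int samples[16] = { 0 };	/* hypothetical data, filled in elsewhere */
	int i, highest = 0;

	for (i = 0; i < ARRAY_SIZE(samples); i++)
		highest = max(highest, samples[i]);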
- Chapter 18: Editor modelines and other cruft
+18) Editor modelines and other cruft
+------------------------------------
Some editors can interpret configuration information embedded in source files,
indicated with special markers. For example, emacs interprets lines marked
like this:
+.. code-block:: c
+
-*- mode: c -*-
Or like this:
+.. code-block:: c
+
/*
Local Variables:
compile-command: "gcc -DMAGIC_DEBUG_FLAG foo.c"
@@ -841,6 +947,8 @@
Vim interprets markers that look like this:
+.. code-block:: c
+
/* vim:set sw=8 noet */
Do not include any of these in source files. People have their own personal
@@ -850,7 +958,8 @@
work correctly.
- Chapter 19: Inline assembly
+19) Inline assembly
+-------------------
In architecture-specific code, you may need to use inline assembly to interface
with CPU or platform functionality. Don't hesitate to do so when necessary.
@@ -863,7 +972,7 @@
Large, non-trivial assembly functions should go in .S files, with corresponding
C prototypes defined in C header files. The C prototypes for assembly
-functions should use "asmlinkage".
+functions should use ``asmlinkage``.
You may need to mark your asm statement as volatile, to prevent GCC from
removing it if GCC doesn't notice any side effects. You don't always need to
@@ -874,12 +983,15 @@
string, and end each string except the last with \n\t to properly indent the
next instruction in the assembly output:
+.. code-block:: c
+
asm ("magic %reg1, #42\n\t"
"more_magic %reg2, %reg3"
: /* outputs */ : /* inputs */ : /* clobbers */);
- Chapter 20: Conditional Compilation
+20) Conditional Compilation
+---------------------------
Wherever possible, don't use preprocessor conditionals (#if, #ifdef) in .c
files; doing so makes code harder to read and logic harder to follow. Instead,
@@ -903,6 +1015,8 @@
Within code, where possible, use the IS_ENABLED macro to convert a Kconfig
symbol into a C boolean expression, and use it in a normal C conditional:
+.. code-block:: c
+
if (IS_ENABLED(CONFIG_SOMETHING)) {
...
}
@@ -918,12 +1032,15 @@
place a comment after the #endif on the same line, noting the conditional
expression used. For instance:
+.. code-block:: c
+
#ifdef CONFIG_SOMETHING
...
#endif /* CONFIG_SOMETHING */
- Appendix I: References
+Appendix I) References
+----------------------
The C Programming Language, Second Edition
by Brian W. Kernighan and Dennis M. Ritchie.
@@ -943,4 +1060,3 @@
Kernel CodingStyle, by greg@kroah.com at OLS 2002:
http://www.kroah.com/linux/talks/ols_2002_kernel_codingstyle_talk/html/
-
diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
index 781024e..979228b 100644
--- a/Documentation/DMA-API-HOWTO.txt
+++ b/Documentation/DMA-API-HOWTO.txt
@@ -699,7 +699,7 @@
dma_addr_t mapping;
mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
- if (dma_mapping_error(cp->dev, dma_handle)) {
+ if (dma_mapping_error(cp->dev, mapping)) {
/*
* reduce current DMA mapping usage,
* delay and try again later or
@@ -931,10 +931,8 @@
1) Struct scatterlist requirements.
- Don't invent the architecture specific struct scatterlist; just use
- <asm-generic/scatterlist.h>. You need to enable
- CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
- (including software IOMMU).
+ You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
+ supports IOMMUs (including software IOMMU).
2) ARCH_DMA_MINALIGN
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 1d26eeb6..6b20128 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -277,14 +277,26 @@
recommended that you never use these unless you really know what the
cache width is.
+dma_addr_t
+dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
+ enum dma_data_direction dir, unsigned long attrs)
+
+void
+dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
+ enum dma_data_direction dir, unsigned long attrs)
+
+API for mapping and unmapping MMIO resources. All the notes and
+warnings for the other mapping APIs apply here. The API should only be
+used to map device MMIO resources, mapping of RAM is not permitted.
+
int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-In some circumstances dma_map_single() and dma_map_page() will fail to create
-a mapping. A driver can check for these errors by testing the returned
-DMA address with dma_mapping_error(). A non-zero return value means the mapping
-could not be created and the driver should take appropriate action (e.g.
-reduce current DMA mapping usage or delay and try again later).
+In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
+will fail to create a mapping. A driver can check for these errors by testing
+the returned DMA address with dma_mapping_error(). A non-zero return value
+means the mapping could not be created and the driver should take appropriate
+action (e.g. reduce current DMA mapping usage or delay and try again later).
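
A minimal sketch of mapping an MMIO region (the device pointer, physical
address and size below are placeholders):

	dma_addr_t dma;

	dma = dma_map_resource(dev, phys_addr, size, DMA_TO_DEVICE, 0);
	if (dma_mapping_error(dev, dma)) {
		/*
		 * The mapping could not be created: reduce DMA usage,
		 * retry later or give up.
		 */
		return -ENOMEM;
	}
	...
	dma_unmap_resource(dev, dma, size, DMA_TO_DEVICE, 0);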
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
diff --git a/Documentation/DMA-attributes.txt b/Documentation/DMA-attributes.txt
index 2d455a5..98bf7ac 100644
--- a/Documentation/DMA-attributes.txt
+++ b/Documentation/DMA-attributes.txt
@@ -126,3 +126,20 @@
NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
though ARM64 patches will likely be posted soon.
+
+DMA_ATTR_NO_WARN
+----------------
+
+This tells the DMA-mapping subsystem to suppress allocation failure reports
+(similarly to __GFP_NOWARN).
+
+On some architectures allocation failures are reported with error messages
+to the system logs. Although this can help to identify and debug problems,
+drivers which handle failures (e.g. by retrying later) have no problem with
+them. Depending on how the retry mechanism is implemented, such drivers can
+even flood the system logs with error messages that are not a problem at all.
+
+So, this provides a way for drivers to avoid those error messages on calls
+where allocation failures are not a problem, and shouldn't bother the logs.
+
+NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.
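
For example, a driver that retries on its own might allocate like this
(a minimal sketch; the surrounding device pointer and size are placeholders):

	void *buf;
	dma_addr_t dma;

	/* failure is handled by retrying, so don't report it in the logs */
	buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL, DMA_ATTR_NO_WARN);
	if (!buf)
		return -EAGAIN;		/* caller retries later */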
diff --git a/Documentation/DocBook/80211.tmpl b/Documentation/DocBook/80211.tmpl
deleted file mode 100644
index 800fe7a..0000000
--- a/Documentation/DocBook/80211.tmpl
+++ /dev/null
@@ -1,584 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE set PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
- "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
-<set>
- <setinfo>
- <title>The 802.11 subsystems – for kernel developers</title>
- <subtitle>
- Explaining wireless 802.11 networking in the Linux kernel
- </subtitle>
-
- <copyright>
- <year>2007-2009</year>
- <holder>Johannes Berg</holder>
- </copyright>
-
- <authorgroup>
- <author>
- <firstname>Johannes</firstname>
- <surname>Berg</surname>
- <affiliation>
- <address><email>johannes@sipsolutions.net</email></address>
- </affiliation>
- </author>
- </authorgroup>
-
- <legalnotice>
- <para>
- This documentation is free software; you can redistribute
- it and/or modify it under the terms of the GNU General Public
- License version 2 as published by the Free Software Foundation.
- </para>
- <para>
- This documentation is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied
- warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- See the GNU General Public License for more details.
- </para>
- <para>
- You should have received a copy of the GNU General Public
- License along with this documentation; if not, write to the Free
- Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
- MA 02111-1307 USA
- </para>
- <para>
- For more details see the file COPYING in the source
- distribution of Linux.
- </para>
- </legalnotice>
-
- <abstract>
- <para>
- These books attempt to give a description of the
- various subsystems that play a role in 802.11 wireless
- networking in Linux. Since these books are for kernel
- developers they attempts to document the structures
- and functions used in the kernel as well as giving a
- higher-level overview.
- </para>
- <para>
- The reader is expected to be familiar with the 802.11
- standard as published by the IEEE in 802.11-2007 (or
- possibly later versions). References to this standard
- will be given as "802.11-2007 8.1.5".
- </para>
- </abstract>
- </setinfo>
- <book id="cfg80211-developers-guide">
- <bookinfo>
- <title>The cfg80211 subsystem</title>
-
- <abstract>
-!Pinclude/net/cfg80211.h Introduction
- </abstract>
- </bookinfo>
- <chapter>
- <title>Device registration</title>
-!Pinclude/net/cfg80211.h Device registration
-!Finclude/net/cfg80211.h ieee80211_channel_flags
-!Finclude/net/cfg80211.h ieee80211_channel
-!Finclude/net/cfg80211.h ieee80211_rate_flags
-!Finclude/net/cfg80211.h ieee80211_rate
-!Finclude/net/cfg80211.h ieee80211_sta_ht_cap
-!Finclude/net/cfg80211.h ieee80211_supported_band
-!Finclude/net/cfg80211.h cfg80211_signal_type
-!Finclude/net/cfg80211.h wiphy_params_flags
-!Finclude/net/cfg80211.h wiphy_flags
-!Finclude/net/cfg80211.h wiphy
-!Finclude/net/cfg80211.h wireless_dev
-!Finclude/net/cfg80211.h wiphy_new
-!Finclude/net/cfg80211.h wiphy_register
-!Finclude/net/cfg80211.h wiphy_unregister
-!Finclude/net/cfg80211.h wiphy_free
-
-!Finclude/net/cfg80211.h wiphy_name
-!Finclude/net/cfg80211.h wiphy_dev
-!Finclude/net/cfg80211.h wiphy_priv
-!Finclude/net/cfg80211.h priv_to_wiphy
-!Finclude/net/cfg80211.h set_wiphy_dev
-!Finclude/net/cfg80211.h wdev_priv
-!Finclude/net/cfg80211.h ieee80211_iface_limit
-!Finclude/net/cfg80211.h ieee80211_iface_combination
-!Finclude/net/cfg80211.h cfg80211_check_combinations
- </chapter>
- <chapter>
- <title>Actions and configuration</title>
-!Pinclude/net/cfg80211.h Actions and configuration
-!Finclude/net/cfg80211.h cfg80211_ops
-!Finclude/net/cfg80211.h vif_params
-!Finclude/net/cfg80211.h key_params
-!Finclude/net/cfg80211.h survey_info_flags
-!Finclude/net/cfg80211.h survey_info
-!Finclude/net/cfg80211.h cfg80211_beacon_data
-!Finclude/net/cfg80211.h cfg80211_ap_settings
-!Finclude/net/cfg80211.h station_parameters
-!Finclude/net/cfg80211.h rate_info_flags
-!Finclude/net/cfg80211.h rate_info
-!Finclude/net/cfg80211.h station_info
-!Finclude/net/cfg80211.h monitor_flags
-!Finclude/net/cfg80211.h mpath_info_flags
-!Finclude/net/cfg80211.h mpath_info
-!Finclude/net/cfg80211.h bss_parameters
-!Finclude/net/cfg80211.h ieee80211_txq_params
-!Finclude/net/cfg80211.h cfg80211_crypto_settings
-!Finclude/net/cfg80211.h cfg80211_auth_request
-!Finclude/net/cfg80211.h cfg80211_assoc_request
-!Finclude/net/cfg80211.h cfg80211_deauth_request
-!Finclude/net/cfg80211.h cfg80211_disassoc_request
-!Finclude/net/cfg80211.h cfg80211_ibss_params
-!Finclude/net/cfg80211.h cfg80211_connect_params
-!Finclude/net/cfg80211.h cfg80211_pmksa
-!Finclude/net/cfg80211.h cfg80211_rx_mlme_mgmt
-!Finclude/net/cfg80211.h cfg80211_auth_timeout
-!Finclude/net/cfg80211.h cfg80211_rx_assoc_resp
-!Finclude/net/cfg80211.h cfg80211_assoc_timeout
-!Finclude/net/cfg80211.h cfg80211_tx_mlme_mgmt
-!Finclude/net/cfg80211.h cfg80211_ibss_joined
-!Finclude/net/cfg80211.h cfg80211_connect_result
-!Finclude/net/cfg80211.h cfg80211_connect_bss
-!Finclude/net/cfg80211.h cfg80211_connect_timeout
-!Finclude/net/cfg80211.h cfg80211_roamed
-!Finclude/net/cfg80211.h cfg80211_disconnected
-!Finclude/net/cfg80211.h cfg80211_ready_on_channel
-!Finclude/net/cfg80211.h cfg80211_remain_on_channel_expired
-!Finclude/net/cfg80211.h cfg80211_new_sta
-!Finclude/net/cfg80211.h cfg80211_rx_mgmt
-!Finclude/net/cfg80211.h cfg80211_mgmt_tx_status
-!Finclude/net/cfg80211.h cfg80211_cqm_rssi_notify
-!Finclude/net/cfg80211.h cfg80211_cqm_pktloss_notify
-!Finclude/net/cfg80211.h cfg80211_michael_mic_failure
- </chapter>
- <chapter>
- <title>Scanning and BSS list handling</title>
-!Pinclude/net/cfg80211.h Scanning and BSS list handling
-!Finclude/net/cfg80211.h cfg80211_ssid
-!Finclude/net/cfg80211.h cfg80211_scan_request
-!Finclude/net/cfg80211.h cfg80211_scan_done
-!Finclude/net/cfg80211.h cfg80211_bss
-!Finclude/net/cfg80211.h cfg80211_inform_bss
-!Finclude/net/cfg80211.h cfg80211_inform_bss_frame_data
-!Finclude/net/cfg80211.h cfg80211_inform_bss_data
-!Finclude/net/cfg80211.h cfg80211_unlink_bss
-!Finclude/net/cfg80211.h cfg80211_find_ie
-!Finclude/net/cfg80211.h ieee80211_bss_get_ie
- </chapter>
- <chapter>
- <title>Utility functions</title>
-!Pinclude/net/cfg80211.h Utility functions
-!Finclude/net/cfg80211.h ieee80211_channel_to_frequency
-!Finclude/net/cfg80211.h ieee80211_frequency_to_channel
-!Finclude/net/cfg80211.h ieee80211_get_channel
-!Finclude/net/cfg80211.h ieee80211_get_response_rate
-!Finclude/net/cfg80211.h ieee80211_hdrlen
-!Finclude/net/cfg80211.h ieee80211_get_hdrlen_from_skb
-!Finclude/net/cfg80211.h ieee80211_radiotap_iterator
- </chapter>
- <chapter>
- <title>Data path helpers</title>
-!Pinclude/net/cfg80211.h Data path helpers
-!Finclude/net/cfg80211.h ieee80211_data_to_8023
-!Finclude/net/cfg80211.h ieee80211_data_from_8023
-!Finclude/net/cfg80211.h ieee80211_amsdu_to_8023s
-!Finclude/net/cfg80211.h cfg80211_classify8021d
- </chapter>
- <chapter>
- <title>Regulatory enforcement infrastructure</title>
-!Pinclude/net/cfg80211.h Regulatory enforcement infrastructure
-!Finclude/net/cfg80211.h regulatory_hint
-!Finclude/net/cfg80211.h wiphy_apply_custom_regulatory
-!Finclude/net/cfg80211.h freq_reg_info
- </chapter>
- <chapter>
- <title>RFkill integration</title>
-!Pinclude/net/cfg80211.h RFkill integration
-!Finclude/net/cfg80211.h wiphy_rfkill_set_hw_state
-!Finclude/net/cfg80211.h wiphy_rfkill_start_polling
-!Finclude/net/cfg80211.h wiphy_rfkill_stop_polling
- </chapter>
- <chapter>
- <title>Test mode</title>
-!Pinclude/net/cfg80211.h Test mode
-!Finclude/net/cfg80211.h cfg80211_testmode_alloc_reply_skb
-!Finclude/net/cfg80211.h cfg80211_testmode_reply
-!Finclude/net/cfg80211.h cfg80211_testmode_alloc_event_skb
-!Finclude/net/cfg80211.h cfg80211_testmode_event
- </chapter>
- </book>
- <book id="mac80211-developers-guide">
- <bookinfo>
- <title>The mac80211 subsystem</title>
- <abstract>
-!Pinclude/net/mac80211.h Introduction
-!Pinclude/net/mac80211.h Warning
- </abstract>
- </bookinfo>
-
- <toc></toc>
-
- <!--
- Generally, this document shall be ordered by increasing complexity.
- It is important to note that readers should be able to read only
- the first few sections to get a working driver and only advanced
- usage should require reading the full document.
- -->
-
- <part>
- <title>The basic mac80211 driver interface</title>
- <partintro>
- <para>
- You should read and understand the information contained
- within this part of the book while implementing a driver.
- In some chapters, advanced usage is noted, that may be
- skipped at first.
- </para>
- <para>
- This part of the book only covers station and monitor mode
- functionality, additional information required to implement
- the other modes is covered in the second part of the book.
- </para>
- </partintro>
-
- <chapter id="basics">
- <title>Basic hardware handling</title>
- <para>TBD</para>
- <para>
- This chapter shall contain information on getting a hw
- struct allocated and registered with mac80211.
- </para>
- <para>
- Since it is required to allocate rates/modes before registering
- a hw struct, this chapter shall also contain information on setting
- up the rate/mode structs.
- </para>
- <para>
- Additionally, some discussion about the callbacks and
- the general programming model should be in here, including
- the definition of ieee80211_ops which will be referred to
- a lot.
- </para>
- <para>
- Finally, a discussion of hardware capabilities should be done
- with references to other parts of the book.
- </para>
- <!-- intentionally multiple !F lines to get proper order -->
-!Finclude/net/mac80211.h ieee80211_hw
-!Finclude/net/mac80211.h ieee80211_hw_flags
-!Finclude/net/mac80211.h SET_IEEE80211_DEV
-!Finclude/net/mac80211.h SET_IEEE80211_PERM_ADDR
-!Finclude/net/mac80211.h ieee80211_ops
-!Finclude/net/mac80211.h ieee80211_alloc_hw
-!Finclude/net/mac80211.h ieee80211_register_hw
-!Finclude/net/mac80211.h ieee80211_unregister_hw
-!Finclude/net/mac80211.h ieee80211_free_hw
- </chapter>
-
- <chapter id="phy-handling">
- <title>PHY configuration</title>
- <para>TBD</para>
- <para>
- This chapter should describe PHY handling including
- start/stop callbacks and the various structures used.
- </para>
-!Finclude/net/mac80211.h ieee80211_conf
-!Finclude/net/mac80211.h ieee80211_conf_flags
- </chapter>
-
- <chapter id="iface-handling">
- <title>Virtual interfaces</title>
- <para>TBD</para>
- <para>
- This chapter should describe virtual interface basics
- that are relevant to the driver (VLANs, MGMT etc are not.)
- It should explain the use of the add_iface/remove_iface
- callbacks as well as the interface configuration callbacks.
- </para>
- <para>Things related to AP mode should be discussed there.</para>
- <para>
- Things related to supporting multiple interfaces should be
- in the appropriate chapter, a BIG FAT note should be here about
- this though and the recommendation to allow only a single
- interface in STA mode at first!
- </para>
-!Finclude/net/mac80211.h ieee80211_vif
- </chapter>
-
- <chapter id="rx-tx">
- <title>Receive and transmit processing</title>
- <sect1>
- <title>what should be here</title>
- <para>TBD</para>
- <para>
- This should describe the receive and transmit
- paths in mac80211/the drivers as well as
- transmit status handling.
- </para>
- </sect1>
- <sect1>
- <title>Frame format</title>
-!Pinclude/net/mac80211.h Frame format
- </sect1>
- <sect1>
- <title>Packet alignment</title>
-!Pnet/mac80211/rx.c Packet alignment
- </sect1>
- <sect1>
- <title>Calling into mac80211 from interrupts</title>
-!Pinclude/net/mac80211.h Calling mac80211 from interrupts
- </sect1>
- <sect1>
- <title>functions/definitions</title>
-!Finclude/net/mac80211.h ieee80211_rx_status
-!Finclude/net/mac80211.h mac80211_rx_flags
-!Finclude/net/mac80211.h mac80211_tx_info_flags
-!Finclude/net/mac80211.h mac80211_tx_control_flags
-!Finclude/net/mac80211.h mac80211_rate_control_flags
-!Finclude/net/mac80211.h ieee80211_tx_rate
-!Finclude/net/mac80211.h ieee80211_tx_info
-!Finclude/net/mac80211.h ieee80211_tx_info_clear_status
-!Finclude/net/mac80211.h ieee80211_rx
-!Finclude/net/mac80211.h ieee80211_rx_ni
-!Finclude/net/mac80211.h ieee80211_rx_irqsafe
-!Finclude/net/mac80211.h ieee80211_tx_status
-!Finclude/net/mac80211.h ieee80211_tx_status_ni
-!Finclude/net/mac80211.h ieee80211_tx_status_irqsafe
-!Finclude/net/mac80211.h ieee80211_rts_get
-!Finclude/net/mac80211.h ieee80211_rts_duration
-!Finclude/net/mac80211.h ieee80211_ctstoself_get
-!Finclude/net/mac80211.h ieee80211_ctstoself_duration
-!Finclude/net/mac80211.h ieee80211_generic_frame_duration
-!Finclude/net/mac80211.h ieee80211_wake_queue
-!Finclude/net/mac80211.h ieee80211_stop_queue
-!Finclude/net/mac80211.h ieee80211_wake_queues
-!Finclude/net/mac80211.h ieee80211_stop_queues
-!Finclude/net/mac80211.h ieee80211_queue_stopped
- </sect1>
- </chapter>
-
- <chapter id="filters">
- <title>Frame filtering</title>
-!Pinclude/net/mac80211.h Frame filtering
-!Finclude/net/mac80211.h ieee80211_filter_flags
- </chapter>
-
- <chapter id="workqueue">
- <title>The mac80211 workqueue</title>
-!Pinclude/net/mac80211.h mac80211 workqueue
-!Finclude/net/mac80211.h ieee80211_queue_work
-!Finclude/net/mac80211.h ieee80211_queue_delayed_work
- </chapter>
- </part>
-
- <part id="advanced">
- <title>Advanced driver interface</title>
- <partintro>
- <para>
- Information contained within this part of the book is
- of interest only for advanced interaction of mac80211
- with drivers to exploit more hardware capabilities and
- improve performance.
- </para>
- </partintro>
-
- <chapter id="led-support">
- <title>LED support</title>
- <para>
- Mac80211 supports various ways of blinking LEDs. Wherever possible,
- device LEDs should be exposed as LED class devices and hooked up to
- the appropriate trigger, which will then be triggered appropriately
- by mac80211.
- </para>
-!Finclude/net/mac80211.h ieee80211_get_tx_led_name
-!Finclude/net/mac80211.h ieee80211_get_rx_led_name
-!Finclude/net/mac80211.h ieee80211_get_assoc_led_name
-!Finclude/net/mac80211.h ieee80211_get_radio_led_name
-!Finclude/net/mac80211.h ieee80211_tpt_blink
-!Finclude/net/mac80211.h ieee80211_tpt_led_trigger_flags
-!Finclude/net/mac80211.h ieee80211_create_tpt_led_trigger
- </chapter>
-
- <chapter id="hardware-crypto-offload">
- <title>Hardware crypto acceleration</title>
-!Pinclude/net/mac80211.h Hardware crypto acceleration
- <!-- intentionally multiple !F lines to get proper order -->
-!Finclude/net/mac80211.h set_key_cmd
-!Finclude/net/mac80211.h ieee80211_key_conf
-!Finclude/net/mac80211.h ieee80211_key_flags
-!Finclude/net/mac80211.h ieee80211_get_tkip_p1k
-!Finclude/net/mac80211.h ieee80211_get_tkip_p1k_iv
-!Finclude/net/mac80211.h ieee80211_get_tkip_p2k
- </chapter>
-
- <chapter id="powersave">
- <title>Powersave support</title>
-!Pinclude/net/mac80211.h Powersave support
- </chapter>
-
- <chapter id="beacon-filter">
- <title>Beacon filter support</title>
-!Pinclude/net/mac80211.h Beacon filter support
-!Finclude/net/mac80211.h ieee80211_beacon_loss
- </chapter>
-
- <chapter id="qos">
- <title>Multiple queues and QoS support</title>
- <para>TBD</para>
-!Finclude/net/mac80211.h ieee80211_tx_queue_params
- </chapter>
-
- <chapter id="AP">
- <title>Access point mode support</title>
- <para>TBD</para>
- <para>Some parts of the if_conf should be discussed here instead</para>
- <para>
- Insert notes about VLAN interfaces with hw crypto here or
- in the hw crypto chapter.
- </para>
- <section id="ps-client">
- <title>support for powersaving clients</title>
-!Pinclude/net/mac80211.h AP support for powersaving clients
-!Finclude/net/mac80211.h ieee80211_get_buffered_bc
-!Finclude/net/mac80211.h ieee80211_beacon_get
-!Finclude/net/mac80211.h ieee80211_sta_eosp
-!Finclude/net/mac80211.h ieee80211_frame_release_type
-!Finclude/net/mac80211.h ieee80211_sta_ps_transition
-!Finclude/net/mac80211.h ieee80211_sta_ps_transition_ni
-!Finclude/net/mac80211.h ieee80211_sta_set_buffered
-!Finclude/net/mac80211.h ieee80211_sta_block_awake
- </section>
- </chapter>
-
- <chapter id="multi-iface">
- <title>Supporting multiple virtual interfaces</title>
- <para>TBD</para>
- <para>
- Note: WDS with identical MAC address should almost always be OK
- </para>
- <para>
- Insert notes about having multiple virtual interfaces with
- different MAC addresses here, note which configurations are
- supported by mac80211, add notes about supporting hw crypto
- with it.
- </para>
-!Finclude/net/mac80211.h ieee80211_iterate_active_interfaces
-!Finclude/net/mac80211.h ieee80211_iterate_active_interfaces_atomic
- </chapter>
-
- <chapter id="station-handling">
- <title>Station handling</title>
- <para>TODO</para>
-!Finclude/net/mac80211.h ieee80211_sta
-!Finclude/net/mac80211.h sta_notify_cmd
-!Finclude/net/mac80211.h ieee80211_find_sta
-!Finclude/net/mac80211.h ieee80211_find_sta_by_ifaddr
- </chapter>
-
- <chapter id="hardware-scan-offload">
- <title>Hardware scan offload</title>
- <para>TBD</para>
-!Finclude/net/mac80211.h ieee80211_scan_completed
- </chapter>
-
- <chapter id="aggregation">
- <title>Aggregation</title>
- <sect1>
- <title>TX A-MPDU aggregation</title>
-!Pnet/mac80211/agg-tx.c TX A-MPDU aggregation
-!Cnet/mac80211/agg-tx.c
- </sect1>
- <sect1>
- <title>RX A-MPDU aggregation</title>
-!Pnet/mac80211/agg-rx.c RX A-MPDU aggregation
-!Cnet/mac80211/agg-rx.c
-!Finclude/net/mac80211.h ieee80211_ampdu_mlme_action
- </sect1>
- </chapter>
-
- <chapter id="smps">
- <title>Spatial Multiplexing Powersave (SMPS)</title>
-!Pinclude/net/mac80211.h Spatial multiplexing power save
-!Finclude/net/mac80211.h ieee80211_request_smps
-!Finclude/net/mac80211.h ieee80211_smps_mode
- </chapter>
- </part>
-
- <part id="rate-control">
- <title>Rate control interface</title>
- <partintro>
- <para>TBD</para>
- <para>
- This part of the book describes the rate control algorithm
- interface and how it relates to mac80211 and drivers.
- </para>
- </partintro>
- <chapter id="ratecontrol-api">
- <title>Rate Control API</title>
- <para>TBD</para>
-!Finclude/net/mac80211.h ieee80211_start_tx_ba_session
-!Finclude/net/mac80211.h ieee80211_start_tx_ba_cb_irqsafe
-!Finclude/net/mac80211.h ieee80211_stop_tx_ba_session
-!Finclude/net/mac80211.h ieee80211_stop_tx_ba_cb_irqsafe
-!Finclude/net/mac80211.h ieee80211_rate_control_changed
-!Finclude/net/mac80211.h ieee80211_tx_rate_control
-!Finclude/net/mac80211.h rate_control_send_low
- </chapter>
- </part>
-
- <part id="internal">
- <title>Internals</title>
- <partintro>
- <para>TBD</para>
- <para>
- This part of the book describes mac80211 internals.
- </para>
- </partintro>
-
- <chapter id="key-handling">
- <title>Key handling</title>
- <sect1>
- <title>Key handling basics</title>
-!Pnet/mac80211/key.c Key handling basics
- </sect1>
- <sect1>
- <title>MORE TBD</title>
- <para>TBD</para>
- </sect1>
- </chapter>
-
- <chapter id="rx-processing">
- <title>Receive processing</title>
- <para>TBD</para>
- </chapter>
-
- <chapter id="tx-processing">
- <title>Transmit processing</title>
- <para>TBD</para>
- </chapter>
-
- <chapter id="sta-info">
- <title>Station info handling</title>
- <sect1>
- <title>Programming information</title>
-!Fnet/mac80211/sta_info.h sta_info
-!Fnet/mac80211/sta_info.h ieee80211_sta_info_flags
- </sect1>
- <sect1>
- <title>STA information lifetime rules</title>
-!Pnet/mac80211/sta_info.c STA information lifetime rules
- </sect1>
- </chapter>
-
- <chapter id="aggregation-internals">
- <title>Aggregation</title>
-!Fnet/mac80211/sta_info.h sta_ampdu_mlme
-!Fnet/mac80211/sta_info.h tid_ampdu_tx
-!Fnet/mac80211/sta_info.h tid_ampdu_rx
- </chapter>
-
- <chapter id="synchronisation">
- <title>Synchronisation</title>
- <para>TBD</para>
- <para>Locking, lots of RCU</para>
- </chapter>
- </part>
- </book>
-</set>
diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile
index 64460a8..fdf8232 100644
--- a/Documentation/DocBook/Makefile
+++ b/Documentation/DocBook/Makefile
@@ -6,13 +6,13 @@
# To add a new book the only step required is to add the book to the
# list of DOCBOOKS.
-DOCBOOKS := z8530book.xml device-drivers.xml \
+DOCBOOKS := z8530book.xml \
kernel-hacking.xml kernel-locking.xml deviceiobook.xml \
writing_usb_driver.xml networking.xml \
kernel-api.xml filesystems.xml lsm.xml usb.xml kgdb.xml \
gadget.xml libata.xml mtdnand.xml librs.xml rapidio.xml \
genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \
- 80211.xml debugobjects.xml sh.xml regulator.xml \
+ debugobjects.xml sh.xml regulator.xml \
alsa-driver-api.xml writing-an-alsa-driver.xml \
tracepoint.xml w1.xml \
writing_musb_glue_layer.xml crypto-API.xml iio.xml
@@ -22,8 +22,14 @@
# Skip DocBook build if the user explicitly requested no DOCBOOKS.
.DEFAULT:
@echo " SKIP DocBook $@ target (DOCBOOKS=\"\" specified)."
-
else
+ifneq ($(SPHINXDIRS),)
+
+# Skip DocBook build if the user explicitly requested a sphinx dir
+.DEFAULT:
+ @echo " SKIP DocBook $@ target (SPHINXDIRS specified)."
+else
+
###
# The build process is as follows (targets):
@@ -66,6 +72,7 @@
# no-op for the DocBook toolchain
epubdocs:
+latexdocs:
###
#External programs used
@@ -221,6 +228,7 @@
echo "</programlisting>") > $@
endif # DOCBOOKS=""
+endif # SPHINXDIRS=...
###
# Help targets as used by the top-level makefile
diff --git a/Documentation/DocBook/crypto-API.tmpl b/Documentation/DocBook/crypto-API.tmpl
index fb2a152..088b79c 100644
--- a/Documentation/DocBook/crypto-API.tmpl
+++ b/Documentation/DocBook/crypto-API.tmpl
@@ -797,7 +797,8 @@
include/linux/crypto.h and their definition can be seen below.
The former function registers a single transformation, while
the latter works on an array of transformation descriptions.
- The latter is useful when registering transformations in bulk.
+ The latter is useful when registering transformations in bulk,
+ for example when a driver implements multiple transformations.
</para>
<programlisting>
@@ -822,18 +823,31 @@
</para>
<para>
- The bulk registration / unregistration functions require
- that struct crypto_alg is an array of count size. These
- functions simply loop over that array and register /
- unregister each individual algorithm. If an error occurs,
- the loop is terminated at the offending algorithm definition.
- That means, the algorithms prior to the offending algorithm
- are successfully registered. Note, the caller has no way of
- knowing which cipher implementations have successfully
- registered. If this is important to know, the caller should
- loop through the different implementations using the single
- instance *_alg functions for each individual implementation.
+ The bulk registration/unregistration functions
+ register/unregister each transformation in the given array of
+ length count. They handle errors as follows:
</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ crypto_register_algs() succeeds if and only if it
+ successfully registers all the given transformations. If an
+ error occurs partway through, then it rolls back successful
+ registrations before returning the error code. Note that if
+ a driver needs to handle registration errors for individual
+ transformations, then it will need to use the non-bulk
+ function crypto_register_alg() instead.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ crypto_unregister_algs() tries to unregister all the given
+ transformations, continuing on error. It logs errors and
+ always returns zero.
+ </para>
+ </listitem>
+ </itemizedlist>
+
</sect1>
<sect1><title>Single-Block Symmetric Ciphers [CIPHER]</title>
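The updated crypto-API text above boils down to an all-or-nothing crypto_register_algs() and a best-effort crypto_unregister_algs(). A minimal sketch of how a driver typically uses the pair, assuming a hypothetical my_algs[] array whose struct crypto_alg fields are filled in elsewhere:

    /*
     * Sketch only: my_algs[] is hypothetical; its struct crypto_alg
     * fields (.cra_name, .cra_driver_name, .cra_flags, ...) are omitted.
     */
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/crypto.h>

    static struct crypto_alg my_algs[2];   /* descriptions filled in elsewhere */

    static int __init my_crypto_init(void)
    {
            /*
             * All or nothing: on error, crypto_register_algs() has already
             * rolled back whatever it managed to register, so the error
             * code can simply be propagated.
             */
            return crypto_register_algs(my_algs, ARRAY_SIZE(my_algs));
    }

    static void __exit my_crypto_exit(void)
    {
            /* Best effort: tries every entry, logging any failures. */
            crypto_unregister_algs(my_algs, ARRAY_SIZE(my_algs));
    }

    module_init(my_crypto_init);
    module_exit(my_crypto_exit);
    MODULE_LICENSE("GPL");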
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
deleted file mode 100644
index 9c10030..0000000
--- a/Documentation/DocBook/device-drivers.tmpl
+++ /dev/null
@@ -1,521 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
- "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
-
-<book id="LinuxDriversAPI">
- <bookinfo>
- <title>Linux Device Drivers</title>
-
- <legalnotice>
- <para>
- This documentation is free software; you can redistribute
- it and/or modify it under the terms of the GNU General Public
- License as published by the Free Software Foundation; either
- version 2 of the License, or (at your option) any later
- version.
- </para>
-
- <para>
- This program is distributed in the hope that it will be
- useful, but WITHOUT ANY WARRANTY; without even the implied
- warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- See the GNU General Public License for more details.
- </para>
-
- <para>
- You should have received a copy of the GNU General Public
- License along with this program; if not, write to the Free
- Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
- MA 02111-1307 USA
- </para>
-
- <para>
- For more details see the file COPYING in the source
- distribution of Linux.
- </para>
- </legalnotice>
- </bookinfo>
-
-<toc></toc>
-
- <chapter id="Basics">
- <title>Driver Basics</title>
- <sect1><title>Driver Entry and Exit points</title>
-!Iinclude/linux/init.h
- </sect1>
-
- <sect1><title>Atomic and pointer manipulation</title>
-!Iarch/x86/include/asm/atomic.h
- </sect1>
-
- <sect1><title>Delaying, scheduling, and timer routines</title>
-!Iinclude/linux/sched.h
-!Ekernel/sched/core.c
-!Ikernel/sched/cpupri.c
-!Ikernel/sched/fair.c
-!Iinclude/linux/completion.h
-!Ekernel/time/timer.c
- </sect1>
- <sect1><title>Wait queues and Wake events</title>
-!Iinclude/linux/wait.h
-!Ekernel/sched/wait.c
- </sect1>
- <sect1><title>High-resolution timers</title>
-!Iinclude/linux/ktime.h
-!Iinclude/linux/hrtimer.h
-!Ekernel/time/hrtimer.c
- </sect1>
- <sect1><title>Workqueues and Kevents</title>
-!Iinclude/linux/workqueue.h
-!Ekernel/workqueue.c
- </sect1>
- <sect1><title>Internal Functions</title>
-!Ikernel/exit.c
-!Ikernel/signal.c
-!Iinclude/linux/kthread.h
-!Ekernel/kthread.c
- </sect1>
-
- <sect1><title>Kernel objects manipulation</title>
-<!--
-X!Iinclude/linux/kobject.h
--->
-!Elib/kobject.c
- </sect1>
-
- <sect1><title>Kernel utility functions</title>
-!Iinclude/linux/kernel.h
-!Ekernel/printk/printk.c
-!Ekernel/panic.c
-!Ekernel/sys.c
-!Ekernel/rcu/srcu.c
-!Ekernel/rcu/tree.c
-!Ekernel/rcu/tree_plugin.h
-!Ekernel/rcu/update.c
- </sect1>
-
- <sect1><title>Device Resource Management</title>
-!Edrivers/base/devres.c
- </sect1>
-
- </chapter>
-
- <chapter id="devdrivers">
- <title>Device drivers infrastructure</title>
- <sect1><title>The Basic Device Driver-Model Structures </title>
-!Iinclude/linux/device.h
- </sect1>
- <sect1><title>Device Drivers Base</title>
-!Idrivers/base/init.c
-!Edrivers/base/driver.c
-!Edrivers/base/core.c
-!Edrivers/base/syscore.c
-!Edrivers/base/class.c
-!Idrivers/base/node.c
-!Edrivers/base/firmware_class.c
-!Edrivers/base/transport_class.c
-<!-- Cannot be included, because
- attribute_container_add_class_device_adapter
- and attribute_container_classdev_to_container
- exceed allowed 44 characters maximum
-X!Edrivers/base/attribute_container.c
--->
-!Edrivers/base/dd.c
-<!--
-X!Edrivers/base/interface.c
--->
-!Iinclude/linux/platform_device.h
-!Edrivers/base/platform.c
-!Edrivers/base/bus.c
- </sect1>
- <sect1>
- <title>Buffer Sharing and Synchronization</title>
- <para>
- The dma-buf subsystem provides the framework for sharing buffers
- for hardware (DMA) access across multiple device drivers and
- subsystems, and for synchronizing asynchronous hardware access.
- </para>
- <para>
- This is used, for example, by drm "prime" multi-GPU support, but
- is of course not limited to GPU use cases.
- </para>
- <para>
- The three main components of this are: (1) dma-buf, representing
- a sg_table and exposed to userspace as a file descriptor to allow
- passing between devices, (2) fence, which provides a mechanism
- to signal when one device as finished access, and (3) reservation,
- which manages the shared or exclusive fence(s) associated with
- the buffer.
- </para>
- <sect2><title>dma-buf</title>
-!Edrivers/dma-buf/dma-buf.c
-!Iinclude/linux/dma-buf.h
- </sect2>
- <sect2><title>reservation</title>
-!Pdrivers/dma-buf/reservation.c Reservation Object Overview
-!Edrivers/dma-buf/reservation.c
-!Iinclude/linux/reservation.h
- </sect2>
- <sect2><title>fence</title>
-!Edrivers/dma-buf/fence.c
-!Iinclude/linux/fence.h
-!Edrivers/dma-buf/seqno-fence.c
-!Iinclude/linux/seqno-fence.h
-!Edrivers/dma-buf/fence-array.c
-!Iinclude/linux/fence-array.h
-!Edrivers/dma-buf/reservation.c
-!Iinclude/linux/reservation.h
-!Edrivers/dma-buf/sync_file.c
-!Iinclude/linux/sync_file.h
- </sect2>
- </sect1>
- <sect1><title>Device Drivers DMA Management</title>
-!Edrivers/base/dma-coherent.c
-!Edrivers/base/dma-mapping.c
- </sect1>
- <sect1><title>Device Drivers Power Management</title>
-!Edrivers/base/power/main.c
- </sect1>
- <sect1><title>Device Drivers ACPI Support</title>
-<!-- Internal functions only
-X!Edrivers/acpi/sleep/main.c
-X!Edrivers/acpi/sleep/wakeup.c
-X!Edrivers/acpi/motherboard.c
-X!Edrivers/acpi/bus.c
--->
-!Edrivers/acpi/scan.c
-!Idrivers/acpi/scan.c
-<!-- No correct structured comments
-X!Edrivers/acpi/pci_bind.c
--->
- </sect1>
- <sect1><title>Device drivers PnP support</title>
-!Idrivers/pnp/core.c
-<!-- No correct structured comments
-X!Edrivers/pnp/system.c
- -->
-!Edrivers/pnp/card.c
-!Idrivers/pnp/driver.c
-!Edrivers/pnp/manager.c
-!Edrivers/pnp/support.c
- </sect1>
- <sect1><title>Userspace IO devices</title>
-!Edrivers/uio/uio.c
-!Iinclude/linux/uio_driver.h
- </sect1>
- </chapter>
-
- <chapter id="parportdev">
- <title>Parallel Port Devices</title>
-!Iinclude/linux/parport.h
-!Edrivers/parport/ieee1284.c
-!Edrivers/parport/share.c
-!Idrivers/parport/daisy.c
- </chapter>
-
- <chapter id="message_devices">
- <title>Message-based devices</title>
- <sect1><title>Fusion message devices</title>
-!Edrivers/message/fusion/mptbase.c
-!Idrivers/message/fusion/mptbase.c
-!Edrivers/message/fusion/mptscsih.c
-!Idrivers/message/fusion/mptscsih.c
-!Idrivers/message/fusion/mptctl.c
-!Idrivers/message/fusion/mptspi.c
-!Idrivers/message/fusion/mptfc.c
-!Idrivers/message/fusion/mptlan.c
- </sect1>
- </chapter>
-
- <chapter id="snddev">
- <title>Sound Devices</title>
-!Iinclude/sound/core.h
-!Esound/sound_core.c
-!Iinclude/sound/pcm.h
-!Esound/core/pcm.c
-!Esound/core/device.c
-!Esound/core/info.c
-!Esound/core/rawmidi.c
-!Esound/core/sound.c
-!Esound/core/memory.c
-!Esound/core/pcm_memory.c
-!Esound/core/init.c
-!Esound/core/isadma.c
-!Esound/core/control.c
-!Esound/core/pcm_lib.c
-!Esound/core/hwdep.c
-!Esound/core/pcm_native.c
-!Esound/core/memalloc.c
-<!-- FIXME: Removed for now since no structured comments in source
-X!Isound/sound_firmware.c
--->
- </chapter>
-
-
- <chapter id="uart16x50">
- <title>16x50 UART Driver</title>
-!Edrivers/tty/serial/serial_core.c
-!Edrivers/tty/serial/8250/8250_core.c
- </chapter>
-
- <chapter id="fbdev">
- <title>Frame Buffer Library</title>
-
- <para>
- The frame buffer drivers depend heavily on four data structures.
- These structures are declared in include/linux/fb.h. They are
- fb_info, fb_var_screeninfo, fb_fix_screeninfo and fb_monospecs.
- The last three can be made available to and from userland.
- </para>
-
- <para>
- fb_info defines the current state of a particular video card.
- Inside fb_info, there exists a fb_ops structure which is a
- collection of needed functions to make fbdev and fbcon work.
- fb_info is only visible to the kernel.
- </para>
-
- <para>
- fb_var_screeninfo is used to describe the features of a video card
- that are user defined. With fb_var_screeninfo, things such as
- depth and the resolution may be defined.
- </para>
-
- <para>
- The next structure is fb_fix_screeninfo. This defines the
- properties of a card that are created when a mode is set and can't
- be changed otherwise. A good example of this is the start of the
- frame buffer memory. This "locks" the address of the frame buffer
- memory, so that it cannot be changed or moved.
- </para>
-
- <para>
- The last structure is fb_monospecs. In the old API, there was
- little importance for fb_monospecs. This allowed for forbidden things
- such as setting a mode of 800x600 on a fix frequency monitor. With
- the new API, fb_monospecs prevents such things, and if used
- correctly, can prevent a monitor from being cooked. fb_monospecs
- will not be useful until kernels 2.5.x.
- </para>
-
- <sect1><title>Frame Buffer Memory</title>
-!Edrivers/video/fbdev/core/fbmem.c
- </sect1>
-<!--
- <sect1><title>Frame Buffer Console</title>
-X!Edrivers/video/console/fbcon.c
- </sect1>
--->
- <sect1><title>Frame Buffer Colormap</title>
-!Edrivers/video/fbdev/core/fbcmap.c
- </sect1>
-<!-- FIXME:
- drivers/video/fbgen.c has no docs, which stuffs up the sgml. Comment
- out until somebody adds docs. KAO
- <sect1><title>Frame Buffer Generic Functions</title>
-X!Idrivers/video/fbgen.c
- </sect1>
-KAO -->
- <sect1><title>Frame Buffer Video Mode Database</title>
-!Idrivers/video/fbdev/core/modedb.c
-!Edrivers/video/fbdev/core/modedb.c
- </sect1>
- <sect1><title>Frame Buffer Macintosh Video Mode Database</title>
-!Edrivers/video/fbdev/macmodes.c
- </sect1>
- <sect1><title>Frame Buffer Fonts</title>
- <para>
- Refer to the file lib/fonts/fonts.c for more information.
- </para>
-<!-- FIXME: Removed for now since no structured comments in source
-X!Ilib/fonts/fonts.c
--->
- </sect1>
- </chapter>
-
- <chapter id="input_subsystem">
- <title>Input Subsystem</title>
- <sect1><title>Input core</title>
-!Iinclude/linux/input.h
-!Edrivers/input/input.c
-!Edrivers/input/ff-core.c
-!Edrivers/input/ff-memless.c
- </sect1>
- <sect1><title>Multitouch Library</title>
-!Iinclude/linux/input/mt.h
-!Edrivers/input/input-mt.c
- </sect1>
- <sect1><title>Polled input devices</title>
-!Iinclude/linux/input-polldev.h
-!Edrivers/input/input-polldev.c
- </sect1>
- <sect1><title>Matrix keyboards/keypads</title>
-!Iinclude/linux/input/matrix_keypad.h
- </sect1>
- <sect1><title>Sparse keymap support</title>
-!Iinclude/linux/input/sparse-keymap.h
-!Edrivers/input/sparse-keymap.c
- </sect1>
- </chapter>
-
- <chapter id="spi">
- <title>Serial Peripheral Interface (SPI)</title>
- <para>
- SPI is the "Serial Peripheral Interface", widely used with
- embedded systems because it is a simple and efficient
- interface: basically a multiplexed shift register.
- Its three signal wires hold a clock (SCK, often in the range
- of 1-20 MHz), a "Master Out, Slave In" (MOSI) data line, and
- a "Master In, Slave Out" (MISO) data line.
- SPI is a full duplex protocol; for each bit shifted out the
- MOSI line (one per clock) another is shifted in on the MISO line.
- Those bits are assembled into words of various sizes on the
- way to and from system memory.
- An additional chipselect line is usually active-low (nCS);
- four signals are normally used for each peripheral, plus
- sometimes an interrupt.
- </para>
- <para>
- The SPI bus facilities listed here provide a generalized
- interface to declare SPI busses and devices, manage them
- according to the standard Linux driver model, and perform
- input/output operations.
- At this time, only "master" side interfaces are supported,
- where Linux talks to SPI peripherals and does not implement
- such a peripheral itself.
- (Interfaces to support implementing SPI slaves would
- necessarily look different.)
- </para>
- <para>
- The programming interface is structured around two kinds of driver,
- and two kinds of device.
- A "Controller Driver" abstracts the controller hardware, which may
- be as simple as a set of GPIO pins or as complex as a pair of FIFOs
- connected to dual DMA engines on the other side of the SPI shift
- register (maximizing throughput). Such drivers bridge between
- whatever bus they sit on (often the platform bus) and SPI, and
- expose the SPI side of their device as a
- <structname>struct spi_master</structname>.
- SPI devices are children of that master, represented as a
- <structname>struct spi_device</structname> and manufactured from
- <structname>struct spi_board_info</structname> descriptors which
- are usually provided by board-specific initialization code.
- A <structname>struct spi_driver</structname> is called a
- "Protocol Driver", and is bound to a spi_device using normal
- driver model calls.
- </para>
- <para>
- The I/O model is a set of queued messages. Protocol drivers
- submit one or more <structname>struct spi_message</structname>
- objects, which are processed and completed asynchronously.
- (There are synchronous wrappers, however.) Messages are
- built from one or more <structname>struct spi_transfer</structname>
- objects, each of which wraps a full duplex SPI transfer.
- A variety of protocol tweaking options are needed, because
- different chips adopt very different policies for how they
- use the bits transferred with SPI.
- </para>
-!Iinclude/linux/spi/spi.h
-!Fdrivers/spi/spi.c spi_register_board_info
-!Edrivers/spi/spi.c
- </chapter>
-
- <chapter id="i2c">
- <title>I<superscript>2</superscript>C and SMBus Subsystem</title>
-
- <para>
- I<superscript>2</superscript>C (or without fancy typography, "I2C")
- is an acronym for the "Inter-IC" bus, a simple bus protocol which is
- widely used where low data rate communications suffice.
- Since it's also a licensed trademark, some vendors use another
- name (such as "Two-Wire Interface", TWI) for the same bus.
- I2C only needs two signals (SCL for clock, SDA for data), conserving
- board real estate and minimizing signal quality issues.
- Most I2C devices use seven bit addresses, and bus speeds of up
- to 400 kHz; there's a high speed extension (3.4 MHz) that's not yet
- found wide use.
- I2C is a multi-master bus; open drain signaling is used to
- arbitrate between masters, as well as to handshake and to
- synchronize clocks from slower clients.
- </para>
-
- <para>
- The Linux I2C programming interfaces support only the master
- side of bus interactions, not the slave side.
- The programming interface is structured around two kinds of driver,
- and two kinds of device.
- An I2C "Adapter Driver" abstracts the controller hardware; it binds
- to a physical device (perhaps a PCI device or platform_device) and
- exposes a <structname>struct i2c_adapter</structname> representing
- each I2C bus segment it manages.
- On each I2C bus segment will be I2C devices represented by a
- <structname>struct i2c_client</structname>. Those devices will
- be bound to a <structname>struct i2c_driver</structname>,
- which should follow the standard Linux driver model.
- (At this writing, a legacy model is more widely used.)
- There are functions to perform various I2C protocol operations; at
- this writing all such functions are usable only from task context.
- </para>
-
- <para>
- The System Management Bus (SMBus) is a sibling protocol. Most SMBus
- systems are also I2C conformant. The electrical constraints are
- tighter for SMBus, and it standardizes particular protocol messages
- and idioms. Controllers that support I2C can also support most
- SMBus operations, but SMBus controllers don't support all the protocol
- options that an I2C controller will.
- There are functions to perform various SMBus protocol operations,
- either using I2C primitives or by issuing SMBus commands to
- i2c_adapter devices which don't support those I2C operations.
- </para>
-
-!Iinclude/linux/i2c.h
-!Fdrivers/i2c/i2c-boardinfo.c i2c_register_board_info
-!Edrivers/i2c/i2c-core.c
- </chapter>
-
- <chapter id="hsi">
- <title>High Speed Synchronous Serial Interface (HSI)</title>
-
- <para>
- High Speed Synchronous Serial Interface (HSI) is a
- serial interface mainly used for connecting application
- engines (APE) with cellular modem engines (CMT) in cellular
- handsets.
-
- HSI provides multiplexing for up to 16 logical channels,
- low-latency and full duplex communication.
- </para>
-
-!Iinclude/linux/hsi/hsi.h
-!Edrivers/hsi/hsi_core.c
- </chapter>
-
- <chapter id="pwm">
- <title>Pulse-Width Modulation (PWM)</title>
- <para>
- Pulse-width modulation is a modulation technique primarily used to
- control power supplied to electrical devices.
- </para>
- <para>
- The PWM framework provides an abstraction for providers and consumers
- of PWM signals. A controller that provides one or more PWM signals is
- registered as <structname>struct pwm_chip</structname>. Providers are
- expected to embed this structure in a driver-specific structure. This
- structure contains fields that describe a particular chip.
- </para>
- <para>
- A chip exposes one or more PWM signal sources, each of which exposed
- as a <structname>struct pwm_device</structname>. Operations can be
- performed on PWM devices to control the period, duty cycle, polarity
- and active state of the signal.
- </para>
- <para>
- Note that PWM devices are exclusive resources: they can always only be
- used by one consumer at a time.
- </para>
-!Iinclude/linux/pwm.h
-!Edrivers/pwm/core.c
- </chapter>
-
-</book>
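The SPI chapter removed above describes a queued-message I/O model with synchronous wrappers; for simple protocol drivers that reduces to a handful of calls. A minimal sketch, where the device name "myspi" and the one-byte command/response exchange are purely illustrative:

    /* Sketch only: "myspi" and the one-byte exchange are hypothetical. */
    #include <linux/module.h>
    #include <linux/spi/spi.h>

    static int myspi_probe(struct spi_device *spi)
    {
            u8 cmd = 0x0f;  /* hypothetical "read ID" command */
            u8 id;
            int ret;

            /*
             * spi_write_then_read() is one of the synchronous wrappers
             * mentioned in the deleted text; it builds the spi_message /
             * spi_transfer pair and bounces the data through DMA-safe
             * buffers internally.
             */
            ret = spi_write_then_read(spi, &cmd, 1, &id, 1);
            if (ret)
                    return ret;

            dev_info(&spi->dev, "device ID: %#x\n", id);
            return 0;
    }

    static struct spi_driver myspi_driver = {
            .driver = { .name = "myspi" },
            .probe  = myspi_probe,
    };
    module_spi_driver(myspi_driver);
    MODULE_LICENSE("GPL");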
diff --git a/Documentation/DocBook/kernel-hacking.tmpl b/Documentation/DocBook/kernel-hacking.tmpl
index 589b40c..2a27227 100644
--- a/Documentation/DocBook/kernel-hacking.tmpl
+++ b/Documentation/DocBook/kernel-hacking.tmpl
@@ -483,7 +483,7 @@
<function>get_user()</function>
/
<function>put_user()</function>
- <filename class="headerfile">include/asm/uaccess.h</filename>
+ <filename class="headerfile">include/linux/uaccess.h</filename>
</title>
<para>
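The hunk above only changes where the user-space accessors are documented to live: get_user()/put_user() are now picked up from linux/uaccess.h rather than asm/uaccess.h. A minimal, hypothetical sketch of the usual pattern (double_user_value() and its argument are illustrative only):

    /* Sketch only: double_user_value() is a hypothetical helper. */
    #include <linux/uaccess.h>   /* was <asm/uaccess.h> before this change */
    #include <linux/errno.h>

    static int double_user_value(int __user *uptr)
    {
            int val;

            if (get_user(val, uptr))        /* fails on a bad user pointer */
                    return -EFAULT;

            val *= 2;

            if (put_user(val, uptr))
                    return -EFAULT;

            return 0;
    }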
diff --git a/Documentation/HOWTO b/Documentation/HOWTO
index 1f345da..5f04234 100644
--- a/Documentation/HOWTO
+++ b/Documentation/HOWTO
@@ -1,5 +1,5 @@
HOWTO do Linux kernel development
----------------------------------
+=================================
This is the be-all, end-all document on this topic. It contains
instructions on how to become a Linux kernel developer and how to learn
@@ -28,6 +28,7 @@
you plan to do low-level development for that architecture. Though they
are not a good substitute for a solid C education and/or years of
experience, the following books are good for, if anything, reference:
+
- "The C Programming Language" by Kernighan and Ritchie [Prentice Hall]
- "Practical C Programming" by Steve Oualline [O'Reilly]
- "C: A Reference Manual" by Harbison and Steele [Prentice Hall]
@@ -64,7 +65,8 @@
their statements on legal matters.
For common questions and answers about the GPL, please see:
- http://www.gnu.org/licenses/gpl-faq.html
+
+ https://www.gnu.org/licenses/gpl-faq.html
Documentation
@@ -82,96 +84,118 @@
Here is a list of files that are in the kernel source tree that are
required reading:
+
README
This file gives a short background on the Linux kernel and describes
what is necessary to do to configure and build the kernel. People
who are new to the kernel should start here.
- Documentation/Changes
+ :ref:`Documentation/Changes <changes>`
This file gives a list of the minimum levels of various software
packages that are necessary to build and run the kernel
successfully.
- Documentation/CodingStyle
+ :ref:`Documentation/CodingStyle <codingstyle>`
This describes the Linux kernel coding style, and some of the
rationale behind it. All new code is expected to follow the
guidelines in this document. Most maintainers will only accept
patches if these rules are followed, and many people will only
review code if it is in the proper style.
- Documentation/SubmittingPatches
- Documentation/SubmittingDrivers
+ :ref:`Documentation/SubmittingPatches <submittingpatches>` and :ref:`Documentation/SubmittingDrivers <submittingdrivers>`
These files describe in explicit detail how to successfully create
and send a patch, including (but not limited to):
+
- Email contents
- Email format
- Who to send it to
+
Following these rules will not guarantee success (as all patches are
subject to scrutiny for content and style), but not following them
will almost always prevent it.
Other excellent descriptions of how to create patches properly are:
+
"The Perfect Patch"
- http://www.ozlabs.org/~akpm/stuff/tpp.txt
+ https://www.ozlabs.org/~akpm/stuff/tpp.txt
+
"Linux kernel patch submission format"
http://linux.yyz.us/patch-format.html
- Documentation/stable_api_nonsense.txt
+ :ref:`Documentation/stable_api_nonsense.txt <stable_api_nonsense>`
This file describes the rationale behind the conscious decision to
not have a stable API within the kernel, including things like:
+
- Subsystem shim-layers (for compatibility?)
- Driver portability between Operating Systems.
- Mitigating rapid change within the kernel source tree (or
preventing rapid change)
+
This document is crucial for understanding the Linux development
philosophy and is very important for people moving to Linux from
development on other Operating Systems.
- Documentation/SecurityBugs
+ :ref:`Documentation/SecurityBugs <securitybugs>`
If you feel you have found a security problem in the Linux kernel,
please follow the steps in this document to help notify the kernel
developers, and help solve the issue.
- Documentation/ManagementStyle
+ :ref:`Documentation/ManagementStyle <managementstyle>`
This document describes how Linux kernel maintainers operate and the
shared ethos behind their methodologies. This is important reading
for anyone new to kernel development (or anyone simply curious about
it), as it resolves a lot of common misconceptions and confusion
about the unique behavior of kernel maintainers.
- Documentation/stable_kernel_rules.txt
+ :ref:`Documentation/stable_kernel_rules.txt <stable_kernel_rules>`
This file describes the rules on how the stable kernel releases
happen, and what to do if you want to get a change into one of these
releases.
- Documentation/kernel-docs.txt
+ :ref:`Documentation/kernel-docs.txt <kernel_docs>`
A list of external documentation that pertains to kernel
development. Please consult this list if you do not find what you
are looking for within the in-kernel documentation.
- Documentation/applying-patches.txt
+ :ref:`Documentation/applying-patches.txt <applying_patches>`
A good introduction describing exactly what a patch is and how to
apply it to the different development branches of the kernel.
The kernel also has a large number of documents that can be
-automatically generated from the source code itself. This includes a
+automatically generated from the source code itself or from
+ReStructuredText markups (ReST), like this one. This includes a
full description of the in-kernel API, and rules on how to handle
-locking properly. The documents will be created in the
-Documentation/DocBook/ directory and can be generated as PDF,
-Postscript, HTML, and man pages by running:
+locking properly.
+
+All such documents can be generated as PDF or HTML by running::
+
make pdfdocs
- make psdocs
make htmldocs
- make mandocs
+
respectively from the main kernel source directory.
+The documents that use ReST markup will be generated at Documentation/output.
+They can also be generated in LaTeX and ePub formats with::
+
+ make latexdocs
+ make epubdocs
+
+Currently, there are some documents written in DocBook that are in
+the process of conversion to ReST. Such documents will be created in the
+Documentation/DocBook/ directory and can also be generated as
+Postscript or man pages by running::
+
+ make psdocs
+ make mandocs
Becoming A Kernel Developer
---------------------------
If you do not know anything about Linux kernel development, you should
look at the Linux KernelNewbies project:
- http://kernelnewbies.org
+
+ https://kernelnewbies.org
+
It consists of a helpful mailing list where you can ask almost any type
of basic kernel development question (make sure to search the archives
first, before asking something that has already been answered in the
@@ -187,7 +211,9 @@
If you do not know where you want to start, but you want to look for
some task to start doing to join into the kernel development community,
go to the Linux Kernel Janitor's project:
- http://kernelnewbies.org/KernelJanitors
+
+ https://kernelnewbies.org/KernelJanitors
+
It is a great place to start. It describes a list of relatively simple
problems that need to be cleaned up and fixed within the Linux kernel
source tree. Working with the developers in charge of this project, you
@@ -199,7 +225,8 @@
tree, but need some help getting it in the proper form, the
kernel-mentors project was created to help you out with this. It is a
mailing list, and can be found at:
- http://selenic.com/mailman/listinfo/kernel-mentors
+
+ https://selenic.com/mailman/listinfo/kernel-mentors
Before making any actual modifications to the Linux kernel code, it is
imperative to understand how the code in question works. For this
@@ -209,6 +236,7 @@
Cross-Reference project, which is able to present source code in a
self-referential, indexed webpage format. An excellent up-to-date
repository of the kernel code may be found at:
+
http://lxr.free-electrons.com/
@@ -218,6 +246,7 @@
Linux kernel development process currently consists of a few different
main kernel "branches" and lots of different subsystem-specific kernel
branches. These different branches are:
+
- main 4.x kernel tree
- 4.x.y -stable kernel tree
- 4.x -git kernel patches
@@ -227,14 +256,15 @@
4.x kernel tree
-----------------
4.x kernels are maintained by Linus Torvalds, and can be found on
-kernel.org in the pub/linux/kernel/v4.x/ directory. Its development
+https://kernel.org in the pub/linux/kernel/v4.x/ directory. Its development
process is as follows:
+
- As soon as a new kernel is released a two weeks window is open,
during this period of time maintainers can submit big diffs to
Linus, usually the patches that have already been included in the
-next kernel for a few weeks. The preferred way to submit big changes
is using git (the kernel's source management tool, more information
- can be found at http://git-scm.com/) but plain patches are also just
+ can be found at https://git-scm.com/) but plain patches are also just
fine.
- After two weeks a -rc1 kernel is released it is now possible to push
only patches that do not include new features that could affect the
@@ -253,9 +283,10 @@
It is worth mentioning what Andrew Morton wrote on the linux-kernel
mailing list about kernel releases:
- "Nobody knows when a kernel will be released, because it's
+
+ *"Nobody knows when a kernel will be released, because it's
released according to perceived bug status, not according to a
- preconceived timeline."
+ preconceived timeline."*
4.x.y -stable kernel tree
-------------------------
@@ -301,7 +332,7 @@
Most of these repositories are git trees, but there are also other SCMs
in use, or patch queues being published as quilt series. Addresses of
these subsystem repositories are listed in the MAINTAINERS file. Many
-of them can be browsed at http://git.kernel.org/.
+of them can be browsed at https://git.kernel.org/.
Before a proposed patch is committed to such a subsystem tree, it is
subject to review which primarily happens on mailing lists (see the
@@ -310,7 +341,7 @@
interface which shows patch postings, any comments on a patch or
revisions to it, and maintainers can mark patches as under review,
accepted, or rejected. Most of these patchwork sites are listed at
-http://patchwork.kernel.org/.
+https://patchwork.kernel.org/.
4.x -next kernel tree for integration tests
-------------------------------------------
@@ -318,7 +349,8 @@
tree, they need to be integration-tested. For this purpose, a special
testing repository exists into which virtually all subsystem trees are
pulled on an almost daily basis:
- http://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
+
+ https://git.kernel.org/?p=linux/kernel/git/next/linux-next.git
This way, the -next kernel gives a summary outlook onto what will be
expected to go into the mainline kernel at the next merge period.
@@ -328,10 +360,11 @@
Bug Reporting
-------------
-bugzilla.kernel.org is where the Linux kernel developers track kernel
+https://bugzilla.kernel.org is where the Linux kernel developers track kernel
bugs. Users are encouraged to report all bugs that they find in this
tool. For details on how to use the kernel bugzilla, please see:
- http://bugzilla.kernel.org/page.cgi?id=faq.html
+
+ https://bugzilla.kernel.org/page.cgi?id=faq.html
The file REPORTING-BUGS in the main kernel source directory has a good
template for how to report a possible kernel bug, and details what kind
@@ -349,13 +382,14 @@
bugs is one of the best ways to get merits among other developers, because
not many people like wasting time fixing other people's bugs.
-To work in the already reported bug reports, go to http://bugzilla.kernel.org.
+To work in the already reported bug reports, go to https://bugzilla.kernel.org.
If you want to be advised of the future bug reports, you can subscribe to the
bugme-new mailing list (only new bug reports are mailed here) or to the
bugme-janitor mailing list (every change in the bugzilla is mailed here)
- http://lists.linux-foundation.org/mailman/listinfo/bugme-new
- http://lists.linux-foundation.org/mailman/listinfo/bugme-janitors
+ https://lists.linux-foundation.org/mailman/listinfo/bugme-new
+
+ https://lists.linux-foundation.org/mailman/listinfo/bugme-janitors
@@ -365,10 +399,14 @@
As some of the above documents describe, the majority of the core kernel
developers participate on the Linux Kernel Mailing list. Details on how
to subscribe and unsubscribe from the list can be found at:
+
http://vger.kernel.org/vger-lists.html#linux-kernel
+
There are archives of the mailing list on the web in many different
places. Use a search engine to find these archives. For example:
+
http://dir.gmane.org/gmane.linux.kernel
+
It is highly recommended that you search the archives about the topic
you want to bring up, before you post it to the list. A lot of things
already discussed in detail are only recorded at the mailing list
@@ -381,11 +419,13 @@
Many of the lists are hosted on kernel.org. Information on them can be
found at:
+
http://vger.kernel.org/vger-lists.html
Please remember to follow good behavioral habits when using the lists.
Though a bit cheesy, the following URL has some simple guidelines for
interacting with the list (or any list):
+
http://www.albion.com/netiquette/
If multiple people respond to your mail, the CC: list of recipients may
@@ -400,13 +440,14 @@
writing at the top of the mail.
If you add patches to your mail, make sure they are plain readable text
-as stated in Documentation/SubmittingPatches. Kernel developers don't
-want to deal with attachments or compressed patches; they may want
-to comment on individual lines of your patch, which works only that way.
-Make sure you use a mail program that does not mangle spaces and tab
-characters. A good first test is to send the mail to yourself and try
-to apply your own patch by yourself. If that doesn't work, get your
-mail program fixed or change it until it works.
+as stated in Documentation/SubmittingPatches.
+Kernel developers don't want to deal with
+attachments or compressed patches; they may want to comment on
+individual lines of your patch, which works only that way. Make sure you
+use a mail program that does not mangle spaces and tab characters. A
+good first test is to send the mail to yourself and try to apply your
+own patch by yourself. If that doesn't work, get your mail program fixed
+or change it until it works.
Above all, please remember to show respect to other subscribers.
@@ -418,6 +459,7 @@
there is. When you submit a patch for acceptance, it will be reviewed
on its technical merits and those alone. So, what should you be
expecting?
+
- criticism
- comments
- requests for change
@@ -432,6 +474,7 @@
again, sometimes things get lost in the huge volume.
What should you not do?
+
- expect your patch to be accepted without question
- become defensive
- ignore comments
@@ -445,8 +488,8 @@
toward a solution that is right.
It is normal that the answers to your first patch might simply be a list
-of a dozen things you should correct. This does _not_ imply that your
-patch will not be accepted, and it is _not_ meant against you
+of a dozen things you should correct. This does **not** imply that your
+patch will not be accepted, and it is **not** meant against you
personally. Simply correct all issues raised against your patch and
resend it.
@@ -457,7 +500,9 @@
The kernel community works differently than most traditional corporate
development environments. Here are a list of things that you can try to
do to avoid problems:
+
Good things to say regarding your proposed changes:
+
- "This solves multiple problems."
- "This deletes 2000 lines of code."
- "Here is a patch that explains what I am trying to describe."
@@ -466,6 +511,7 @@
- "This increases performance on typical machines..."
Bad things you should avoid saying:
+
- "We did it this way in AIX/ptx/Solaris, so therefore it must be
good..."
- "I've being doing this for 20 years, so..."
@@ -527,17 +573,18 @@
and simplify (or simply re-order) patches before submitting them.
Here is an analogy from kernel developer Al Viro:
- "Think of a teacher grading homework from a math student. The
+
+ *"Think of a teacher grading homework from a math student. The
teacher does not want to see the student's trials and errors
before they came up with the solution. They want to see the
cleanest, most elegant answer. A good student knows this, and
would never submit her intermediate work before the final
- solution."
+ solution.*
- The same is true of kernel development. The maintainers and
+ *The same is true of kernel development. The maintainers and
reviewers do not want to see the thought process behind the
solution to the problem one is solving. They want to see a
- simple and elegant solution."
+ simple and elegant solution."*
It may be challenging to keep the balance between presenting an elegant
solution and working together with the community and discussing your
@@ -565,6 +612,7 @@
the text in your email. This information will become the ChangeLog
information for the patch, and will be preserved for everyone to see for
all time. It should describe the patch completely, containing:
+
- why the change is necessary
- the overall design approach in the patch
- implementation details
@@ -572,12 +620,11 @@
For more details on what this should all look like, please see the
ChangeLog section of the document:
+
"The Perfect Patch"
http://www.ozlabs.org/~akpm/stuff/tpp.txt
-
-
All of these things are sometimes very hard to do. It can take years to
perfect these practices (if at all). It's a continuous process of
improvement that requires a lot of patience and determination. But
@@ -588,8 +635,9 @@
----------
+
Thanks to Paolo Ciarrocchi who allowed the "Development Process"
-(http://lwn.net/Articles/94386/) section
+(https://lwn.net/Articles/94386/) section
to be based on text he had written, and to Randy Dunlap and Gerrit
Huizenga for some of the list of things you should and should not say.
Also thanks to Pat Mochel, Hanna Linder, Randy Dunlap, Kay Sievers,
diff --git a/Documentation/Makefile b/Documentation/Makefile
index de955e1..c2a4691 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -1,3 +1 @@
-subdir-y := accounting auxdisplay blackfin \
- filesystems filesystems ia64 laptops mic misc-devices \
- networking pcmcia prctl ptp timers vDSO watchdog
+subdir-y :=
diff --git a/Documentation/Makefile.sphinx b/Documentation/Makefile.sphinx
index 857f1e2..92deea3 100644
--- a/Documentation/Makefile.sphinx
+++ b/Documentation/Makefile.sphinx
@@ -5,6 +5,9 @@
# You can set these variables from the command line.
SPHINXBUILD = sphinx-build
SPHINXOPTS =
+SPHINXDIRS = .
+_SPHINXDIRS = $(patsubst $(srctree)/Documentation/%/conf.py,%,$(wildcard $(srctree)/Documentation/*/conf.py))
+SPHINX_CONF = conf.py
PAPER =
BUILDDIR = $(obj)/output
@@ -25,38 +28,62 @@
else # HAVE_SPHINX
-# User-friendly check for rst2pdf
-HAVE_RST2PDF := $(shell if python -c "import rst2pdf" >/dev/null 2>&1; then echo 1; else echo 0; fi)
+# User-friendly check for pdflatex
+HAVE_PDFLATEX := $(shell if which xelatex >/dev/null 2>&1; then echo 1; else echo 0; fi)
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
KERNELDOC = $(srctree)/scripts/kernel-doc
KERNELDOC_CONF = -D kerneldoc_srctree=$(srctree) -D kerneldoc_bin=$(KERNELDOC)
-ALLSPHINXOPTS = -D version=$(KERNELVERSION) -D release=$(KERNELRELEASE) -d $(BUILDDIR)/.doctrees $(KERNELDOC_CONF) $(PAPEROPT_$(PAPER)) -c $(srctree)/$(src) $(SPHINXOPTS) $(srctree)/$(src)
+ALLSPHINXOPTS = $(KERNELDOC_CONF) $(PAPEROPT_$(PAPER)) $(SPHINXOPTS)
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
-quiet_cmd_sphinx = SPHINX $@
- cmd_sphinx = BUILDDIR=$(BUILDDIR) $(SPHINXBUILD) -b $2 $(ALLSPHINXOPTS) $(BUILDDIR)/$2
+# commands; the 'cmd' from scripts/Kbuild.include is not *loopable*
+loop_cmd = $(echo-cmd) $(cmd_$(1))
+
+# $2 sphinx builder e.g. "html"
+# $3 name of the build subfolder / e.g. "media", used as:
+# * dest folder relative to $(BUILDDIR) and
+# * cache folder relative to $(BUILDDIR)/.doctrees
+# $4 dest subfolder e.g. "man" for man pages at media/man
+# $5 reST source folder relative to $(srctree)/$(src),
+# e.g. "media" for the linux-tv book-set at ./Documentation/media
+
+quiet_cmd_sphinx = SPHINX $@ --> file://$(abspath $(BUILDDIR)/$3/$4);
+ cmd_sphinx = $(MAKE) BUILDDIR=$(abspath $(BUILDDIR)) $(build)=Documentation/media all;\
+ BUILDDIR=$(abspath $(BUILDDIR)) SPHINX_CONF=$(abspath $(srctree)/$(src)/$5/$(SPHINX_CONF)) \
+ $(SPHINXBUILD) \
+ -b $2 \
+ -c $(abspath $(srctree)/$(src)) \
+ -d $(abspath $(BUILDDIR)/.doctrees/$3) \
+ -D version=$(KERNELVERSION) -D release=$(KERNELRELEASE) \
+ $(ALLSPHINXOPTS) \
+ $(abspath $(srctree)/$(src)/$5) \
+ $(abspath $(BUILDDIR)/$3/$4);
htmldocs:
- $(MAKE) BUILDDIR=$(BUILDDIR) -f $(srctree)/Documentation/media/Makefile $@
- $(call cmd,sphinx,html)
+ @$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,html,$(var),,$(var)))
-pdfdocs:
-ifeq ($(HAVE_RST2PDF),0)
- $(warning The Python 'rst2pdf' module was not found. Make sure you have the module installed to produce PDF output.)
+latexdocs:
+ifeq ($(HAVE_PDFLATEX),0)
+ $(warning The 'xelatex' command was not found. Make sure you have it installed and in PATH to produce PDF output.)
@echo " SKIP Sphinx $@ target."
-else # HAVE_RST2PDF
- $(call cmd,sphinx,pdf)
-endif # HAVE_RST2PDF
+else # HAVE_PDFLATEX
+ @$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,latex,$(var),latex,$(var)))
+endif # HAVE_PDFLATEX
+
+pdfdocs: latexdocs
+ifneq ($(HAVE_PDFLATEX),0)
+ $(foreach var,$(SPHINXDIRS), $(MAKE) PDFLATEX=xelatex LATEXOPTS="-interaction=nonstopmode" -C $(BUILDDIR)/$(var)/latex)
+endif # HAVE_PDFLATEX
epubdocs:
- $(call cmd,sphinx,epub)
+ @$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,epub,$(var),epub,$(var)))
xmldocs:
- $(call cmd,sphinx,xml)
+ @$(foreach var,$(SPHINXDIRS),$(call loop_cmd,sphinx,xml,$(var),xml,$(var)))
# no-ops for the Sphinx toolchain
sgmldocs:
@@ -72,7 +99,14 @@
dochelp:
@echo ' Linux kernel internal documentation in different formats (Sphinx):'
@echo ' htmldocs - HTML'
+ @echo ' latexdocs - LaTeX'
@echo ' pdfdocs - PDF'
@echo ' epubdocs - EPUB'
@echo ' xmldocs - XML'
@echo ' cleandocs - clean all generated files'
+ @echo
+ @echo ' make SPHINXDIRS="s1 s2" [target] Generate only docs of folder s1, s2'
+ @echo ' valid values for SPHINXDIRS are: $(_SPHINXDIRS)'
+ @echo
+ @echo ' make SPHINX_CONF={conf-file} [target] use *additional* sphinx-build'
+ @echo ' configuration. This is e.g. useful to build with nit-picking config.'
diff --git a/Documentation/ManagementStyle b/Documentation/ManagementStyle
index a211ee8..dea2e66 100644
--- a/Documentation/ManagementStyle
+++ b/Documentation/ManagementStyle
@@ -1,10 +1,12 @@
+.. _managementstyle:
- Linux kernel management style
+Linux kernel management style
+=============================
This is a short document describing the preferred (or made up, depending
on who you ask) management style for the linux kernel. It's meant to
mirror the CodingStyle document to some degree, and mainly written to
-avoid answering (*) the same (or similar) questions over and over again.
+avoid answering [#f1]_ the same (or similar) questions over and over again.
Management style is very personal and much harder to quantify than
simple coding style rules, so this document may or may not have anything
@@ -14,50 +16,52 @@
Btw, when talking about "kernel manager", it's all about the technical
lead persons, not the people who do traditional management inside
companies. If you sign purchase orders or you have any clue about the
-budget of your group, you're almost certainly not a kernel manager.
-These suggestions may or may not apply to you.
+budget of your group, you're almost certainly not a kernel manager.
+These suggestions may or may not apply to you.
First off, I'd suggest buying "Seven Habits of Highly Effective
-People", and NOT read it. Burn it, it's a great symbolic gesture.
+People", and NOT read it. Burn it, it's a great symbolic gesture.
-(*) This document does so not so much by answering the question, but by
-making it painfully obvious to the questioner that we don't have a clue
-to what the answer is.
+.. [#f1] This document does so not so much by answering the question, but by
+ making it painfully obvious to the questioner that we don't have a clue
+ to what the answer is.
Anyway, here goes:
+.. _decisions:
- Chapter 1: Decisions
+1) Decisions
+------------
Everybody thinks managers make decisions, and that decision-making is
important. The bigger and more painful the decision, the bigger the
manager must be to make it. That's very deep and obvious, but it's not
-actually true.
+actually true.
-The name of the game is to _avoid_ having to make a decision. In
+The name of the game is to **avoid** having to make a decision. In
particular, if somebody tells you "choose (a) or (b), we really need you
to decide on this", you're in trouble as a manager. The people you
manage had better know the details better than you, so if they come to
you for a technical decision, you're screwed. You're clearly not
-competent to make that decision for them.
+competent to make that decision for them.
(Corollary:if the people you manage don't know the details better than
-you, you're also screwed, although for a totally different reason.
-Namely that you are in the wrong job, and that _they_ should be managing
-your brilliance instead).
+you, you're also screwed, although for a totally different reason.
+Namely that you are in the wrong job, and that **they** should be managing
+your brilliance instead).
-So the name of the game is to _avoid_ decisions, at least the big and
+So the name of the game is to **avoid** decisions, at least the big and
painful ones. Making small and non-consequential decisions is fine, and
makes you look like you know what you're doing, so what a kernel manager
needs to do is to turn the big and painful ones into small things where
-nobody really cares.
+nobody really cares.
It helps to realize that the key difference between a big decision and a
small one is whether you can fix your decision afterwards. Any decision
can be made small by just always making sure that if you were wrong (and
-you _will_ be wrong), you can always undo the damage later by
+you **will** be wrong), you can always undo the damage later by
backtracking. Suddenly, you get to be doubly managerial for making
-_two_ inconsequential decisions - the wrong one _and_ the right one.
+**two** inconsequential decisions - the wrong one **and** the right one.
And people will even see that as true leadership (*cough* bullshit
*cough*).
@@ -65,10 +69,10 @@
Thus the key to avoiding big decisions becomes to just avoiding to do
things that can't be undone. Don't get ushered into a corner from which
you cannot escape. A cornered rat may be dangerous - a cornered manager
-is just pitiful.
+is just pitiful.
It turns out that since nobody would be stupid enough to ever really let
-a kernel manager have huge fiscal responsibility _anyway_, it's usually
+a kernel manager have huge fiscal responsibility **anyway**, it's usually
fairly easy to backtrack. Since you're not going to be able to waste
huge amounts of money that you might not be able to repay, the only
thing you can backtrack on is a technical decision, and there
@@ -76,113 +80,118 @@
incompetent nincompoop, say you're sorry, and undo all the worthless
work you had people work on for the last year. Suddenly the decision
you made a year ago wasn't a big decision after all, since it could be
-easily undone.
+easily undone.
It turns out that some people have trouble with this approach, for two
reasons:
+
- admitting you were an idiot is harder than it looks. We all like to
maintain appearances, and coming out in public to say that you were
- wrong is sometimes very hard indeed.
+ wrong is sometimes very hard indeed.
- having somebody tell you that what you worked on for the last year
wasn't worthwhile after all can be hard on the poor lowly engineers
- too, and while the actual _work_ was easy enough to undo by just
+ too, and while the actual **work** was easy enough to undo by just
deleting it, you may have irrevocably lost the trust of that
engineer. And remember: "irrevocable" was what we tried to avoid in
the first place, and your decision ended up being a big one after
- all.
+ all.
Happily, both of these reasons can be mitigated effectively by just
admitting up-front that you don't have a friggin' clue, and telling
people ahead of the fact that your decision is purely preliminary, and
might be the wrong thing. You should always reserve the right to change
-your mind, and make people very _aware_ of that. And it's much easier
-to admit that you are stupid when you haven't _yet_ done the really
+your mind, and make people very **aware** of that. And it's much easier
+to admit that you are stupid when you haven't **yet** done the really
stupid thing.
Then, when it really does turn out to be stupid, people just roll their
-eyes and say "Oops, he did it again".
+eyes and say "Oops, he did it again".
This preemptive admission of incompetence might also make the people who
actually do the work also think twice about whether it's worth doing or
-not. After all, if _they_ aren't certain whether it's a good idea, you
+not. After all, if **they** aren't certain whether it's a good idea, you
sure as hell shouldn't encourage them by promising them that what they
work on will be included. Make them at least think twice before they
-embark on a big endeavor.
+embark on a big endeavor.
Remember: they'd better know more about the details than you do, and
they usually already think they have the answer to everything. The best
thing you can do as a manager is not to instill confidence, but rather a
-healthy dose of critical thinking on what they do.
+healthy dose of critical thinking on what they do.
Btw, another way to avoid a decision is to plaintively just whine "can't
we just do both?" and look pitiful. Trust me, it works. If it's not
clear which approach is better, they'll eventually figure it out. The
answer may end up being that both teams get so frustrated by the
-situation that they just give up.
+situation that they just give up.
That may sound like a failure, but it's usually a sign that there was
something wrong with both projects, and the reason the people involved
couldn't decide was that they were both wrong. You end up coming up
smelling like roses, and you avoided yet another decision that you could
-have screwed up on.
+have screwed up on.
- Chapter 2: People
+2) People
+---------
Most people are idiots, and being a manager means you'll have to deal
-with it, and perhaps more importantly, that _they_ have to deal with
-_you_.
+with it, and perhaps more importantly, that **they** have to deal with
+**you**.
It turns out that while it's easy to undo technical mistakes, it's not
as easy to undo personality disorders. You just have to live with
-theirs - and yours.
+theirs - and yours.
However, in order to prepare yourself as a kernel manager, it's best to
remember not to burn any bridges, bomb any innocent villagers, or
alienate too many kernel developers. It turns out that alienating people
is fairly easy, and un-alienating them is hard. Thus "alienating"
immediately falls under the heading of "not reversible", and becomes a
-no-no according to Chapter 1.
+no-no according to :ref:`decisions`.
There's just a few simple rules here:
+
(1) don't call people d*ckheads (at least not in public)
(2) learn how to apologize when you forgot rule (1)
The problem with #1 is that it's very easy to do, since you can say
-"you're a d*ckhead" in millions of different ways (*), sometimes without
+"you're a d*ckhead" in millions of different ways [#f2]_, sometimes without
even realizing it, and almost always with a white-hot conviction that
-you are right.
+you are right.
And the more convinced you are that you are right (and let's face it,
-you can call just about _anybody_ a d*ckhead, and you often _will_ be
-right), the harder it ends up being to apologize afterwards.
+you can call just about **anybody** a d*ckhead, and you often **will** be
+right), the harder it ends up being to apologize afterwards.
To solve this problem, you really only have two options:
+
- get really good at apologies
- spread the "love" out so evenly that nobody really ends up feeling
like they get unfairly targeted. Make it inventive enough, and they
- might even be amused.
+ might even be amused.
The option of being unfailingly polite really doesn't exist. Nobody will
trust somebody who is so clearly hiding his true character.
-(*) Paul Simon sang "Fifty Ways to Leave Your Lover", because quite
-frankly, "A Million Ways to Tell a Developer He Is a D*ckhead" doesn't
-scan nearly as well. But I'm sure he thought about it.
+.. [#f2] Paul Simon sang "Fifty Ways to Leave Your Lover", because quite
+ frankly, "A Million Ways to Tell a Developer He Is a D*ckhead" doesn't
+ scan nearly as well. But I'm sure he thought about it.
- Chapter 3: People II - the Good Kind
+3) People II - the Good Kind
+----------------------------
While it turns out that most people are idiots, the corollary to that is
sadly that you are one too, and that while we can all bask in the secure
knowledge that we're better than the average person (let's face it,
nobody ever believes that they're average or below-average), we should
also admit that we're not the sharpest knife around, and there will be
-other people that are less of an idiot than you are.
+other people that are less of an idiot than you are.
-Some people react badly to smart people. Others take advantage of them.
+Some people react badly to smart people. Others take advantage of them.
-Make sure that you, as a kernel maintainer, are in the second group.
+Make sure that you, as a kernel maintainer, are in the second group.
Suck up to them, because they are the people who will make your job
easier. In particular, they'll be able to make your decisions for you,
which is what the game is all about.
@@ -191,7 +200,7 @@
management responsibilities largely become ones of saying "Sounds like a
good idea - go wild", or "That sounds good, but what about xxx?". The
second version in particular is a great way to either learn something
-new about "xxx" or seem _extra_ managerial by pointing out something the
+new about "xxx" or seem **extra** managerial by pointing out something the
smarter person hadn't thought about. In either case, you win.
One thing to look out for is to realize that greatness in one area does
@@ -199,47 +208,49 @@
specific directions, but let's face it, they might be good at what they
do, and suck at everything else. The good news is that people tend to
naturally gravitate back to what they are good at, so it's not like you
-are doing something irreversible when you _do_ prod them in some
+are doing something irreversible when you **do** prod them in some
direction, just don't push too hard.
- Chapter 4: Placing blame
+4) Placing blame
+----------------
Things will go wrong, and people want somebody to blame. Tag, you're it.
It's not actually that hard to accept the blame, especially if people
-kind of realize that it wasn't _all_ your fault. Which brings us to the
+kind of realize that it wasn't **all** your fault. Which brings us to the
best way of taking the blame: do it for another guy. You'll feel good
for taking the fall, he'll feel good about not getting blamed, and the
guy who lost his whole 36GB porn-collection because of your incompetence
will grudgingly admit that you at least didn't try to weasel out of it.
Then make the developer who really screwed up (if you can find him) know
-_in_private_ that he screwed up. Not just so he can avoid it in the
+**in_private** that he screwed up. Not just so he can avoid it in the
future, but so that he knows he owes you one. And, perhaps even more
importantly, he's also likely the person who can fix it. Because, let's
-face it, it sure ain't you.
+face it, it sure ain't you.
-Taking the blame is also why you get to be manager in the first place.
+Taking the blame is also why you get to be manager in the first place.
It's part of what makes people trust you, and allow you the potential
glory, because you're the one who gets to say "I screwed up". And if
you've followed the previous rules, you'll be pretty good at saying that
-by now.
+by now.
- Chapter 5: Things to avoid
+5) Things to avoid
+------------------
There's one thing people hate even more than being called "d*ckhead",
and that is being called a "d*ckhead" in a sanctimonious voice. The
first you can apologize for, the second one you won't really get the
chance. They likely will no longer be listening even if you otherwise
-do a good job.
+do a good job.
We all think we're better than anybody else, which means that when
-somebody else puts on airs, it _really_ rubs us the wrong way. You may
+somebody else puts on airs, it **really** rubs us the wrong way. You may
be morally and intellectually superior to everybody around you, but
-don't try to make it too obvious unless you really _intend_ to irritate
-somebody (*).
+don't try to make it too obvious unless you really **intend** to irritate
+somebody [#f3]_.
Similarly, don't be too polite or subtle about things. Politeness easily
ends up going overboard and hiding the problem, and as they say, "On the
@@ -251,15 +262,16 @@
overboard to the point of being ridiculous can drive a point home
without making it painful to the recipient, who just thinks you're being
silly. It can thus help get through the personal mental block we all
-have about criticism.
+have about criticism.
-(*) Hint: internet newsgroups that are not directly related to your work
-are great ways to take out your frustrations at other people. Write
-insulting posts with a sneer just to get into a good flame every once in
-a while, and you'll feel cleansed. Just don't crap too close to home.
+.. [#f3] Hint: internet newsgroups that are not directly related to your work
+ are great ways to take out your frustrations at other people. Write
+ insulting posts with a sneer just to get into a good flame every once in
+ a while, and you'll feel cleansed. Just don't crap too close to home.
- Chapter 6: Why me?
+6) Why me?
+----------
Since your main responsibility seems to be to take the blame for other
peoples mistakes, and make it painfully obvious to everybody else that
@@ -268,9 +280,9 @@
First off, while you may or may not get screaming teenage girls (or
boys, let's not be judgmental or sexist here) knocking on your dressing
-room door, you _will_ get an immense feeling of personal accomplishment
+room door, you **will** get an immense feeling of personal accomplishment
for being "in charge". Never mind the fact that you're really leading
by trying to keep up with everybody else and running after them as fast
-as you can. Everybody will still think you're the person in charge.
+as you can. Everybody will still think you're the person in charge.
It's a great job if you can hack it.
diff --git a/Documentation/PCI/pcieaer-howto.txt b/Documentation/PCI/pcieaer-howto.txt
index b4987c0..ea8cafb 100644
--- a/Documentation/PCI/pcieaer-howto.txt
+++ b/Documentation/PCI/pcieaer-howto.txt
@@ -49,25 +49,17 @@
CONFIG_PCIEAER = y.
2.2 Load PCI Express AER Root Driver
-There is a case where a system has AER support in BIOS. Enabling the AER
-Root driver and having AER support in BIOS may result unpredictable
-behavior. To avoid this conflict, a successful load of the AER Root driver
-requires ACPI _OSC support in the BIOS to allow the AER Root driver to
-request for native control of AER. See the PCI FW 3.0 Specification for
-details regarding OSC usage. Currently, lots of firmwares don't provide
-_OSC support while they use PCI Express. To support such firmwares,
-forceload, a parameter of type bool, could enable AER to continue to
-be initiated although firmwares have no _OSC support. To enable the
-walkaround, pls. add aerdriver.forceload=y to kernel boot parameter line
-when booting kernel. Note that forceload=n by default.
-nosourceid, another parameter of type bool, can be used when broken
-hardware (mostly chipsets) has root ports that cannot obtain the reporting
-source ID. nosourceid=n by default.
+Some systems have AER support in firmware. Enabling Linux AER support at
+the same time the firmware handles AER may result in unpredictable
+behavior. Therefore, Linux does not handle AER events unless the firmware
+grants AER control to the OS via the ACPI _OSC method. See the PCI FW 3.0
+Specification for details regarding _OSC usage.
2.3 AER error output
-When a PCI-E AER error is captured, an error message will be outputted to
-console. If it's a correctable error, it is outputted as a warning.
+
+When a PCIe AER error is captured, an error message will be output to the
+console. If it's a correctable error, it is output as a warning.
Otherwise, it is printed as an error. So users could choose different
log level to filter out correctable error messages.
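
The later sections of this howto cover how endpoint drivers take part in AER
recovery. As a rough sketch only (a hypothetical "mydrv" driver; the calls
themselves come from <linux/pci.h> and <linux/aer.h>), a driver opting in to
AER reporting and supplying error handlers could look like:

#include <linux/pci.h>
#include <linux/aer.h>

/* Hypothetical driver; only the AER-related pieces are sketched here. */
static pci_ers_result_t mydrv_error_detected(struct pci_dev *pdev,
                                             enum pci_channel_state state)
{
        /* Stop I/O on the device; request a reset unless the link is gone. */
        return state == pci_channel_io_perm_failure ?
                PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
}

static pci_ers_result_t mydrv_slot_reset(struct pci_dev *pdev)
{
        /* Re-initialize the device here before normal operation resumes. */
        return PCI_ERS_RESULT_RECOVERED;
}

static const struct pci_error_handlers mydrv_err_handler = {
        .error_detected = mydrv_error_detected,
        .slot_reset     = mydrv_slot_reset,
};

static int mydrv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        /* Ask the AER core to enable error reporting for this device. */
        pci_enable_pcie_error_reporting(pdev);
        return 0;
}

static struct pci_driver mydrv_pci_driver = {
        .name        = "mydrv",
        .probe       = mydrv_probe,
        .err_handler = &mydrv_err_handler,
        /* .id_table, .remove, etc. omitted for brevity */
};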
diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html
index ece410f..a4d3838 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.html
+++ b/Documentation/RCU/Design/Requirements/Requirements.html
@@ -2493,6 +2493,28 @@
variant of <tt>call_rcu()</tt> that might one day be created for
energy-efficiency purposes.
+<p>
+That said, there are limits.
+RCU requires that the <tt>rcu_head</tt> structure be aligned to a
+two-byte boundary, and passing a misaligned <tt>rcu_head</tt>
+structure to one of the <tt>call_rcu()</tt> family of functions
+will result in a splat.
+It is therefore necessary to exercise caution when packing
+structures containing fields of type <tt>rcu_head</tt>.
+Why not a four-byte or even eight-byte alignment requirement?
+Because the m68k architecture provides only two-byte alignment,
+and thus acts as alignment's least common denominator.
+
+<p>
+The reason for reserving the bottom bit of pointers to
+<tt>rcu_head</tt> structures is to leave the door open to
+“lazy” callbacks whose invocations can safely be deferred.
+Deferring invocation could potentially have energy-efficiency
+benefits, but only if the rate of non-lazy callbacks decreases
+significantly for some important workload.
+In the meantime, reserving the bottom bit keeps this option open
+in case it one day becomes useful.
+
<h3><a name="Performance, Scalability, Response Time, and Reliability">
Performance, Scalability, Response Time, and Reliability</a></h3>
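
The two-byte alignment requirement described above is easiest to see in a
short sketch. Assuming a hypothetical "struct foo" (names not taken from the
patch), an rcu_head is normally embedded and handed to call_rcu() as below;
packing the structure so the rcu_head ends up misaligned is what produces
the splat:

#include <linux/rcupdate.h>
#include <linux/slab.h>

/*
 * Hypothetical structure; the rcu_head member must stay at least two-byte
 * aligned, so avoid marking such structures __packed.
 */
struct foo {
        int data;
        struct rcu_head rcu;
};

static void foo_free_rcu(struct rcu_head *head)
{
        /* Recover the enclosing structure from its rcu_head member. */
        struct foo *fp = container_of(head, struct foo, rcu);

        kfree(fp);
}

static void foo_release(struct foo *fp)
{
        /* foo_free_rcu() is invoked only after a grace period has elapsed. */
        call_rcu(&fp->rcu, foo_free_rcu);
}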
diff --git a/Documentation/RCU/lockdep-splat.txt b/Documentation/RCU/lockdep-splat.txt
index bf90611..238e9f6 100644
--- a/Documentation/RCU/lockdep-splat.txt
+++ b/Documentation/RCU/lockdep-splat.txt
@@ -57,7 +57,7 @@
[<ffffffff817db154>] kernel_thread_helper+0x4/0x10
[<ffffffff81066430>] ? finish_task_switch+0x80/0x110
[<ffffffff817d9c04>] ? retint_restore_args+0xe/0xe
- [<ffffffff81097510>] ? __init_kthread_worker+0x70/0x70
+ [<ffffffff81097510>] ? __kthread_init_worker+0x70/0x70
[<ffffffff817db150>] ? gs_change+0xb/0xb
Line 2776 of block/cfq-iosched.c in v3.0-rc5 is as follows:
diff --git a/Documentation/RCU/torture.txt b/Documentation/RCU/torture.txt
index 118e7c1..278f6a9 100644
--- a/Documentation/RCU/torture.txt
+++ b/Documentation/RCU/torture.txt
@@ -10,21 +10,6 @@
command (perhaps grepping for "torture"). The test is started
when the module is loaded, and stops when the module is unloaded.
-CONFIG_RCU_TORTURE_TEST_RUNNABLE
-
-It is also possible to specify CONFIG_RCU_TORTURE_TEST=y, which will
-result in the tests being loaded into the base kernel. In this case,
-the CONFIG_RCU_TORTURE_TEST_RUNNABLE config option is used to specify
-whether the RCU torture tests are to be started immediately during
-boot or whether the /proc/sys/kernel/rcutorture_runnable file is used
-to enable them. This /proc file can be used to repeatedly pause and
-restart the tests, regardless of the initial state specified by the
-CONFIG_RCU_TORTURE_TEST_RUNNABLE config option.
-
-You will normally -not- want to start the RCU torture tests during boot
-(and thus the default is CONFIG_RCU_TORTURE_TEST_RUNNABLE=n), but doing
-this can sometimes be useful in finding boot-time bugs.
-
MODULE PARAMETERS
diff --git a/Documentation/SecurityBugs b/Documentation/SecurityBugs
index a660d49..342d769 100644
--- a/Documentation/SecurityBugs
+++ b/Documentation/SecurityBugs
@@ -1,9 +1,15 @@
+.. _securitybugs:
+
+Security bugs
+=============
+
Linux kernel developers take security very seriously. As such, we'd
like to know when a security bug is found so that it can be fixed and
disclosed as quickly as possible. Please report security bugs to the
Linux kernel security team.
1) Contact
+----------
The Linux kernel security team can be contacted by email at
<security@kernel.org>. This is a private list of security officers
@@ -18,6 +24,7 @@
consent from the reporter unless it has already been made public.
2) Disclosure
+-------------
The goal of the Linux kernel security team is to work with the
bug submitter to bug resolution as well as disclosure. We prefer
@@ -33,6 +40,7 @@
disclosure date to be on the order of 7 days.
3) Non-disclosure agreements
+----------------------------
The Linux kernel security team is not a formal body and therefore unable
to enter any non-disclosure agreements.
diff --git a/Documentation/SubmitChecklist b/Documentation/SubmitChecklist
index 2b7e32d..894289b 100644
--- a/Documentation/SubmitChecklist
+++ b/Documentation/SubmitChecklist
@@ -1,109 +1,120 @@
+.. _submitchecklist:
+
Linux Kernel patch submission checklist
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Here are some basic things that developers should do if they want to see their
kernel patch submissions accepted more quickly.
These are all above and beyond the documentation that is provided in
-Documentation/SubmittingPatches and elsewhere regarding submitting Linux
-kernel patches.
+:ref:`Documentation/SubmittingPatches <submittingpatches>`
+and elsewhere regarding submitting Linux kernel patches.
-1: If you use a facility then #include the file that defines/declares
+1) If you use a facility then #include the file that defines/declares
that facility. Don't depend on other header files pulling in ones
that you use.
-2: Builds cleanly with applicable or modified CONFIG options =y, =m, and
- =n. No gcc warnings/errors, no linker warnings/errors.
+2) Builds cleanly:
-2b: Passes allnoconfig, allmodconfig
+ a) with applicable or modified ``CONFIG`` options ``=y``, ``=m``, and
+ ``=n``. No ``gcc`` warnings/errors, no linker warnings/errors.
-2c: Builds successfully when using O=builddir
+ b) Passes ``allnoconfig``, ``allmodconfig``
-3: Builds on multiple CPU architectures by using local cross-compile tools
+ c) Builds successfully when using ``O=builddir``
+
+3) Builds on multiple CPU architectures by using local cross-compile tools
or some other build farm.
-4: ppc64 is a good architecture for cross-compilation checking because it
- tends to use `unsigned long' for 64-bit quantities.
+4) ppc64 is a good architecture for cross-compilation checking because it
+ tends to use ``unsigned long`` for 64-bit quantities.
-5: Check your patch for general style as detailed in
- Documentation/CodingStyle. Check for trivial violations with the
- patch style checker prior to submission (scripts/checkpatch.pl).
+5) Check your patch for general style as detailed in
+ :ref:`Documentation/CodingStyle <codingstyle>`.
+ Check for trivial violations with the patch style checker prior to
+ submission (``scripts/checkpatch.pl``).
You should be able to justify all violations that remain in
your patch.
-6: Any new or modified CONFIG options don't muck up the config menu.
+6) Any new or modified ``CONFIG`` options don't muck up the config menu.
-7: All new Kconfig options have help text.
+7) All new ``Kconfig`` options have help text.
-8: Has been carefully reviewed with respect to relevant Kconfig
+8) Has been carefully reviewed with respect to relevant ``Kconfig``
combinations. This is very hard to get right with testing -- brainpower
pays off here.
-9: Check cleanly with sparse.
+9) Check cleanly with sparse.
-10: Use 'make checkstack' and 'make namespacecheck' and fix any problems
- that they find. Note: checkstack does not point out problems explicitly,
- but any one function that uses more than 512 bytes on the stack is a
- candidate for change.
+10) Use ``make checkstack`` and ``make namespacecheck`` and fix any problems
+ that they find.
-11: Include kernel-doc to document global kernel APIs. (Not required for
- static functions, but OK there also.) Use 'make htmldocs' or 'make
- mandocs' to check the kernel-doc and fix any issues.
+ .. note::
-12: Has been tested with CONFIG_PREEMPT, CONFIG_DEBUG_PREEMPT,
- CONFIG_DEBUG_SLAB, CONFIG_DEBUG_PAGEALLOC, CONFIG_DEBUG_MUTEXES,
- CONFIG_DEBUG_SPINLOCK, CONFIG_DEBUG_ATOMIC_SLEEP, CONFIG_PROVE_RCU
- and CONFIG_DEBUG_OBJECTS_RCU_HEAD all simultaneously enabled.
+ ``checkstack`` does not point out problems explicitly,
+ but any one function that uses more than 512 bytes on the stack is a
+ candidate for change.
-13: Has been build- and runtime tested with and without CONFIG_SMP and
- CONFIG_PREEMPT.
+11) Include :ref:`kernel-doc <kernel_doc>` to document global kernel APIs.
+ (Not required for static functions, but OK there also.) Use
+ ``make htmldocs`` or ``make pdfdocs`` to check the
+ :ref:`kernel-doc <kernel_doc>` and fix any issues.
-14: If the patch affects IO/Disk, etc: has been tested with and without
- CONFIG_LBDAF.
+12) Has been tested with ``CONFIG_PREEMPT``, ``CONFIG_DEBUG_PREEMPT``,
+ ``CONFIG_DEBUG_SLAB``, ``CONFIG_DEBUG_PAGEALLOC``, ``CONFIG_DEBUG_MUTEXES``,
+ ``CONFIG_DEBUG_SPINLOCK``, ``CONFIG_DEBUG_ATOMIC_SLEEP``,
+ ``CONFIG_PROVE_RCU`` and ``CONFIG_DEBUG_OBJECTS_RCU_HEAD`` all
+ simultaneously enabled.
-15: All codepaths have been exercised with all lockdep features enabled.
+13) Has been build- and runtime tested with and without ``CONFIG_SMP`` and
+    ``CONFIG_PREEMPT``.
-16: All new /proc entries are documented under Documentation/
+14) If the patch affects IO/Disk, etc: has been tested with and without
+    ``CONFIG_LBDAF``.
-17: All new kernel boot parameters are documented in
- Documentation/kernel-parameters.txt.
+15) All codepaths have been exercised with all lockdep features enabled.
-18: All new module parameters are documented with MODULE_PARM_DESC()
+16) All new ``/proc`` entries are documented under ``Documentation/``
-19: All new userspace interfaces are documented in Documentation/ABI/.
- See Documentation/ABI/README for more information.
+17) All new kernel boot parameters are documented in
+ ``Documentation/kernel-parameters.txt``.
+
+18) All new module parameters are documented with ``MODULE_PARM_DESC()``
+
+19) All new userspace interfaces are documented in ``Documentation/ABI/``.
+ See ``Documentation/ABI/README`` for more information.
Patches that change userspace interfaces should be CCed to
linux-api@vger.kernel.org.
-20: Check that it all passes `make headers_check'.
+20) Check that it all passes ``make headers_check``.
-21: Has been checked with injection of at least slab and page-allocation
- failures. See Documentation/fault-injection/.
+21) Has been checked with injection of at least slab and page-allocation
+ failures. See ``Documentation/fault-injection/``.
If the new code is substantial, addition of subsystem-specific fault
injection might be appropriate.
-22: Newly-added code has been compiled with `gcc -W' (use "make
- EXTRA_CFLAGS=-W"). This will generate lots of noise, but is good for
- finding bugs like "warning: comparison between signed and unsigned".
+22) Newly-added code has been compiled with ``gcc -W`` (use
+ ``make EXTRA_CFLAGS=-W``). This will generate lots of noise, but is good
+ for finding bugs like "warning: comparison between signed and unsigned".
-23: Tested after it has been merged into the -mm patchset to make sure
+23) Tested after it has been merged into the -mm patchset to make sure
that it still works with all of the other queued patches and various
changes in the VM, VFS, and other subsystems.
-24: All memory barriers {e.g., barrier(), rmb(), wmb()} need a comment in the
- source code that explains the logic of what they are doing and why.
+24) All memory barriers {e.g., ``barrier()``, ``rmb()``, ``wmb()``} need a
+ comment in the source code that explains the logic of what they are doing
+ and why.
-25: If any ioctl's are added by the patch, then also update
- Documentation/ioctl/ioctl-number.txt.
+25) If any ioctl's are added by the patch, then also update
+ ``Documentation/ioctl/ioctl-number.txt``.
-26: If your modified source code depends on or uses any of the kernel
- APIs or features that are related to the following kconfig symbols,
- then test multiple builds with the related kconfig symbols disabled
- and/or =m (if that option is available) [not all of these at the
+26) If your modified source code depends on or uses any of the kernel
+ APIs or features that are related to the following ``Kconfig`` symbols,
+ then test multiple builds with the related ``Kconfig`` symbols disabled
+ and/or ``=m`` (if that option is available) [not all of these at the
same time, just various/random combinations of them]:
- CONFIG_SMP, CONFIG_SYSFS, CONFIG_PROC_FS, CONFIG_INPUT, CONFIG_PCI,
- CONFIG_BLOCK, CONFIG_PM, CONFIG_MAGIC_SYSRQ,
- CONFIG_NET, CONFIG_INET=n (but latter with CONFIG_NET=y)
+    ``CONFIG_SMP``, ``CONFIG_SYSFS``, ``CONFIG_PROC_FS``, ``CONFIG_INPUT``,
+    ``CONFIG_PCI``, ``CONFIG_BLOCK``, ``CONFIG_PM``, ``CONFIG_MAGIC_SYSRQ``,
+ ``CONFIG_NET``, ``CONFIG_INET=n`` (but latter with ``CONFIG_NET=y``).
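
Checklist item 24 asks that every memory barrier carry a comment explaining
what it orders and why. As a rough illustration only (hypothetical
producer/consumer flags, not code from this patch), such a commented
smp_wmb()/smp_rmb() pairing might look like:

#include <linux/compiler.h>
#include <asm/barrier.h>

static int shared_data;
static int data_ready;

static void producer(void)
{
        WRITE_ONCE(shared_data, 42);
        /*
         * Make shared_data visible before data_ready is set;
         * pairs with the smp_rmb() in consumer().
         */
        smp_wmb();
        WRITE_ONCE(data_ready, 1);
}

static int consumer(void)
{
        if (!READ_ONCE(data_ready))
                return -1;
        /* Pairs with the smp_wmb() in producer(). */
        smp_rmb();
        return READ_ONCE(shared_data);
}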
diff --git a/Documentation/SubmittingDrivers b/Documentation/SubmittingDrivers
index 31d3726..252b77a 100644
--- a/Documentation/SubmittingDrivers
+++ b/Documentation/SubmittingDrivers
@@ -1,5 +1,7 @@
+.. _submittingdrivers:
+
Submitting Drivers For The Linux Kernel
----------------------------------------
+=======================================
This document is intended to explain how to submit device drivers to the
various kernel trees. Note that if you are interested in video card drivers
@@ -38,42 +40,48 @@
maintainer does not respond or you cannot find the appropriate
maintainer then please contact Willy Tarreau <w@1wt.eu>.
-Linux 2.6:
+Linux 2.6 and later:
The same rules apply as 2.4 except that you should follow linux-kernel
- to track changes in API's. The final contact point for Linux 2.6
+ to track changes in API's. The final contact point for Linux 2.6+
submissions is Andrew Morton.
What Criteria Determine Acceptance
----------------------------------
-Licensing: The code must be released to us under the
+Licensing:
+ The code must be released to us under the
GNU General Public License. We don't insist on any kind
of exclusive GPL licensing, and if you wish the driver
to be useful to other communities such as BSD you may well
wish to release under multiple licenses.
See accepted licenses at include/linux/module.h
-Copyright: The copyright owner must agree to use of GPL.
+Copyright:
+ The copyright owner must agree to use of GPL.
It's best if the submitter and copyright owner
are the same person/entity. If not, the name of
the person/entity authorizing use of GPL should be
listed in case it's necessary to verify the will of
the copyright owner.
-Interfaces: If your driver uses existing interfaces and behaves like
+Interfaces:
+ If your driver uses existing interfaces and behaves like
other drivers in the same class it will be much more likely
to be accepted than if it invents gratuitous new ones.
If you need to implement a common API over Linux and NT
drivers do it in userspace.
-Code: Please use the Linux style of code formatting as documented
- in Documentation/CodingStyle. If you have sections of code
+Code:
+ Please use the Linux style of code formatting as documented
+	in :ref:`Documentation/CodingStyle <codingstyle>`.
+ If you have sections of code
that need to be in other formats, for example because they
are shared with a windows driver kit and you want to
maintain them just once separate them out nicely and note
this fact.
-Portability: Pointers are not always 32bits, not all computers are little
+Portability:
+ Pointers are not always 32bits, not all computers are little
endian, people do not all have floating point and you
shouldn't use inline x86 assembler in your driver without
careful thought. Pure x86 drivers generally are not popular.
@@ -81,12 +89,14 @@
but it is easy to make sure the code can easily be made
portable.
-Clarity: It helps if anyone can see how to fix the driver. It helps
+Clarity:
+ It helps if anyone can see how to fix the driver. It helps
you because you get patches not bug reports. If you submit a
driver that intentionally obfuscates how the hardware works
it will go in the bitbucket.
-PM support: Since Linux is used on many portable and desktop systems, your
+PM support:
+ Since Linux is used on many portable and desktop systems, your
driver is likely to be used on such a system and therefore it
should support basic power management by implementing, if
necessary, the .suspend and .resume methods used during the
@@ -101,7 +111,8 @@
complete overview of the power management issues related to
drivers see Documentation/power/devices.txt .
-Control: In general if there is active maintenance of a driver by
+Control:
+ In general if there is active maintenance of a driver by
the author then patches will be redirected to them unless
they are totally obvious and without need of checking.
If you want to be the contact and update point for the
@@ -111,13 +122,15 @@
What Criteria Do Not Determine Acceptance
-----------------------------------------
-Vendor: Being the hardware vendor and maintaining the driver is
+Vendor:
+ Being the hardware vendor and maintaining the driver is
often a good thing. If there is a stable working driver from
other people already in the tree don't expect 'we are the
vendor' to get your driver chosen. Ideally work with the
existing driver author to build a single perfect driver.
-Author: It doesn't matter if a large Linux company wrote the driver,
+Author:
+ It doesn't matter if a large Linux company wrote the driver,
or you did. Nobody has any special access to the kernel
tree. Anyone who tells you otherwise isn't telling the
whole story.
@@ -127,8 +140,10 @@
---------
Linux kernel master tree:
- ftp.??.kernel.org:/pub/linux/kernel/...
- ?? == your country code, such as "us", "uk", "fr", etc.
+ ftp.\ *country_code*\ .kernel.org:/pub/linux/kernel/...
+
+ where *country_code* == your country code, such as
+ **us**, **uk**, **fr**, etc.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git
@@ -141,14 +156,19 @@
LWN.net:
Weekly summary of kernel development activity - http://lwn.net/
+
2.6 API changes:
+
http://lwn.net/Articles/2.6-kernel-api/
+
Porting drivers from prior kernels to 2.6:
+
http://lwn.net/Articles/driver-porting/
KernelNewbies:
Documentation and assistance for new kernel programmers
- http://kernelnewbies.org/
+
+ http://kernelnewbies.org/
Linux USB project:
http://www.linux-usb.org/
diff --git a/Documentation/SubmittingPatches b/Documentation/SubmittingPatches
index 8c79f1d..36f1ded 100644
--- a/Documentation/SubmittingPatches
+++ b/Documentation/SubmittingPatches
@@ -1,9 +1,7 @@
+.. _submittingpatches:
- How to Get Your Change Into the Linux Kernel
- or
- Care And Operation Of Your Linus Torvalds
-
-
+How to Get Your Change Into the Linux Kernel or Care And Operation Of Your Linus Torvalds
+=========================================================================================
For a person or company who wishes to submit a change to the Linux
kernel, the process can sometimes be daunting if you're not familiar
@@ -12,57 +10,59 @@
This document contains a large number of suggestions in a relatively terse
format. For detailed information on how the kernel development process
-works, see Documentation/development-process. Also, read
-Documentation/SubmitChecklist for a list of items to check before
+works, see :ref:`Documentation/development-process <development_process_main>`.
+Also, read :ref:`Documentation/SubmitChecklist <submitchecklist>`
+for a list of items to check before
submitting code. If you are submitting a driver, also read
-Documentation/SubmittingDrivers; for device tree binding patches, read
+:ref:`Documentation/SubmittingDrivers <submittingdrivers>`;
+for device tree binding patches, read
Documentation/devicetree/bindings/submitting-patches.txt.
-Many of these steps describe the default behavior of the git version
-control system; if you use git to prepare your patches, you'll find much
+Many of these steps describe the default behavior of the ``git`` version
+control system; if you use ``git`` to prepare your patches, you'll find much
of the mechanical work done for you, though you'll still need to prepare
-and document a sensible set of patches. In general, use of git will make
+and document a sensible set of patches. In general, use of ``git`` will make
your life as a kernel developer easier.
---------------------------------------------
-SECTION 1 - CREATING AND SENDING YOUR CHANGE
---------------------------------------------
+Creating and Sending your Change
+********************************
0) Obtain a current source tree
-------------------------------
If you do not have a repository with the current kernel source handy, use
-git to obtain one. You'll want to start with the mainline repository,
-which can be grabbed with:
+``git`` to obtain one. You'll want to start with the mainline repository,
+which can be grabbed with::
- git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Note, however, that you may not want to develop against the mainline tree
directly. Most subsystem maintainers run their own trees and want to see
-patches prepared against those trees. See the "T:" entry for the subsystem
+patches prepared against those trees. See the **T:** entry for the subsystem
in the MAINTAINERS file to find that tree, or simply ask the maintainer if
the tree is not listed there.
It is still possible to download kernel releases via tarballs (as described
in the next section), but that is the hard way to do kernel development.
-1) "diff -up"
-------------
+1) ``diff -up``
+---------------
-If you must generate your patches by hand, use "diff -up" or "diff -uprN"
+If you must generate your patches by hand, use ``diff -up`` or ``diff -uprN``
to create patches. Git generates patches in this form by default; if
-you're using git, you can skip this section entirely.
+you're using ``git``, you can skip this section entirely.
All changes to the Linux kernel occur in the form of patches, as
-generated by diff(1). When creating your patch, make sure to create it
-in "unified diff" format, as supplied by the '-u' argument to diff(1).
-Also, please use the '-p' argument which shows which C function each
-change is in - that makes the resultant diff a lot easier to read.
+generated by :manpage:`diff(1)`. When creating your patch, make sure to
+create it in "unified diff" format, as supplied by the ``-u`` argument
+to :manpage:`diff(1)`.
+Also, please use the ``-p`` argument which shows which C function each
+change is in - that makes the resultant ``diff`` a lot easier to read.
Patches should be based in the root kernel source directory,
not in any lower subdirectory.
-To create a patch for a single file, it is often sufficient to do:
+To create a patch for a single file, it is often sufficient to do::
SRCTREE= linux
MYFILE= drivers/net/mydriver.c
@@ -74,8 +74,8 @@
diff -up $SRCTREE/$MYFILE{.orig,} > /tmp/patch
To create a patch for multiple files, you should unpack a "vanilla",
-or unmodified kernel source tree, and generate a diff against your
-own source tree. For example:
+or unmodified kernel source tree, and generate a ``diff`` against your
+own source tree. For example::
MYSRC= /devel/linux
@@ -84,27 +84,27 @@
diff -uprN -X linux-3.19-vanilla/Documentation/dontdiff \
linux-3.19-vanilla $MYSRC > /tmp/patch
-"dontdiff" is a list of files which are generated by the kernel during
-the build process, and should be ignored in any diff(1)-generated
+``dontdiff`` is a list of files which are generated by the kernel during
+the build process, and should be ignored in any :manpage:`diff(1)`-generated
patch.
Make sure your patch does not include any extra files which do not
belong in a patch submission. Make sure to review your patch -after-
-generating it with diff(1), to ensure accuracy.
+generating it with :manpage:`diff(1)`, to ensure accuracy.
If your changes produce a lot of deltas, you need to split them into
-individual patches which modify things in logical stages; see section
-#3. This will facilitate review by other kernel developers,
+individual patches which modify things in logical stages; see
+:ref:`split_changes`. This will facilitate review by other kernel developers,
very important if you want your patch accepted.
-If you're using git, "git rebase -i" can help you with this process. If
-you're not using git, quilt <http://savannah.nongnu.org/projects/quilt>
+If you're using ``git``, ``git rebase -i`` can help you with this process. If
+you're not using ``git``, ``quilt`` <http://savannah.nongnu.org/projects/quilt>
is another popular alternative.
+.. _describe_changes:
-
-2) Describe your changes.
--------------------------
+2) Describe your changes
+------------------------
Describe your problem. Whether your patch is a one-line bug fix or
5000 lines of a new feature, there must be an underlying problem that
@@ -137,11 +137,11 @@
The maintainer will thank you if you write your patch description in a
form which can be easily pulled into Linux's source code management
-system, git, as a "commit log". See #15, below.
+system, ``git``, as a "commit log". See :ref:`explicit_in_reply_to`.
Solve only one problem per patch. If your description starts to get
long, that's a sign that you probably need to split up your patch.
-See #3, next.
+See :ref:`split_changes`.
When you submit or resubmit a patch or patch series, include the
complete patch description and justification for it. Don't just
@@ -160,7 +160,7 @@
If the patch fixes a logged bug entry, refer to that bug entry by
number and URL. If the patch follows from a mailing list discussion,
give a URL to the mailing list archive; use the https://lkml.kernel.org/
-redirector with a Message-Id, to ensure that the links cannot become
+redirector with a ``Message-Id``, to ensure that the links cannot become
stale.
However, try to make your explanation understandable without external
@@ -171,7 +171,7 @@
If you want to refer to a specific commit, don't just refer to the
SHA-1 ID of the commit. Please also include the oneline summary of
the commit, to make it easier for reviewers to know what it is about.
-Example:
+Example::
Commit e21d2170f36602ae2708 ("video: remove unnecessary
platform_set_drvdata()") removed the unnecessary
@@ -185,23 +185,25 @@
change five years from now.
If your patch fixes a bug in a specific commit, e.g. you found an issue using
-git-bisect, please use the 'Fixes:' tag with the first 12 characters of the
-SHA-1 ID, and the one line summary. For example:
+``git bisect``, please use the 'Fixes:' tag with the first 12 characters of
+the SHA-1 ID, and the one line summary. For example::
Fixes: e21d2170f366 ("video: remove unnecessary platform_set_drvdata()")
-The following git-config settings can be used to add a pretty format for
-outputting the above style in the git log or git show commands
+The following ``git config`` settings can be used to add a pretty format for
+outputting the above style in the ``git log`` or ``git show`` commands::
[core]
abbrev = 12
[pretty]
fixes = Fixes: %h (\"%s\")
-3) Separate your changes.
--------------------------
+.. _split_changes:
-Separate each _logical change_ into a separate patch.
+3) Separate your changes
+------------------------
+
+Separate each **logical change** into a separate patch.
For example, if your changes include both bug fixes and performance
enhancements for a single driver, separate those changes into two
@@ -217,12 +219,12 @@
on its own merits.
If one patch depends on another patch in order for a change to be
-complete, that is OK. Simply note "this patch depends on patch X"
+complete, that is OK. Simply note **"this patch depends on patch X"**
in your patch description.
When dividing your change into a series of patches, take special care to
ensure that the kernel builds and runs properly after each patch in the
-series. Developers using "git bisect" to track down a problem can end up
+series. Developers using ``git bisect`` to track down a problem can end up
splitting your patch series at any point; they will not thank you if you
introduce bugs in the middle.
@@ -231,11 +233,13 @@
-4) Style-check your changes.
-----------------------------
+4) Style-check your changes
+---------------------------
Check your patch for basic style violations, details of which can be
-found in Documentation/CodingStyle. Failure to do so simply wastes
+found in
+:ref:`Documentation/CodingStyle <codingstyle>`.
+Failure to do so simply wastes
the reviewers time and will get your patch rejected, probably
without even being read.
@@ -260,8 +264,8 @@
patch.
-5) Select the recipients for your patch.
-----------------------------------------
+5) Select the recipients for your patch
+---------------------------------------
You should always copy the appropriate subsystem maintainer(s) on any patch
to code that they maintain; look through the MAINTAINERS file and the
@@ -295,13 +299,14 @@
obviously, the patch should not be sent to any public lists.
Patches that fix a severe bug in a released kernel should be directed
-toward the stable maintainers by putting a line like this:
+toward the stable maintainers by putting a line like this::
Cc: stable@vger.kernel.org
into the sign-off area of your patch (note, NOT an email recipient). You
-should also read Documentation/stable_kernel_rules.txt in addition to this
-file.
+should also read
+:ref:`Documentation/stable_kernel_rules.txt <stable_kernel_rules>`
+in addition to this file.
Note, however, that some subsystem maintainers want to come to their own
conclusions on which patches should go to the stable trees. The networking
@@ -312,28 +317,30 @@
maintainer (as listed in the MAINTAINERS file) a man-pages patch, or at
least a notification of the change, so that some information makes its way
into the manual pages. User-space API changes should also be copied to
-linux-api@vger.kernel.org.
+linux-api@vger.kernel.org.
For small patches you may want to CC the Trivial Patch Monkey
trivial@kernel.org which collects "trivial" patches. Have a look
into the MAINTAINERS file for its current manager.
+
Trivial patches must qualify for one of the following rules:
- Spelling fixes in documentation
- Spelling fixes for errors which could break grep(1)
- Warning fixes (cluttering with useless warnings is bad)
- Compilation fixes (only if they are actually correct)
- Runtime fixes (only if they actually fix things)
- Removing use of deprecated functions/macros
- Contact detail and documentation fixes
- Non-portable code replaced by portable code (even in arch-specific,
- since people copy, as long as it's trivial)
- Any fix by the author/maintainer of the file (ie. patch monkey
- in re-transmission mode)
+
+- Spelling fixes in documentation
+- Spelling fixes for errors which could break :manpage:`grep(1)`
+- Warning fixes (cluttering with useless warnings is bad)
+- Compilation fixes (only if they are actually correct)
+- Runtime fixes (only if they actually fix things)
+- Removing use of deprecated functions/macros
+- Contact detail and documentation fixes
+- Non-portable code replaced by portable code (even in arch-specific,
+ since people copy, as long as it's trivial)
+- Any fix by the author/maintainer of the file (ie. patch monkey
+ in re-transmission mode)
-6) No MIME, no links, no compression, no attachments. Just plain text.
------------------------------------------------------------------------
+6) No MIME, no links, no compression, no attachments. Just plain text
+----------------------------------------------------------------------
Linus and other kernel developers need to be able to read and comment
on the changes you are submitting. It is important for a kernel
@@ -341,8 +348,11 @@
tools, so that they may comment on specific portions of your code.
For this reason, all patches should be submitted by e-mail "inline".
-WARNING: Be wary of your editor's word-wrap corrupting your patch,
-if you choose to cut-n-paste your patch.
+
+.. warning::
+
+ Be wary of your editor's word-wrap corrupting your patch,
+ if you choose to cut-n-paste your patch.
Do not attach the patch as a MIME attachment, compressed or not.
Many popular e-mail applications will not always transmit a MIME
@@ -353,11 +363,12 @@
Exception: If your mailer is mangling patches then someone may ask
you to re-send them using MIME.
-See Documentation/email-clients.txt for hints about configuring
-your e-mail client so that it sends your patches untouched.
+See :ref:`Documentation/email-clients.txt <email_clients>`
+for hints about configuring your e-mail client so that it sends your patches
+untouched.
-7) E-mail size.
----------------
+7) E-mail size
+--------------
Large changes are not appropriate for mailing lists, and some
maintainers. If your patch, uncompressed, exceeds 300 kB in size,
@@ -366,8 +377,8 @@
that if your patch exceeds 300 kB, it almost certainly needs to be broken up
anyway.
-8) Respond to review comments.
-------------------------------
+8) Respond to review comments
+-----------------------------
Your patch will almost certainly get comments from reviewers on ways in
which the patch can be improved. You must respond to those comments;
@@ -382,8 +393,8 @@
politely and address the problems they have pointed out.
-9) Don't get discouraged - or impatient.
-----------------------------------------
+9) Don't get discouraged - or impatient
+---------------------------------------
After you have submitted your change, be patient and wait. Reviewers are
busy people and may not get to your patch right away.
@@ -419,9 +430,10 @@
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below:
- Developer's Certificate of Origin 1.1
+Developer's Certificate of Origin 1.1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- By making a contribution to this project, I certify that:
+By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
@@ -445,7 +457,7 @@
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
-then you just add a line saying
+then you just add a line saying::
Signed-off-by: Random J Developer <random@developer.example.org>
@@ -466,7 +478,7 @@
the nature of your changes. While there is nothing mandatory about this, it
seems like prepending the description with your mail and/or name, all
enclosed in square brackets, is noticeable enough to make it obvious that
-you are responsible for last-minute changes. Example :
+you are responsible for last-minute changes. Example::
Signed-off-by: Random J Developer <random@developer.example.org>
[lucky@maintainer.example.org: struct foo moved from foo.c to foo.h]
@@ -481,15 +493,15 @@
Special note to back-porters: It seems to be a common and useful practice
to insert an indication of the origin of a patch at the top of the commit
message (just after the subject line) to facilitate tracking. For instance,
-here's what we see in a 3.x-stable release:
+here's what we see in a 3.x-stable release::
-Date: Tue Oct 7 07:26:38 2014 -0400
+ Date: Tue Oct 7 07:26:38 2014 -0400
libata: Un-break ATA blacklist
commit 1c40279960bcd7d52dbdf1d466b20d24b99176c8 upstream.
-And here's what might appear in an older kernel once a patch is backported:
+And here's what might appear in an older kernel once a patch is backported::
Date: Tue May 13 22:12:27 2008 +0200
@@ -529,7 +541,7 @@
list archives.
If a person has had the opportunity to comment on a patch, but has not
-provided such comments, you may optionally add a "Cc:" tag to the patch.
+provided such comments, you may optionally add a ``Cc:`` tag to the patch.
This is the only tag which might be added without an explicit action by the
person it names - but it should indicate that this person was copied on the
patch. This tag documents that potentially interested parties
@@ -552,11 +564,12 @@
Reviewed-by:, instead, indicates that the patch has been reviewed and found
acceptable according to the Reviewer's Statement:
- Reviewer's statement of oversight
+Reviewer's statement of oversight
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- By offering my Reviewed-by: tag, I state that:
+By offering my Reviewed-by: tag, I state that:
- (a) I have carried out a technical review of this patch to
+ (a) I have carried out a technical review of this patch to
evaluate its appropriateness and readiness for inclusion into
the mainline kernel.
@@ -594,24 +607,25 @@
is used to make it easy to determine where a bug originated, which can help
review a bug fix. This tag also assists the stable kernel team in determining
which stable kernel versions should receive your fix. This is the preferred
-method for indicating a bug fixed by the patch. See #2 above for more details.
+method for indicating a bug fixed by the patch. See :ref:`describe_changes`
+for more details.
14) The canonical patch format
------------------------------
This section describes how the patch itself should be formatted. Note
-that, if you have your patches stored in a git repository, proper patch
-formatting can be had with "git format-patch". The tools cannot create
+that, if you have your patches stored in a ``git`` repository, proper patch
+formatting can be had with ``git format-patch``. The tools cannot create
the necessary text, though, so read the instructions below anyway.
-The canonical patch subject line is:
+The canonical patch subject line is::
Subject: [PATCH 001/123] subsystem: summary phrase
The canonical patch message body contains the following:
- - A "from" line specifying the patch author (only needed if the person
+ - A ``from`` line specifying the patch author (only needed if the person
sending the patch is not the author).
- An empty line.
@@ -619,46 +633,46 @@
- The body of the explanation, line wrapped at 75 columns, which will
be copied to the permanent changelog to describe this patch.
- - The "Signed-off-by:" lines, described above, which will
+ - The ``Signed-off-by:`` lines, described above, which will
also go in the changelog.
- - A marker line containing simply "---".
+ - A marker line containing simply ``---``.
- Any additional comments not suitable for the changelog.
- - The actual patch (diff output).
+ - The actual patch (``diff`` output).
The Subject line format makes it very easy to sort the emails
alphabetically by subject line - pretty much any email reader will
support that - since because the sequence number is zero-padded,
the numerical and alphabetic sort is the same.
-The "subsystem" in the email's Subject should identify which
+The ``subsystem`` in the email's Subject should identify which
area or subsystem of the kernel is being patched.
-The "summary phrase" in the email's Subject should concisely
-describe the patch which that email contains. The "summary
-phrase" should not be a filename. Do not use the same "summary
-phrase" for every patch in a whole patch series (where a "patch
-series" is an ordered sequence of multiple, related patches).
+The ``summary phrase`` in the email's Subject should concisely
+describe the patch which that email contains. The ``summary
+phrase`` should not be a filename. Do not use the same ``summary
+phrase`` for every patch in a whole patch series (where a ``patch
+series`` is an ordered sequence of multiple, related patches).
-Bear in mind that the "summary phrase" of your email becomes a
+Bear in mind that the ``summary phrase`` of your email becomes a
globally-unique identifier for that patch. It propagates all the way
-into the git changelog. The "summary phrase" may later be used in
+into the ``git`` changelog. The ``summary phrase`` may later be used in
developer discussions which refer to the patch. People will want to
-google for the "summary phrase" to read discussion regarding that
+google for the ``summary phrase`` to read discussion regarding that
patch. It will also be the only thing that people may quickly see
when, two or three months later, they are going through perhaps
-thousands of patches using tools such as "gitk" or "git log
---oneline".
+thousands of patches using tools such as ``gitk`` or ``git log
+--oneline``.
-For these reasons, the "summary" must be no more than 70-75
+For these reasons, the ``summary`` must be no more than 70-75
characters, and it must describe both what the patch changes, as well
as why the patch might be necessary. It is challenging to be both
succinct and descriptive, but that is what a well-written summary
should do.
-The "summary phrase" may be prefixed by tags enclosed in square
+The ``summary phrase`` may be prefixed by tags enclosed in square
brackets: "Subject: [PATCH <tag>...] <summary phrase>". The tags are
not considered part of the summary phrase, but describe how the patch
should be treated. Common tags might include a version descriptor if
@@ -670,19 +684,19 @@
applied and that they have reviewed or applied all of the patches in
the patch series.
-A couple of example Subjects:
+A couple of example Subjects::
Subject: [PATCH 2/5] ext2: improve scalability of bitmap searching
Subject: [PATCH v2 01/27] x86: fix eflags tracking
-The "from" line must be the very first line in the message body,
+The ``from`` line must be the very first line in the message body,
and has the form:
From: Original Author <author@example.com>
-The "from" line specifies who will be credited as the author of the
-patch in the permanent changelog. If the "from" line is missing,
-then the "From:" line from the email header will be used to determine
+The ``from`` line specifies who will be credited as the author of the
+patch in the permanent changelog. If the ``from`` line is missing,
+then the ``From:`` line from the email header will be used to determine
the patch author in the changelog.
The explanation body will be committed to the permanent source
@@ -694,35 +708,37 @@
looking for the applicable patch. If a patch fixes a compile failure,
it may not be necessary to include _all_ of the compile failures; just
enough that it is likely that someone searching for the patch can find
-it. As in the "summary phrase", it is important to be both succinct as
+it. As in the ``summary phrase``, it is important to be both succinct as
well as descriptive.
-The "---" marker line serves the essential purpose of marking for patch
+The ``---`` marker line serves the essential purpose of marking for patch
handling tools where the changelog message ends.
-One good use for the additional comments after the "---" marker is for
-a diffstat, to show what files have changed, and the number of
-inserted and deleted lines per file. A diffstat is especially useful
+One good use for the additional comments after the ``---`` marker is for
+a ``diffstat``, to show what files have changed, and the number of
+inserted and deleted lines per file. A ``diffstat`` is especially useful
on bigger patches. Other comments relevant only to the moment or the
maintainer, not suitable for the permanent changelog, should also go
-here. A good example of such comments might be "patch changelogs"
+here. A good example of such comments might be ``patch changelogs``
which describe what has changed between the v1 and v2 version of the
patch.
-If you are going to include a diffstat after the "---" marker, please
-use diffstat options "-p 1 -w 70" so that filenames are listed from
+If you are going to include a ``diffstat`` after the ``---`` marker, please
+use ``diffstat`` options ``-p 1 -w 70`` so that filenames are listed from
the top of the kernel source tree and don't use too much horizontal
-space (easily fit in 80 columns, maybe with some indentation). (git
+space (easily fit in 80 columns, maybe with some indentation). (``git``
generates appropriate diffstats by default.)
See more details on the proper patch format in the following
references.
+.. _explicit_in_reply_to:
+
15) Explicit In-Reply-To headers
--------------------------------
It can be helpful to manually add In-Reply-To: headers to a patch
-(e.g., when using "git send-email") to associate the patch with
+(e.g., when using ``git send-email``) to associate the patch with
previous relevant discussion, e.g. to link a bug fix to the email with
the bug report. However, for a multi-patch series, it is generally
best to avoid using In-Reply-To: to link to older versions of the
@@ -732,12 +748,12 @@
the cover email text) to link to an earlier version of the patch series.
-16) Sending "git pull" requests
--------------------------------
+16) Sending ``git pull`` requests
+---------------------------------
If you have a series of patches, it may be most convenient to have the
maintainer pull them directly into the subsystem repository with a
-"git pull" operation. Note, however, that pulling patches from a developer
+``git pull`` operation. Note, however, that pulling patches from a developer
requires a higher degree of trust than taking patches from a mailing list.
As a result, many subsystem maintainers are reluctant to take pull
requests, especially from new, unknown developers. If in doubt you can use
@@ -746,7 +762,7 @@
A pull request should have [GIT] or [PULL] in the subject line. The
request itself should include the repository name and the branch of
-interest on a single line; it should look something like:
+interest on a single line; it should look something like::
Please pull from
@@ -755,10 +771,10 @@
to get these changes:
A pull request should also include an overall message saying what will be
-included in the request, a "git shortlog" listing of the patches
-themselves, and a diffstat showing the overall effect of the patch series.
+included in the request, a ``git shortlog`` listing of the patches
+themselves, and a ``diffstat`` showing the overall effect of the patch series.
The easiest way to get all this information together is, of course, to let
-git do it for you with the "git request-pull" command.
+``git`` do it for you with the ``git request-pull`` command.
Some maintainers (including Linus) want to see pull requests from signed
commits; that increases their confidence that the request actually came
@@ -770,8 +786,8 @@
new developers, but there is no way around it. Attending conferences can
be a good way to find developers who can sign your key.
-Once you have prepared a patch series in git that you wish to have somebody
-pull, create a signed tag with "git tag -s". This will create a new tag
+Once you have prepared a patch series in ``git`` that you wish to have somebody
+pull, create a signed tag with ``git tag -s``. This will create a new tag
identifying the last commit in the series and containing a signature
created with your private key. You will also have the opportunity to add a
changelog-style message to the tag; this is an ideal place to describe the
@@ -782,14 +798,13 @@
public tree.
When generating your pull request, use the signed tag as the target. A
-command like this will do the trick:
+command like this will do the trick::
git request-pull master git://my.public.tree/linux.git my-signed-tag
-----------------------
-SECTION 2 - REFERENCES
-----------------------
+REFERENCES
+**********
Andrew Morton, "The perfect patch" (tpp).
<http://www.ozlabs.org/~akpm/stuff/tpp.txt>
@@ -799,23 +814,28 @@
Greg Kroah-Hartman, "How to piss off a kernel subsystem maintainer".
<http://www.kroah.com/log/linux/maintainer.html>
+
<http://www.kroah.com/log/linux/maintainer-02.html>
+
<http://www.kroah.com/log/linux/maintainer-03.html>
+
<http://www.kroah.com/log/linux/maintainer-04.html>
+
<http://www.kroah.com/log/linux/maintainer-05.html>
+
<http://www.kroah.com/log/linux/maintainer-06.html>
NO!!!! No more huge patch bombs to linux-kernel@vger.kernel.org people!
<https://lkml.org/lkml/2005/7/11/336>
Kernel Documentation/CodingStyle:
- <Documentation/CodingStyle>
+ :ref:`Documentation/CodingStyle <codingstyle>`
Linus Torvalds's mail on the canonical patch format:
<http://lkml.org/lkml/2005/4/7/183>
Andi Kleen, "On submitting kernel patches"
Some strategies to get difficult or controversial changes in.
+
http://halobates.de/on-submitting-patches.pdf
---
diff --git a/Documentation/accounting/Makefile b/Documentation/accounting/Makefile
deleted file mode 100644
index 7e232cb..0000000
--- a/Documentation/accounting/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# List of programs to build
-hostprogs-y := getdelays
-
-# Tell kbuild to always build the programs
-always := $(hostprogs-y)
-
-HOSTCFLAGS_getdelays.o += -I$(objtree)/usr/include
diff --git a/Documentation/accounting/delay-accounting.txt b/Documentation/accounting/delay-accounting.txt
index 8a12f07..042ea59 100644
--- a/Documentation/accounting/delay-accounting.txt
+++ b/Documentation/accounting/delay-accounting.txt
@@ -54,9 +54,9 @@
task of a thread group, the per-tgid statistics are also sent. More details
are given in the taskstats interface description.
-The getdelays.c userspace utility in this directory allows simple commands to
-be run and the corresponding delay statistics to be displayed. It also serves
-as an example of using the taskstats interface.
+The getdelays.c userspace utility in the tools/accounting directory allows
+simple commands to be run and the corresponding delay statistics to be
+displayed. It also serves as an example of using the taskstats interface.
Usage
-----
diff --git a/Documentation/acpi/acpi-lid.txt b/Documentation/acpi/acpi-lid.txt
new file mode 100644
index 0000000..effe7af
--- /dev/null
+++ b/Documentation/acpi/acpi-lid.txt
@@ -0,0 +1,96 @@
+Special Usage Model of the ACPI Control Method Lid Device
+
+Copyright (C) 2016, Intel Corporation
+Author: Lv Zheng <lv.zheng@intel.com>
+
+
+Abstract:
+
+Platforms containing lids convey lid state (open/close) to OSPMs using a
+control method lid device. To implement this, the AML tables issue
+Notify(lid_device, 0x80) to notify the OSPMs whenever the lid state has
+changed. The _LID control method for the lid device must be implemented to
+report the "current" state of the lid as either "opened" or "closed".
+
+For most platforms, both the _LID method and the lid notifications are
+reliable. However, there are exceptions. In order to work with these
+exceptional buggy platforms, special restrictions and expectations should be
+taken into account. This document describes the restrictions and the
+expectations of the Linux ACPI lid device driver.
+
+
+1. Restrictions of the returning value of the _LID control method
+
+The _LID control method is described to return the "current" lid state.
+However, the word "current" is ambiguous; some buggy AML tables return
+the lid state upon the last lid notification instead of returning the lid
+state upon the last _LID evaluation. There is no difference when the
+_LID control method is evaluated during runtime; the problem is its
+initial return value. When the AML tables implement this control method
+with a cached value, the initial return value is likely not reliable.
+There are platforms that always return "closed" as the initial lid state.
+
+2. Restrictions of the lid state change notifications
+
+Some buggy AML tables never notify when the lid device state is
+changed to "opened". Thus the "opened" notification is not guaranteed. But
+it is guaranteed that the AML tables always notify "closed" when the lid
+state is changed to "closed". The "closed" notification is normally used to
+trigger some system power saving operations on Windows. Since it is fully
+tested, it is reliable from all AML tables.
+
+3. Expectations for the userspace users of the ACPI lid device driver
+
+The ACPI button driver exports the lid state to the userspace via the
+following file:
+ /proc/acpi/button/lid/LID0/state
+This file actually calls the _LID control method described above. Given
+the previous explanation, it is not reliable enough on some platforms, so
+userspace programs are advised not to rely solely on this file
+to determine the actual lid state.
+
+The ACPI button driver emits the following input event to the userspace:
+ SW_LID
+The ACPI lid device driver is implemented to try to deliver the platform
+triggered events to the userspace. However, given the fact that the buggy
+firmware cannot make sure "opened"/"closed" events are paired, the ACPI
+button driver uses the following 3 modes in order not to trigger issues.
+
+If the userspace hasn't been prepared to ignore the unreliable "opened"
+events and the unreliable initial state notification, Linux users can use
+the following kernel parameters to handle the possible issues:
+A. button.lid_init_state=method:
+ When this option is specified, the ACPI button driver reports the
+ initial lid state using the returning value of the _LID control method
+ and whether the "opened"/"closed" events are paired fully relies on the
+ firmware implementation.
+ This option can be used to fix some platforms where the returning value
+ of the _LID control method is reliable but the initial lid state
+ notification is missing.
+ This option is the default behavior during the period the userspace
+ isn't ready to handle the buggy AML tables.
+B. button.lid_init_state=open:
+   When this option is specified, the ACPI button driver always reports
+   the initial lid state as "opened", and whether the "opened"/"closed"
+   events are paired depends entirely on the firmware implementation.
+   This may fix some platforms where the return value of the _LID control
+   method is not reliable and the initial lid state notification is
+   missing.
+
+If userspace is prepared to ignore the unreliable "opened" events and the
+unreliable initial state notification, Linux users should always use the
+following kernel parameter:
+C. button.lid_init_state=ignore:
+   When this option is specified, the ACPI button driver never reports
+   the initial lid state, and a compensation mechanism ensures that the
+   reliable "closed" notifications can always be delivered to userspace
+   by always pairing "closed" input events with complementary "opened"
+   input events. But there is still no guarantee that the "opened"
+   notifications can be delivered to userspace when the lid actually
+   opens, given that some AML tables do not send "opened" notifications
+   reliably.
+   In this mode, if everything is correctly implemented by the platform
+   firmware, old userspace programs should still work. Otherwise, new
+   userspace programs are required to work with the ACPI button driver.
+   This option will be the default behavior once userspace is ready to
+   handle the buggy AML tables.
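+
+As a rough sketch of how a userspace program might consume the SW_LID
+input events described above (the event device node is only an example
+default; real programs should discover the lid switch device, e.g. via
+udev, rather than hard-coding it):
+
+  #include <fcntl.h>
+  #include <stdio.h>
+  #include <unistd.h>
+  #include <linux/input.h>
+
+  int main(int argc, char **argv)
+  {
+      struct input_event ev;
+      const char *node = argc > 1 ? argv[1] : "/dev/input/event0";
+      int fd = open(node, O_RDONLY);
+
+      if (fd < 0)
+          return 1;
+      while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
+          if (ev.type == EV_SW && ev.code == SW_LID)
+              printf("lid %s\n", ev.value ? "closed" : "opened");
+      }
+      close(fd);
+      return 0;
+  }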
diff --git a/Documentation/acpi/gpio-properties.txt b/Documentation/acpi/gpio-properties.txt
index f35dad1..5aafe0b3 100644
--- a/Documentation/acpi/gpio-properties.txt
+++ b/Documentation/acpi/gpio-properties.txt
@@ -28,8 +28,8 @@
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package ()
{
- Package () {"reset-gpio", Package() {^BTH, 1, 1, 0 }},
- Package () {"shutdown-gpio", Package() {^BTH, 0, 0, 0 }},
+ Package () {"reset-gpios", Package() {^BTH, 1, 1, 0 }},
+ Package () {"shutdown-gpios", Package() {^BTH, 0, 0, 0 }},
}
})
}
@@ -48,7 +48,7 @@
active low or high, the "active_low" argument can be used here. Setting
it to 1 marks the GPIO as active low.
-In our Bluetooth example the "reset-gpio" refers to the second GpioIo()
+In our Bluetooth example the "reset-gpios" refers to the second GpioIo()
resource, second pin in that resource with the GPIO number of 31.
ACPI GPIO Mappings Provided by Drivers
@@ -83,8 +83,8 @@
static const struct acpi_gpio_params shutdown_gpio = { 0, 0, false };
static const struct acpi_gpio_mapping bluetooth_acpi_gpios[] = {
- { "reset-gpio", &reset_gpio, 1 },
- { "shutdown-gpio", &shutdown_gpio, 1 },
+ { "reset-gpios", &reset_gpio, 1 },
+ { "shutdown-gpios", &shutdown_gpio, 1 },
{ },
};
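+
+As a rough sketch (the probe function below is hypothetical), a driver
+could register the mapping table above and then look the GPIO up by its
+connection ID, which resolves to the "reset-gpios" property:
+
+  #include <linux/acpi.h>
+  #include <linux/err.h>
+  #include <linux/gpio/consumer.h>
+  #include <linux/platform_device.h>
+
+  static int bt_probe(struct platform_device *pdev)
+  {
+      struct gpio_desc *reset;
+      int ret;
+
+      /* attach the mapping table to the ACPI companion device */
+      ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(&pdev->dev),
+                                      bluetooth_acpi_gpios);
+      if (ret)
+          return ret;
+
+      /* "reset" resolves to the "reset-gpios" mapping above */
+      reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_LOW);
+      if (IS_ERR(reset))
+          return PTR_ERR(reset);
+
+      gpiod_set_value_cansleep(reset, 1);
+      return 0;
+  }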
diff --git a/Documentation/applying-patches.txt b/Documentation/applying-patches.txt
index 77df55b..02ce492 100644
--- a/Documentation/applying-patches.txt
+++ b/Documentation/applying-patches.txt
@@ -1,9 +1,13 @@
+.. _applying_patches:
- Applying Patches To The Linux Kernel
- ------------------------------------
+Applying Patches To The Linux Kernel
+++++++++++++++++++++++++++++++++++++
- Original by: Jesper Juhl, August 2005
- Last update: 2006-01-05
+Original by:
+ Jesper Juhl, August 2005
+
+Last update:
+ 2016-09-14
A frequently asked question on the Linux Kernel Mailing List is how to apply
@@ -17,10 +21,12 @@
What is a patch?
----
- A patch is a small text document containing a delta of changes between two
-different versions of a source tree. Patches are created with the `diff'
+================
+
+A patch is a small text document containing a delta of changes between two
+different versions of a source tree. Patches are created with the ``diff``
program.
+
To correctly apply a patch you need to know what base it was generated from
and what new version the patch will change the source tree into. These
should both be present in the patch file metadata or be possible to deduce
@@ -28,8 +34,9 @@
How do I apply or revert a patch?
----
- You apply a patch with the `patch' program. The patch program reads a diff
+=================================
+
+You apply a patch with the ``patch`` program. The patch program reads a diff
(or patch) file and makes the changes to the source tree described in it.
Patches for the Linux kernel are generated relative to the parent directory
@@ -38,26 +45,33 @@
This means that paths to files inside the patch file contain the name of the
kernel source directories it was generated against (or some other directory
names like "a/" and "b/").
+
Since this is unlikely to match the name of the kernel source dir on your
local machine (but is often useful info to see what version an otherwise
unlabeled patch was generated against) you should change into your kernel
source directory and then strip the first element of the path from filenames
-in the patch file when applying it (the -p1 argument to `patch' does this).
+in the patch file when applying it (the ``-p1`` argument to ``patch`` does
+this).
To revert a previously applied patch, use the -R argument to patch.
-So, if you applied a patch like this:
+So, if you applied a patch like this::
+
patch -p1 < ../patch-x.y.z
-You can revert (undo) it like this:
+You can revert (undo) it like this::
+
patch -R -p1 < ../patch-x.y.z
-How do I feed a patch/diff file to `patch'?
----
- This (as usual with Linux and other UNIX like operating systems) can be
+How do I feed a patch/diff file to ``patch``?
+=============================================
+
+This (as usual with Linux and other UNIX like operating systems) can be
done in several different ways.
+
In all the examples below I feed the file (in uncompressed form) to patch
-via stdin using the following syntax:
+via stdin using the following syntax::
+
patch -p1 < path/to/patch-x.y.z
If you just want to be able to follow the examples below and don't want to
@@ -65,35 +79,40 @@
section here.
Patch can also get the name of the file to use via the -i argument, like
-this:
+this::
+
patch -p1 -i path/to/patch-x.y.z
-If your patch file is compressed with gzip or bzip2 and you don't want to
+If your patch file is compressed with gzip or xz and you don't want to
uncompress it before applying it, then you can feed it to patch like this
-instead:
- zcat path/to/patch-x.y.z.gz | patch -p1
- bzcat path/to/patch-x.y.z.bz2 | patch -p1
+instead::
+
+ xzcat path/to/patch-x.y.z.xz | patch -p1
+ zcat path/to/patch-x.y.z.gz | patch -p1
If you wish to uncompress the patch file by hand first before applying it
(what I assume you've done in the examples below), then you simply run
-gunzip or bunzip2 on the file -- like this:
+gunzip or xz -d on the file -- like this::
+
gunzip patch-x.y.z.gz
- bunzip2 patch-x.y.z.bz2
+ xz -d patch-x.y.z.xz
Which will leave you with a plain text patch-x.y.z file that you can feed to
-patch via stdin or the -i argument, as you prefer.
+patch via stdin or the ``-i`` argument, as you prefer.
-A few other nice arguments for patch are -s which causes patch to be silent
+A few other nice arguments for patch are ``-s`` which causes patch to be silent
except for errors which is nice to prevent errors from scrolling out of the
-screen too fast, and --dry-run which causes patch to just print a listing of
-what would happen, but doesn't actually make any changes. Finally --verbose
+screen too fast, and ``--dry-run`` which causes patch to just print a listing of
+what would happen, but doesn't actually make any changes. Finally ``--verbose``
tells patch to print more information about the work being done.
Common errors when patching
----
- When patch applies a patch file it attempts to verify the sanity of the
+===========================
+
+When patch applies a patch file it attempts to verify the sanity of the
file in different ways.
+
Checking that the file looks like a valid patch file and checking the code
around the bits being modified matches the context provided in the patch are
just two of the basic sanity checks patch does.
@@ -111,13 +130,13 @@
usually adjust the line numbers and apply the patch.
Whenever patch applies a patch that it had to modify a bit to make it fit
-it'll tell you about it by saying the patch applied with 'fuzz'.
+it'll tell you about it by saying the patch applied with **fuzz**.
You should be wary of such changes since even though patch probably got it
right it doesn't /always/ get it right, and the result will sometimes be
wrong.
When patch encounters a change that it can't fix up with fuzz it rejects it
-outright and leaves a file with a .rej extension (a reject file). You can
+outright and leaves a file with a ``.rej`` extension (a reject file). You can
read this file to see exactly what change couldn't be applied, so you can
go fix it up by hand if you wish.
@@ -132,43 +151,47 @@
Let's look a bit more at some of the messages patch can produce.
-If patch stops and presents a "File to patch:" prompt, then patch could not
+If patch stops and presents a ``File to patch:`` prompt, then patch could not
find a file to be patched. Most likely you forgot to specify -p1 or you are
in the wrong directory. Less often, you'll find patches that need to be
-applied with -p0 instead of -p1 (reading the patch file should reveal if
+applied with ``-p0`` instead of ``-p1`` (reading the patch file should reveal if
this is the case -- if so, then this is an error by the person who created
the patch but is not fatal).
-If you get "Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines)." or a
+If you get ``Hunk #2 succeeded at 1887 with fuzz 2 (offset 7 lines).`` or a
message similar to that, then it means that patch had to adjust the location
of the change (in this example it needed to move 7 lines from where it
expected to make the change to make it fit).
+
The resulting file may or may not be OK, depending on the reason the file
was different than expected.
+
This often happens if you try to apply a patch that was generated against a
different kernel version than the one you are trying to patch.
-If you get a message like "Hunk #3 FAILED at 2387.", then it means that the
+If you get a message like ``Hunk #3 FAILED at 2387.``, then it means that the
patch could not be applied correctly and the patch program was unable to
-fuzz its way through. This will generate a .rej file with the change that
-caused the patch to fail and also a .orig file showing you the original
+fuzz its way through. This will generate a ``.rej`` file with the change that
+caused the patch to fail and also a ``.orig`` file showing you the original
content that couldn't be changed.
-If you get "Reversed (or previously applied) patch detected! Assume -R? [n]"
+If you get ``Reversed (or previously applied) patch detected! Assume -R? [n]``
then patch detected that the change contained in the patch seems to have
already been made.
+
If you actually did apply this patch previously and you just re-applied it
in error, then just say [n]o and abort this patch. If you applied this patch
previously and actually intended to revert it, but forgot to specify -R,
-then you can say [y]es here to make patch revert it for you.
+then you can say [**y**]es here to make patch revert it for you.
+
This can also happen if the creator of the patch reversed the source and
destination directories when creating the patch, and in that case reverting
the patch will in fact apply it.
-A message similar to "patch: **** unexpected end of file in patch" or "patch
-unexpectedly ends in middle of line" means that patch could make no sense of
-the file you fed to it. Either your download is broken, you tried to feed
-patch a compressed patch file without uncompressing it first, or the patch
+A message similar to ``patch: **** unexpected end of file in patch`` or
+``patch unexpectedly ends in middle of line`` means that patch could make no
+sense of the file you fed to it. Either your download is broken, you tried to
+feed patch a compressed patch file without uncompressing it first, or the patch
file that you are using has been mangled by a mail client or mail transfer
agent along the way somewhere, e.g., by splitting a long line into two lines.
Often these warnings can easily be fixed by joining (concatenating) the
@@ -182,28 +205,32 @@
wish to apply.
-Are there any alternatives to `patch'?
----
- Yes there are alternatives.
+Are there any alternatives to ``patch``?
+========================================
- You can use the `interdiff' program (http://cyberelk.net/tim/patchutils/) to
+
+Yes there are alternatives.
+
+You can use the ``interdiff`` program (http://cyberelk.net/tim/patchutils/) to
generate a patch representing the differences between two patches and then
apply the result.
-This will let you move from something like 2.6.12.2 to 2.6.12.3 in a single
+
+This will let you move from something like 4.7.2 to 4.7.3 in a single
step. The -z flag to interdiff will even let you feed it patches in gzip or
bzip2 compressed form directly without the use of zcat or bzcat or manual
decompression.
-Here's how you'd go from 2.6.12.2 to 2.6.12.3 in a single step:
- interdiff -z ../patch-2.6.12.2.bz2 ../patch-2.6.12.3.gz | patch -p1
+Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
+
+ interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
Although interdiff may save you a step or two you are generally advised to
do the additional steps since interdiff can get things wrong in some cases.
- Another alternative is `ketchup', which is a python script for automatic
+Another alternative is ``ketchup``, which is a python script for automatic
downloading and applying of patches (http://www.selenic.com/ketchup/).
- Other nice tools are diffstat, which shows a summary of changes made by a
+Other nice tools are diffstat, which shows a summary of changes made by a
patch; lsdiff, which displays a short listing of affected files in a patch
file, along with (optionally) the line numbers of the start of each patch;
and grepdiff, which displays a list of the files modified by a patch where
@@ -211,99 +238,103 @@
Where can I download the patches?
----
- The patches are available at http://kernel.org/
+=================================
+
+The patches are available at http://kernel.org/
Most recent patches are linked from the front page, but they also have
specific homes.
-The 2.6.x.y (-stable) and 2.6.x patches live at
- ftp://ftp.kernel.org/pub/linux/kernel/v2.6/
+The 4.x.y (-stable) and 4.x patches live at
+
+ ftp://ftp.kernel.org/pub/linux/kernel/v4.x/
The -rc patches live at
- ftp://ftp.kernel.org/pub/linux/kernel/v2.6/testing/
-The -git patches live at
- ftp://ftp.kernel.org/pub/linux/kernel/v2.6/snapshots/
+ ftp://ftp.kernel.org/pub/linux/kernel/v4.x/testing/
-The -mm kernels live at
- ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/
-
-In place of ftp.kernel.org you can use ftp.cc.kernel.org, where cc is a
+In place of ``ftp.kernel.org`` you can use ``ftp.cc.kernel.org``, where cc is a
country code. This way you'll be downloading from a mirror site that's most
likely geographically closer to you, resulting in faster downloads for you,
less bandwidth used globally and less load on the main kernel.org servers --
these are good things, so do use mirrors when possible.
-The 2.6.x kernels
----
- These are the base stable releases released by Linus. The highest numbered
+The 4.x kernels
+===============
+
+These are the base stable releases released by Linus. The highest numbered
release is the most recent.
If regressions or other serious flaws are found, then a -stable fix patch
-will be released (see below) on top of this base. Once a new 2.6.x base
+will be released (see below) on top of this base. Once a new 4.x base
kernel is released, a patch is made available that is a delta between the
-previous 2.6.x kernel and the new one.
+previous 4.x kernel and the new one.
-To apply a patch moving from 2.6.11 to 2.6.12, you'd do the following (note
-that such patches do *NOT* apply on top of 2.6.x.y kernels but on top of the
-base 2.6.x kernel -- if you need to move from 2.6.x.y to 2.6.x+1 you need to
-first revert the 2.6.x.y patch).
+To apply a patch moving from 4.6 to 4.7, you'd do the following (note
+that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
+base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
+first revert the 4.x.y patch).
-Here are some examples:
+Here are some examples::
-# moving from 2.6.11 to 2.6.12
-$ cd ~/linux-2.6.11 # change to kernel source dir
-$ patch -p1 < ../patch-2.6.12 # apply the 2.6.12 patch
-$ cd ..
-$ mv linux-2.6.11 linux-2.6.12 # rename source dir
+ # moving from 4.6 to 4.7
-# moving from 2.6.11.1 to 2.6.12
-$ cd ~/linux-2.6.11.1 # change to kernel source dir
-$ patch -p1 -R < ../patch-2.6.11.1 # revert the 2.6.11.1 patch
- # source dir is now 2.6.11
-$ patch -p1 < ../patch-2.6.12 # apply new 2.6.12 patch
-$ cd ..
-$ mv linux-2.6.11.1 linux-2.6.12 # rename source dir
+ $ cd ~/linux-4.6 # change to kernel source dir
+ $ patch -p1 < ../patch-4.7 # apply the 4.7 patch
+ $ cd ..
+ $ mv linux-4.6 linux-4.7 # rename source dir
+
+ # moving from 4.6.1 to 4.7
+
+ $ cd ~/linux-4.6.1 # change to kernel source dir
+ $ patch -p1 -R < ../patch-4.6.1 # revert the 4.6.1 patch
+ # source dir is now 4.6
+ $ patch -p1 < ../patch-4.7 # apply new 4.7 patch
+ $ cd ..
+ $ mv linux-4.6.1 linux-4.7 # rename source dir
-The 2.6.x.y kernels
----
- Kernels with 4-digit versions are -stable kernels. They contain small(ish)
+The 4.x.y kernels
+=================
+
+Kernels with 3-digit versions are -stable kernels. They contain small(ish)
critical fixes for security problems or significant regressions discovered
-in a given 2.6.x kernel.
+in a given 4.x kernel.
This is the recommended branch for users who want the most recent stable
kernel and are not interested in helping test development/experimental
versions.
-If no 2.6.x.y kernel is available, then the highest numbered 2.6.x kernel is
+If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
the current stable kernel.
- note: the -stable team usually do make incremental patches available as well
+.. note::
+
+ The -stable team usually do make incremental patches available as well
as patches against the latest mainline release, but I only cover the
non-incremental ones below. The incremental ones can be found at
- ftp://ftp.kernel.org/pub/linux/kernel/v2.6/incr/
+ ftp://ftp.kernel.org/pub/linux/kernel/v4.x/incr/
-These patches are not incremental, meaning that for example the 2.6.12.3
-patch does not apply on top of the 2.6.12.2 kernel source, but rather on top
-of the base 2.6.12 kernel source .
-So, in order to apply the 2.6.12.3 patch to your existing 2.6.12.2 kernel
-source you have to first back out the 2.6.12.2 patch (so you are left with a
-base 2.6.12 kernel source) and then apply the new 2.6.12.3 patch.
+These patches are not incremental, meaning that for example the 4.7.3
+patch does not apply on top of the 4.7.2 kernel source, but rather on top
+of the base 4.7 kernel source.
-Here's a small example:
+So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
+source you have to first back out the 4.7.2 patch (so you are left with a
+base 4.7 kernel source) and then apply the new 4.7.3 patch.
-$ cd ~/linux-2.6.12.2 # change into the kernel source dir
-$ patch -p1 -R < ../patch-2.6.12.2 # revert the 2.6.12.2 patch
-$ patch -p1 < ../patch-2.6.12.3 # apply the new 2.6.12.3 patch
-$ cd ..
-$ mv linux-2.6.12.2 linux-2.6.12.3 # rename the kernel source dir
+Here's a small example::
+
+ $ cd ~/linux-4.7.2 # change to the kernel source dir
+ $ patch -p1 -R < ../patch-4.7.2 # revert the 4.7.2 patch
+ $ patch -p1 < ../patch-4.7.3 # apply the new 4.7.3 patch
+ $ cd ..
+ $ mv linux-4.7.2 linux-4.7.3 # rename the kernel source dir
The -rc kernels
----
- These are release-candidate kernels. These are development kernels released
+===============
+
+These are release-candidate kernels. These are development kernels released
by Linus whenever he deems the current git (the kernel's source management
tool) tree to be in a reasonably sane state adequate for testing.
@@ -317,39 +348,44 @@
development kernels but do not want to run some of the really experimental
stuff (such people should see the sections about -git and -mm kernels below).
-The -rc patches are not incremental, they apply to a base 2.6.x kernel, just
-like the 2.6.x.y patches described above. The kernel version before the -rcN
+The -rc patches are not incremental, they apply to a base 4.x kernel, just
+like the 4.x.y patches described above. The kernel version before the -rcN
suffix denotes the version of the kernel that this -rc kernel will eventually
turn into.
-So, 2.6.13-rc5 means that this is the fifth release candidate for the 2.6.13
-kernel and the patch should be applied on top of the 2.6.12 kernel source.
-Here are 3 examples of how to apply these patches:
+So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
+kernel and the patch should be applied on top of the 4.7 kernel source.
-# first an example of moving from 2.6.12 to 2.6.13-rc3
-$ cd ~/linux-2.6.12 # change into the 2.6.12 source dir
-$ patch -p1 < ../patch-2.6.13-rc3 # apply the 2.6.13-rc3 patch
-$ cd ..
-$ mv linux-2.6.12 linux-2.6.13-rc3 # rename the source dir
+Here are 3 examples of how to apply these patches::
-# now let's move from 2.6.13-rc3 to 2.6.13-rc5
-$ cd ~/linux-2.6.13-rc3 # change into the 2.6.13-rc3 dir
-$ patch -p1 -R < ../patch-2.6.13-rc3 # revert the 2.6.13-rc3 patch
-$ patch -p1 < ../patch-2.6.13-rc5 # apply the new 2.6.13-rc5 patch
-$ cd ..
-$ mv linux-2.6.13-rc3 linux-2.6.13-rc5 # rename the source dir
+ # first an example of moving from 4.7 to 4.8-rc3
-# finally let's try and move from 2.6.12.3 to 2.6.13-rc5
-$ cd ~/linux-2.6.12.3 # change to the kernel source dir
-$ patch -p1 -R < ../patch-2.6.12.3 # revert the 2.6.12.3 patch
-$ patch -p1 < ../patch-2.6.13-rc5 # apply new 2.6.13-rc5 patch
-$ cd ..
-$ mv linux-2.6.12.3 linux-2.6.13-rc5 # rename the kernel source dir
+ $ cd ~/linux-4.7 # change to the 4.7 source dir
+ $ patch -p1 < ../patch-4.8-rc3 # apply the 4.8-rc3 patch
+ $ cd ..
+ $ mv linux-4.7 linux-4.8-rc3 # rename the source dir
+
+ # now let's move from 4.8-rc3 to 4.8-rc5
+
+ $ cd ~/linux-4.8-rc3 # change to the 4.8-rc3 dir
+ $ patch -p1 -R < ../patch-4.8-rc3 # revert the 4.8-rc3 patch
+ $ patch -p1 < ../patch-4.8-rc5 # apply the new 4.8-rc5 patch
+ $ cd ..
+ $ mv linux-4.8-rc3 linux-4.8-rc5 # rename the source dir
+
+ # finally let's try and move from 4.7.3 to 4.8-rc5
+
+ $ cd ~/linux-4.7.3 # change to the kernel source dir
+ $ patch -p1 -R < ../patch-4.7.3 # revert the 4.7.3 patch
+ $ patch -p1 < ../patch-4.8-rc5 # apply new 4.8-rc5 patch
+ $ cd ..
+ $ mv linux-4.7.3 linux-4.8-rc5 # rename the kernel source dir
The -git kernels
----
- These are daily snapshots of Linus' kernel tree (managed in a git
+================
+
+These are daily snapshots of Linus' kernel tree (managed in a git
repository, hence the name).
These patches are usually released daily and represent the current state of
@@ -357,91 +393,66 @@
generated automatically without even a cursory glance to see if they are
sane.
--git patches are not incremental and apply either to a base 2.6.x kernel or
-a base 2.6.x-rc kernel -- you can see which from their name.
-A patch named 2.6.12-git1 applies to the 2.6.12 kernel source and a patch
-named 2.6.13-rc3-git2 applies to the source of the 2.6.13-rc3 kernel.
+-git patches are not incremental and apply either to a base 4.x kernel or
+a base 4.x-rc kernel -- you can see which from their name.
+A patch named 4.7-git1 applies to the 4.7 kernel source and a patch
+named 4.8-rc3-git2 applies to the source of the 4.8-rc3 kernel.
-Here are some examples of how to apply these patches:
+Here are some examples of how to apply these patches::
-# moving from 2.6.12 to 2.6.12-git1
-$ cd ~/linux-2.6.12 # change to the kernel source dir
-$ patch -p1 < ../patch-2.6.12-git1 # apply the 2.6.12-git1 patch
-$ cd ..
-$ mv linux-2.6.12 linux-2.6.12-git1 # rename the kernel source dir
+ # moving from 4.7 to 4.7-git1
-# moving from 2.6.12-git1 to 2.6.13-rc2-git3
-$ cd ~/linux-2.6.12-git1 # change to the kernel source dir
-$ patch -p1 -R < ../patch-2.6.12-git1 # revert the 2.6.12-git1 patch
- # we now have a 2.6.12 kernel
-$ patch -p1 < ../patch-2.6.13-rc2 # apply the 2.6.13-rc2 patch
- # the kernel is now 2.6.13-rc2
-$ patch -p1 < ../patch-2.6.13-rc2-git3 # apply the 2.6.13-rc2-git3 patch
- # the kernel is now 2.6.13-rc2-git3
-$ cd ..
-$ mv linux-2.6.12-git1 linux-2.6.13-rc2-git3 # rename source dir
+ $ cd ~/linux-4.7 # change to the kernel source dir
+ $ patch -p1 < ../patch-4.7-git1 # apply the 4.7-git1 patch
+ $ cd ..
+ $ mv linux-4.7 linux-4.7-git1 # rename the kernel source dir
+
+ # moving from 4.7-git1 to 4.8-rc2-git3
+
+ $ cd ~/linux-4.7-git1 # change to the kernel source dir
+ $ patch -p1 -R < ../patch-4.7-git1 # revert the 4.7-git1 patch
+ # we now have a 4.7 kernel
+ $ patch -p1 < ../patch-4.8-rc2 # apply the 4.8-rc2 patch
+ # the kernel is now 4.8-rc2
+ $ patch -p1 < ../patch-4.8-rc2-git3 # apply the 4.8-rc2-git3 patch
+ # the kernel is now 4.8-rc2-git3
+ $ cd ..
+ $ mv linux-4.7-git1 linux-4.8-rc2-git3 # rename source dir
-The -mm kernels
----
- These are experimental kernels released by Andrew Morton.
+The -mm patches and the linux-next tree
+=======================================
-The -mm tree serves as a sort of proving ground for new features and other
-experimental patches.
-Once a patch has proved its worth in -mm for a while Andrew pushes it on to
-Linus for inclusion in mainline.
+The -mm patches are experimental patches released by Andrew Morton.
-Although it's encouraged that patches flow to Linus via the -mm tree, this
-is not always enforced.
-Subsystem maintainers (or individuals) sometimes push their patches directly
-to Linus, even though (or after) they have been merged and tested in -mm (or
-sometimes even without prior testing in -mm).
+In the past, the -mm tree was also used to test subsystem patches, but
+this function is now handled by the
+`linux-next <https://www.kernel.org/doc/man-pages/linux-next.html>`_
+tree. Subsystem maintainers push their patches first to linux-next and,
+during the merge window, send them directly to Linus.
-You should generally strive to get your patches into mainline via -mm to
-ensure maximum testing.
+The -mm patches serve as a sort of proving ground for new features and other
+experimental patches that aren't merged via a subsystem tree.
+Once such a patch has proved its worth in -mm for a while, Andrew pushes
+it on to Linus for inclusion in mainline.
-This branch is in constant flux and contains many experimental features, a
+The linux-next tree is updated daily, and includes the -mm patches.
+Both are in constant flux and contain many experimental features, a
lot of debugging patches not appropriate for mainline etc., and are the most
experimental of the branches described in this document.
-These kernels are not appropriate for use on systems that are supposed to be
+These patches are not appropriate for use on systems that are supposed to be
stable and they are more risky to run than any of the other branches (make
sure you have up-to-date backups -- that goes for any experimental kernel but
-even more so for -mm kernels).
+even more so for -mm patches or a kernel from the linux-next tree).
-These kernels in addition to all the other experimental patches they contain
-usually also contain any changes in the mainline -git kernels available at
-the time of release.
+Testing of -mm patches and linux-next is greatly appreciated since the whole
+point of them is to weed out regressions, crashes, data corruption bugs,
+build breakage (and any other bug in general) before changes are merged into
+the more stable mainline Linus tree.
-Testing of -mm kernels is greatly appreciated since the whole point of the
-tree is to weed out regressions, crashes, data corruption bugs, build
-breakage (and any other bug in general) before changes are merged into the
-more stable mainline Linus tree.
-But testers of -mm should be aware that breakage in this tree is more common
-than in any other tree.
-
-The -mm kernels are not released on a fixed schedule, but usually a few -mm
-kernels are released in between each -rc kernel (1 to 3 is common).
-The -mm kernels apply to either a base 2.6.x kernel (when no -rc kernels
-have been released yet) or to a Linus -rc kernel.
-
-Here are some examples of applying the -mm patches:
-
-# moving from 2.6.12 to 2.6.12-mm1
-$ cd ~/linux-2.6.12 # change to the 2.6.12 source dir
-$ patch -p1 < ../2.6.12-mm1 # apply the 2.6.12-mm1 patch
-$ cd ..
-$ mv linux-2.6.12 linux-2.6.12-mm1 # rename the source appropriately
-
-# moving from 2.6.12-mm1 to 2.6.13-rc3-mm3
-$ cd ~/linux-2.6.12-mm1
-$ patch -p1 -R < ../2.6.12-mm1 # revert the 2.6.12-mm1 patch
- # we now have a 2.6.12 source
-$ patch -p1 < ../patch-2.6.13-rc3 # apply the 2.6.13-rc3 patch
- # we now have a 2.6.13-rc3 source
-$ patch -p1 < ../2.6.13-rc3-mm3 # apply the 2.6.13-rc3-mm3 patch
-$ cd ..
-$ mv linux-2.6.12-mm1 linux-2.6.13-rc3-mm3 # rename the source dir
+But testers of -mm and linux-next should be aware that breakages are
+more common than in any other tree.
This concludes this list of explanations of the various kernel trees.
diff --git a/Documentation/arm/00-INDEX b/Documentation/arm/00-INDEX
index dea011c..b6e69fd 100644
--- a/Documentation/arm/00-INDEX
+++ b/Documentation/arm/00-INDEX
@@ -8,8 +8,6 @@
- ARM Interrupt subsystem documentation
IXP4xx
- Intel IXP4xx Network processor.
-Makefile
- - Build sourcefiles as part of the Documentation-build for arm
Netwinder
- Netwinder specific documentation
Porting
diff --git a/Documentation/arm/sunxi/README b/Documentation/arm/sunxi/README
index e5a115f..cd02433 100644
--- a/Documentation/arm/sunxi/README
+++ b/Documentation/arm/sunxi/README
@@ -31,6 +31,8 @@
+ User Manual
http://dl.linux-sunxi.org/A13/A13%20User%20Manual%20-%20v1.2%20%282013-01-08%29.pdf
+ - Next Thing Co GR8 (sun5i)
+
* Dual ARM Cortex-A7 based SoCs
- Allwinner A20 (sun7i)
+ User Manual
@@ -73,4 +75,13 @@
* Octa ARM Cortex-A7 based SoCs
- Allwinner A83T
+ Datasheet
- http://dl.linux-sunxi.org/A83T/A83T_datasheet_Revision_1.1.pdf
+ https://github.com/allwinner-zh/documents/raw/master/A83T/A83T_Datasheet_v1.3_20150510.pdf
+ + User Manual
+ https://github.com/allwinner-zh/documents/raw/master/A83T/A83T_User_Manual_v1.5.1_20150513.pdf
+
+ * Quad ARM Cortex-A53 based SoCs
+ - Allwinner A64
+ + Datasheet
+ http://dl.linux-sunxi.org/A64/A64_Datasheet_V1.1.pdf
+ + User Manual
+ http://dl.linux-sunxi.org/A64/Allwinner%20A64%20User%20Manual%20v1.0.pdf
diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
index ccc6032..405da11 100644
--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -61,3 +61,5 @@
| Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 |
| Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 |
| Cavium | ThunderX SMMUv2 | #27704 | N/A |
+| | | | |
+| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
diff --git a/Documentation/auxdisplay/Makefile b/Documentation/auxdisplay/Makefile
deleted file mode 100644
index ada4dac..0000000
--- a/Documentation/auxdisplay/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# List of programs to build
-hostprogs-y := cfag12864b-example
-
-# Tell kbuild to always build the programs
-always := $(hostprogs-y)
-
-HOSTCFLAGS_cfag12864b-example.o += -I$(objtree)/usr/include
diff --git a/Documentation/auxdisplay/cfag12864b b/Documentation/auxdisplay/cfag12864b
index eb7be39..12fd51b 100644
--- a/Documentation/auxdisplay/cfag12864b
+++ b/Documentation/auxdisplay/cfag12864b
@@ -101,5 +101,5 @@
Also, you can mmap the framebuffer: open & mmap, munmap & close...
which is the best option for most uses.
-Check Documentation/auxdisplay/cfag12864b-example.c
+Check samples/auxdisplay/cfag12864b-example.c
for a real working userspace complete program with usage examples.
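+
+As a rough sketch (the framebuffer node and the 128x64 geometry are
+assumptions; the sample program above is the authoritative reference), a
+minimal open/mmap user could look like this:
+
+  #include <fcntl.h>
+  #include <string.h>
+  #include <sys/mman.h>
+  #include <unistd.h>
+
+  #define CFAG12864B_BYTES (128 * 64 / 8)
+
+  int main(int argc, char **argv)
+  {
+      unsigned char *fb;
+      int fd = open(argc > 1 ? argv[1] : "/dev/fb0", O_RDWR);
+
+      if (fd < 0)
+          return 1;
+      fb = mmap(NULL, CFAG12864B_BYTES, PROT_READ | PROT_WRITE,
+                MAP_SHARED, fd, 0);
+      if (fb == MAP_FAILED)
+          return 1;
+      memset(fb, 0xFF, CFAG12864B_BYTES);    /* every pixel on */
+      munmap(fb, CFAG12864B_BYTES);
+      close(fd);
+      return 0;
+  }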
diff --git a/Documentation/blackfin/00-INDEX b/Documentation/blackfin/00-INDEX
index c54fcdd..265a1ef 100644
--- a/Documentation/blackfin/00-INDEX
+++ b/Documentation/blackfin/00-INDEX
@@ -1,10 +1,6 @@
00-INDEX
- This file
-Makefile
- - Makefile for gptimers example file.
bfin-gpio-notes.txt
- Notes in developing/using bfin-gpio driver.
bfin-spi-notes.txt
- Notes for using bfin spi bus driver.
-gptimers-example.c
- - gptimers example
diff --git a/Documentation/blackfin/Makefile b/Documentation/blackfin/Makefile
deleted file mode 100644
index 6782c58..0000000
--- a/Documentation/blackfin/Makefile
+++ /dev/null
@@ -1,5 +0,0 @@
-ifneq ($(CONFIG_BLACKFIN),)
-ifneq ($(CONFIG_BFIN_GPTIMERS),)
-obj-m := gptimers-example.o
-endif
-endif
diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt
index bcdb2b4..918e1e0 100644
--- a/Documentation/block/biodoc.txt
+++ b/Documentation/block/biodoc.txt
@@ -115,7 +115,7 @@
Various parameters that the generic i/o scheduler logic uses are set at
a per-queue level (e.g maximum request size, maximum number of segments in
-a scatter-gather list, hardsect size)
+a scatter-gather list, logical block size)
Some parameters that were earlier available as global arrays indexed by
major/minor are now directly associated with the queue. Some of these may
@@ -156,7 +156,7 @@
blk_queue_max_segment_size(q, max_seg_size)
Maximum size of a clustered segment, 64kB default.
- blk_queue_hardsect_size(q, hardsect_size)
+ blk_queue_logical_block_size(q, logical_block_size)
Lowest possible sector size that the hardware can operate
on, 512 bytes default.
diff --git a/Documentation/cec.txt b/Documentation/cec.txt
deleted file mode 100644
index 75155fe..0000000
--- a/Documentation/cec.txt
+++ /dev/null
@@ -1,267 +0,0 @@
-CEC Kernel Support
-==================
-
-The CEC framework provides a unified kernel interface for use with HDMI CEC
-hardware. It is designed to handle a multiple types of hardware (receivers,
-transmitters, USB dongles). The framework also gives the option to decide
-what to do in the kernel driver and what should be handled by userspace
-applications. In addition it integrates the remote control passthrough
-feature into the kernel's remote control framework.
-
-
-The CEC Protocol
-----------------
-
-The CEC protocol enables consumer electronic devices to communicate with each
-other through the HDMI connection. The protocol uses logical addresses in the
-communication. The logical address is strictly connected with the functionality
-provided by the device. The TV acting as the communication hub is always
-assigned address 0. The physical address is determined by the physical
-connection between devices.
-
-The CEC framework described here is up to date with the CEC 2.0 specification.
-It is documented in the HDMI 1.4 specification with the new 2.0 bits documented
-in the HDMI 2.0 specification. But for most of the features the freely available
-HDMI 1.3a specification is sufficient:
-
-http://www.microprocessor.org/HDMISpecification13a.pdf
-
-
-The Kernel Interface
-====================
-
-CEC Adapter
------------
-
-The struct cec_adapter represents the CEC adapter hardware. It is created by
-calling cec_allocate_adapter() and deleted by calling cec_delete_adapter():
-
-struct cec_adapter *cec_allocate_adapter(const struct cec_adap_ops *ops,
- void *priv, const char *name, u32 caps, u8 available_las,
- struct device *parent);
-void cec_delete_adapter(struct cec_adapter *adap);
-
-To create an adapter you need to pass the following information:
-
-ops: adapter operations which are called by the CEC framework and that you
-have to implement.
-
-priv: will be stored in adap->priv and can be used by the adapter ops.
-
-name: the name of the CEC adapter. Note: this name will be copied.
-
-caps: capabilities of the CEC adapter. These capabilities determine the
- capabilities of the hardware and which parts are to be handled
- by userspace and which parts are handled by kernelspace. The
- capabilities are returned by CEC_ADAP_G_CAPS.
-
-available_las: the number of simultaneous logical addresses that this
- adapter can handle. Must be 1 <= available_las <= CEC_MAX_LOG_ADDRS.
-
-parent: the parent device.
-
-
-To register the /dev/cecX device node and the remote control device (if
-CEC_CAP_RC is set) you call:
-
-int cec_register_adapter(struct cec_adapter *adap);
-
-To unregister the devices call:
-
-void cec_unregister_adapter(struct cec_adapter *adap);
-
-Note: if cec_register_adapter() fails, then call cec_delete_adapter() to
-clean up. But if cec_register_adapter() succeeded, then only call
-cec_unregister_adapter() to clean up, never cec_delete_adapter(). The
-unregister function will delete the adapter automatically once the last user
-of that /dev/cecX device has closed its file handle.
-
-
-Implementing the Low-Level CEC Adapter
---------------------------------------
-
-The following low-level adapter operations have to be implemented in
-your driver:
-
-struct cec_adap_ops {
- /* Low-level callbacks */
- int (*adap_enable)(struct cec_adapter *adap, bool enable);
- int (*adap_monitor_all_enable)(struct cec_adapter *adap, bool enable);
- int (*adap_log_addr)(struct cec_adapter *adap, u8 logical_addr);
- int (*adap_transmit)(struct cec_adapter *adap, u8 attempts,
- u32 signal_free_time, struct cec_msg *msg);
- void (*adap_log_status)(struct cec_adapter *adap);
-
- /* High-level callbacks */
- ...
-};
-
-The three low-level ops deal with various aspects of controlling the CEC adapter
-hardware:
-
-
-To enable/disable the hardware:
-
- int (*adap_enable)(struct cec_adapter *adap, bool enable);
-
-This callback enables or disables the CEC hardware. Enabling the CEC hardware
-means powering it up in a state where no logical addresses are claimed. This
-op assumes that the physical address (adap->phys_addr) is valid when enable is
-true and will not change while the CEC adapter remains enabled. The initial
-state of the CEC adapter after calling cec_allocate_adapter() is disabled.
-
-Note that adap_enable must return 0 if enable is false.
-
-
-To enable/disable the 'monitor all' mode:
-
- int (*adap_monitor_all_enable)(struct cec_adapter *adap, bool enable);
-
-If enabled, then the adapter should be put in a mode to also monitor messages
-that not for us. Not all hardware supports this and this function is only
-called if the CEC_CAP_MONITOR_ALL capability is set. This callback is optional
-(some hardware may always be in 'monitor all' mode).
-
-Note that adap_monitor_all_enable must return 0 if enable is false.
-
-
-To program a new logical address:
-
- int (*adap_log_addr)(struct cec_adapter *adap, u8 logical_addr);
-
-If logical_addr == CEC_LOG_ADDR_INVALID then all programmed logical addresses
-are to be erased. Otherwise the given logical address should be programmed.
-If the maximum number of available logical addresses is exceeded, then it
-should return -ENXIO. Once a logical address is programmed the CEC hardware
-can receive directed messages to that address.
-
-Note that adap_log_addr must return 0 if logical_addr is CEC_LOG_ADDR_INVALID.
-
-
-To transmit a new message:
-
- int (*adap_transmit)(struct cec_adapter *adap, u8 attempts,
- u32 signal_free_time, struct cec_msg *msg);
-
-This transmits a new message. The attempts argument is the suggested number of
-attempts for the transmit.
-
-The signal_free_time is the number of data bit periods that the adapter should
-wait when the line is free before attempting to send a message. This value
-depends on whether this transmit is a retry, a message from a new initiator or
-a new message for the same initiator. Most hardware will handle this
-automatically, but in some cases this information is needed.
-
-The CEC_FREE_TIME_TO_USEC macro can be used to convert signal_free_time to
-microseconds (one data bit period is 2.4 ms).
-
-
-To log the current CEC hardware status:
-
- void (*adap_status)(struct cec_adapter *adap, struct seq_file *file);
-
-This optional callback can be used to show the status of the CEC hardware.
-The status is available through debugfs: cat /sys/kernel/debug/cec/cecX/status
-
-
-Your adapter driver will also have to react to events (typically interrupt
-driven) by calling into the framework in the following situations:
-
-When a transmit finished (successfully or otherwise):
-
-void cec_transmit_done(struct cec_adapter *adap, u8 status, u8 arb_lost_cnt,
- u8 nack_cnt, u8 low_drive_cnt, u8 error_cnt);
-
-The status can be one of:
-
-CEC_TX_STATUS_OK: the transmit was successful.
-CEC_TX_STATUS_ARB_LOST: arbitration was lost: another CEC initiator
-took control of the CEC line and you lost the arbitration.
-CEC_TX_STATUS_NACK: the message was nacked (for a directed message) or
-acked (for a broadcast message). A retransmission is needed.
-CEC_TX_STATUS_LOW_DRIVE: low drive was detected on the CEC bus. This
-indicates that a follower detected an error on the bus and requested a
-retransmission.
-CEC_TX_STATUS_ERROR: some unspecified error occurred: this can be one of
-the previous two if the hardware cannot differentiate or something else
-entirely.
-CEC_TX_STATUS_MAX_RETRIES: could not transmit the message after
-trying multiple times. Should only be set by the driver if it has hardware
-support for retrying messages. If set, then the framework assumes that it
-doesn't have to make another attempt to transmit the message since the
-hardware did that already.
-
-The *_cnt arguments are the number of error conditions that were seen.
-This may be 0 if no information is available. Drivers that do not support
-hardware retry can just set the counter corresponding to the transmit error
-to 1, if the hardware does support retry then either set these counters to
-0 if the hardware provides no feedback of which errors occurred and how many
-times, or fill in the correct values as reported by the hardware.
-
-When a CEC message was received:
-
-void cec_received_msg(struct cec_adapter *adap, struct cec_msg *msg);
-
-Speaks for itself.
-
-Implementing the High-Level CEC Adapter
----------------------------------------
-
-The low-level operations drive the hardware, the high-level operations are
-CEC protocol driven. The following high-level callbacks are available:
-
-struct cec_adap_ops {
- /* Low-level callbacks */
- ...
-
- /* High-level CEC message callback */
- int (*received)(struct cec_adapter *adap, struct cec_msg *msg);
-};
-
-The received() callback allows the driver to optionally handle a newly
-received CEC message
-
- int (*received)(struct cec_adapter *adap, struct cec_msg *msg);
-
-If the driver wants to process a CEC message, then it can implement this
-callback. If it doesn't want to handle this message, then it should return
--ENOMSG, otherwise the CEC framework assumes it processed this message and
-it will not no anything with it.
-
-
-CEC framework functions
------------------------
-
-CEC Adapter drivers can call the following CEC framework functions:
-
-int cec_transmit_msg(struct cec_adapter *adap, struct cec_msg *msg,
- bool block);
-
-Transmit a CEC message. If block is true, then wait until the message has been
-transmitted, otherwise just queue it and return.
-
-void cec_s_phys_addr(struct cec_adapter *adap, u16 phys_addr, bool block);
-
-Change the physical address. This function will set adap->phys_addr and
-send an event if it has changed. If cec_s_log_addrs() has been called and
-the physical address has become valid, then the CEC framework will start
-claiming the logical addresses. If block is true, then this function won't
-return until this process has finished.
-
-When the physical address is set to a valid value the CEC adapter will
-be enabled (see the adap_enable op). When it is set to CEC_PHYS_ADDR_INVALID,
-then the CEC adapter will be disabled. If you change a valid physical address
-to another valid physical address, then this function will first set the
-address to CEC_PHYS_ADDR_INVALID before enabling the new physical address.
-
-int cec_s_log_addrs(struct cec_adapter *adap,
- struct cec_log_addrs *log_addrs, bool block);
-
-Claim the CEC logical addresses. Should never be called if CEC_CAP_LOG_ADDRS
-is set. If block is true, then wait until the logical addresses have been
-claimed, otherwise just queue it and return. To unconfigure all logical
-addresses call this function with log_addrs set to NULL or with
-log_addrs->num_log_addrs set to 0. The block argument is ignored when
-unconfiguring. This function will just return if the physical address is
-invalid. Once the physical address becomes valid, then the framework will
-attempt to claim these logical addresses.
diff --git a/Documentation/clk.txt b/Documentation/clk.txt
index 5c4bc4d..22f026a 100644
--- a/Documentation/clk.txt
+++ b/Documentation/clk.txt
@@ -31,24 +31,25 @@
hardware-specific bits for the hypothetical "foo" hardware.
Tying the two halves of this interface together is struct clk_hw, which
-is defined in struct clk_foo and pointed to within struct clk. This
+is defined in struct clk_foo and pointed to within struct clk_core. This
allows for easy navigation between the two discrete halves of the common
clock interface.
Part 2 - common data structures and api
-Below is the common struct clk definition from
-include/linux/clk-private.h, modified for brevity:
+Below is the common struct clk_core definition from
+drivers/clk/clk.c, modified for brevity:
- struct clk {
+ struct clk_core {
const char *name;
const struct clk_ops *ops;
struct clk_hw *hw;
- char **parent_names;
- struct clk **parents;
- struct clk *parent;
- struct hlist_head children;
- struct hlist_node child_node;
+ struct module *owner;
+ struct clk_core *parent;
+ const char **parent_names;
+ struct clk_core **parents;
+ u8 num_parents;
+ u8 new_parent_index;
...
};
@@ -56,16 +57,19 @@
api itself defines several driver-facing functions which operate on
struct clk. That api is documented in include/linux/clk.h.
-Platforms and devices utilizing the common struct clk use the struct
-clk_ops pointer in struct clk to perform the hardware-specific parts of
-the operations defined in clk.h:
+Platforms and devices utilizing the common struct clk_core use the struct
+clk_ops pointer in struct clk_core to perform the hardware-specific parts of
+the operations defined in clk-provider.h:
struct clk_ops {
int (*prepare)(struct clk_hw *hw);
void (*unprepare)(struct clk_hw *hw);
+ int (*is_prepared)(struct clk_hw *hw);
+ void (*unprepare_unused)(struct clk_hw *hw);
int (*enable)(struct clk_hw *hw);
void (*disable)(struct clk_hw *hw);
int (*is_enabled)(struct clk_hw *hw);
+ void (*disable_unused)(struct clk_hw *hw);
unsigned long (*recalc_rate)(struct clk_hw *hw,
unsigned long parent_rate);
long (*round_rate)(struct clk_hw *hw,
@@ -84,6 +88,8 @@
u8 index);
unsigned long (*recalc_accuracy)(struct clk_hw *hw,
unsigned long parent_accuracy);
+ int (*get_phase)(struct clk_hw *hw);
+ int (*set_phase)(struct clk_hw *hw, int degrees);
void (*init)(struct clk_hw *hw);
int (*debug_init)(struct clk_hw *hw,
struct dentry *dentry);
@@ -91,7 +97,7 @@
Part 3 - hardware clk implementations
-The strength of the common struct clk comes from its .ops and .hw pointers
+The strength of the common struct clk_core comes from its .ops and .hw pointers
which abstract the details of struct clk from the hardware-specific bits, and
vice versa. To illustrate consider the simple gateable clk implementation in
drivers/clk/clk-gate.c:
@@ -107,7 +113,7 @@
knowledge about which register and bit controls this clk's gating.
Nothing about clock topology or accounting, such as enable_count or
notifier_count, is needed here. That is all handled by the common
-framework code and struct clk.
+framework code and struct clk_core.
Let's walk through enabling this clk from driver code:
@@ -139,22 +145,18 @@
Note that to_clk_gate is defined as:
-#define to_clk_gate(_hw) container_of(_hw, struct clk_gate, clk)
+#define to_clk_gate(_hw) container_of(_hw, struct clk_gate, hw)
This pattern of abstraction is used for every clock hardware
representation.
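+
+As a rough sketch of that pattern (the "foo" names and single enable
+register are hypothetical), a driver-private clock embeds struct clk_hw
+and recovers its own structure with container_of() from its clk_ops
+callbacks:
+
+  #include <linux/clk-provider.h>
+  #include <linux/io.h>
+  #include <linux/kernel.h>
+
+  struct clk_foo {
+      struct clk_hw hw;
+      void __iomem *enable_reg;
+  };
+
+  #define to_clk_foo(_hw) container_of(_hw, struct clk_foo, hw)
+
+  static int clk_foo_enable(struct clk_hw *hw)
+  {
+      struct clk_foo *foo = to_clk_foo(hw);
+
+      writel(1, foo->enable_reg);
+      return 0;
+  }
+
+  static const struct clk_ops clk_foo_ops = {
+      .enable = clk_foo_enable,
+  };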
Part 4 - supporting your own clk hardware
-When implementing support for a new type of clock it only necessary to
+When implementing support for a new type of clock it is only necessary to
include the following header:
#include <linux/clk-provider.h>
-include/linux/clk.h is included within that header and clk-private.h
-must never be included from the code which implements the operations for
-a clock. More on that below in Part 5.
-
To construct a clk hardware structure for your platform you must define
the following:
diff --git a/Documentation/coccinelle.txt b/Documentation/coccinelle.txt
deleted file mode 100644
index 01fb1da..0000000
--- a/Documentation/coccinelle.txt
+++ /dev/null
@@ -1,470 +0,0 @@
-Copyright 2010 Nicolas Palix <npalix@diku.dk>
-Copyright 2010 Julia Lawall <julia@diku.dk>
-Copyright 2010 Gilles Muller <Gilles.Muller@lip6.fr>
-
-
- Getting Coccinelle
-~~~~~~~~~~~~~~~~~~~~
-
-The semantic patches included in the kernel use features and options
-which are provided by Coccinelle version 1.0.0-rc11 and above.
-Using earlier versions will fail as the option names used by
-the Coccinelle files and coccicheck have been updated.
-
-Coccinelle is available through the package manager
-of many distributions, e.g. :
-
- - Debian
- - Fedora
- - Ubuntu
- - OpenSUSE
- - Arch Linux
- - NetBSD
- - FreeBSD
-
-
-You can get the latest version released from the Coccinelle homepage at
-http://coccinelle.lip6.fr/
-
-Information and tips about Coccinelle are also provided on the wiki
-pages at http://cocci.ekstranet.diku.dk/wiki/doku.php
-
-Once you have it, run the following command:
-
- ./configure
- make
-
-as a regular user, and install it with
-
- sudo make install
-
- Supplemental documentation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-For supplemental documentation refer to the wiki:
-
-https://bottest.wiki.kernel.org/coccicheck
-
-The wiki documentation always refers to the linux-next version of the script.
-
- Using Coccinelle on the Linux kernel
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A Coccinelle-specific target is defined in the top level
-Makefile. This target is named 'coccicheck' and calls the 'coccicheck'
-front-end in the 'scripts' directory.
-
-Four basic modes are defined: patch, report, context, and org. The mode to
-use is specified by setting the MODE variable with 'MODE=<mode>'.
-
-'patch' proposes a fix, when possible.
-
-'report' generates a list in the following format:
- file:line:column-column: message
-
-'context' highlights lines of interest and their context in a
-diff-like style.Lines of interest are indicated with '-'.
-
-'org' generates a report in the Org mode format of Emacs.
-
-Note that not all semantic patches implement all modes. For easy use
-of Coccinelle, the default mode is "report".
-
-Two other modes provide some common combinations of these modes.
-
-'chain' tries the previous modes in the order above until one succeeds.
-
-'rep+ctxt' runs successively the report mode and the context mode.
- It should be used with the C option (described later)
- which checks the code on a file basis.
-
-Examples:
- To make a report for every semantic patch, run the following command:
-
- make coccicheck MODE=report
-
- To produce patches, run:
-
- make coccicheck MODE=patch
-
-
-The coccicheck target applies every semantic patch available in the
-sub-directories of 'scripts/coccinelle' to the entire Linux kernel.
-
-For each semantic patch, a commit message is proposed. It gives a
-description of the problem being checked by the semantic patch, and
-includes a reference to Coccinelle.
-
-As any static code analyzer, Coccinelle produces false
-positives. Thus, reports must be carefully checked, and patches
-reviewed.
-
-To enable verbose messages set the V= variable, for example:
-
- make coccicheck MODE=report V=1
-
- Coccinelle parallelization
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-By default, coccicheck tries to run as parallel as possible. To change
-the parallelism, set the J= variable. For example, to run across 4 CPUs:
-
- make coccicheck MODE=report J=4
-
-As of Coccinelle 1.0.2 Coccinelle uses Ocaml parmap for parallelization,
-if support for this is detected you will benefit from parmap parallelization.
-
-When parmap is enabled coccicheck will enable dynamic load balancing by using
-'--chunksize 1' argument, this ensures we keep feeding threads with work
-one by one, so that we avoid the situation where most work gets done by only
-a few threads. With dynamic load balancing, if a thread finishes early we keep
-feeding it more work.
-
-When parmap is enabled, if an error occurs in Coccinelle, this error
-value is propagated back, the return value of the 'make coccicheck'
-captures this return value.
-
- Using Coccinelle with a single semantic patch
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The optional make variable COCCI can be used to check a single
-semantic patch. In that case, the variable must be initialized with
-the name of the semantic patch to apply.
-
-For instance:
-
- make coccicheck COCCI=<my_SP.cocci> MODE=patch
-or
- make coccicheck COCCI=<my_SP.cocci> MODE=report
-
-
- Controlling Which Files are Processed by Coccinelle
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-By default the entire kernel source tree is checked.
-
-To apply Coccinelle to a specific directory, M= can be used.
-For example, to check drivers/net/wireless/ one may write:
-
- make coccicheck M=drivers/net/wireless/
-
-To apply Coccinelle on a file basis, instead of a directory basis, the
-following command may be used:
-
- make C=1 CHECK="scripts/coccicheck"
-
-To check only newly edited code, use the value 2 for the C flag, i.e.
-
- make C=2 CHECK="scripts/coccicheck"
-
-In these modes, which works on a file basis, there is no information
-about semantic patches displayed, and no commit message proposed.
-
-This runs every semantic patch in scripts/coccinelle by default. The
-COCCI variable may additionally be used to only apply a single
-semantic patch as shown in the previous section.
-
-The "report" mode is the default. You can select another one with the
-MODE variable explained above.
-
- Debugging Coccinelle SmPL patches
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Using coccicheck is best as it provides in the spatch command line
-include options matching the options used when we compile the kernel.
-You can learn what these options are by using V=1, you could then
-manually run Coccinelle with debug options added.
-
-Alternatively you can debug running Coccinelle against SmPL patches
-by asking for stderr to be redirected to stderr, by default stderr
-is redirected to /dev/null, if you'd like to capture stderr you
-can specify the DEBUG_FILE="file.txt" option to coccicheck. For
-instance:
-
- rm -f cocci.err
- make coccicheck COCCI=scripts/coccinelle/free/kfree.cocci MODE=report DEBUG_FILE=cocci.err
- cat cocci.err
-
-You can use SPFLAGS to add debugging flags, for instance you may want to
-add both --profile --show-trying to SPFLAGS when debugging. For instance
-you may want to use:
-
- rm -f err.log
- export COCCI=scripts/coccinelle/misc/irqf_oneshot.cocci
- make coccicheck DEBUG_FILE="err.log" MODE=report SPFLAGS="--profile --show-trying" M=./drivers/mfd/arizona-irq.c
-
-err.log will now have the profiling information, while stdout will
-provide some progress information as Coccinelle moves forward with
-work.
-
-DEBUG_FILE support is only supported when using coccinelle >= 1.2.
-
- .cocciconfig support
-~~~~~~~~~~~~~~~~~~~~~~
-
-Coccinelle supports reading .cocciconfig for default Coccinelle options that
-should be used every time spatch is spawned, the order of precedence for
-variables for .cocciconfig is as follows:
-
- o Your current user's home directory is processed first
- o Your directory from which spatch is called is processed next
- o The directory provided with the --dir option is processed last, if used
-
-Since coccicheck runs through make, it naturally runs from the kernel
-proper dir, as such the second rule above would be implied for picking up a
-.cocciconfig when using 'make coccicheck'.
-
-'make coccicheck' also supports using M= targets.If you do not supply
-any M= target, it is assumed you want to target the entire kernel.
-The kernel coccicheck script has:
-
- if [ "$KBUILD_EXTMOD" = "" ] ; then
- OPTIONS="--dir $srctree $COCCIINCLUDE"
- else
- OPTIONS="--dir $KBUILD_EXTMOD $COCCIINCLUDE"
- fi
-
-KBUILD_EXTMOD is set when an explicit target with M= is used. For both cases
-the spatch --dir argument is used, as such third rule applies when whether M=
-is used or not, and when M= is used the target directory can have its own
-.cocciconfig file. When M= is not passed as an argument to coccicheck the
-target directory is the same as the directory from where spatch was called.
-
-If not using the kernel's coccicheck target, keep the above precedence
-order logic of .cocciconfig reading. If using the kernel's coccicheck target,
-override any of the kernel's .coccicheck's settings using SPFLAGS.
-
-We help Coccinelle when used against Linux with a set of sensible defaults
-options for Linux with our own Linux .cocciconfig. This hints to coccinelle
-git can be used for 'git grep' queries over coccigrep. A timeout of 200
-seconds should suffice for now.
-
-The options picked up by coccinelle when reading a .cocciconfig do not appear
-as arguments to spatch processes running on your system, to confirm what
-options will be used by Coccinelle run:
-
- spatch --print-options-only
-
-You can override with your own preferred index option by using SPFLAGS. Take
-note that when there are conflicting options Coccinelle takes precedence for
-the last options passed. Using .cocciconfig is possible to use idutils, however
-given the order of precedence followed by Coccinelle, since the kernel now
-carries its own .cocciconfig, you will need to use SPFLAGS to use idutils if
-desired. See below section "Additional flags" for more details on how to use
-idutils.
-
- Additional flags
-~~~~~~~~~~~~~~~~~~
-
-Additional flags can be passed to spatch through the SPFLAGS
-variable. This works as Coccinelle respects the last flags
-given to it when options are in conflict.
-
- make SPFLAGS=--use-glimpse coccicheck
-
-Coccinelle supports idutils as well but requires coccinelle >= 1.0.6.
-When no ID file is specified coccinelle assumes your ID database file
-is in the file .id-utils.index on the top level of the kernel, coccinelle
-carries a script scripts/idutils_index.sh which creates the database with
-
- mkid -i C --output .id-utils.index
-
-If you have another database filename you can also just symlink with this
-name.
-
- make SPFLAGS=--use-idutils coccicheck
-
-Alternatively you can specify the database filename explicitly, for
-instance:
-
- make SPFLAGS="--use-idutils /full-path/to/ID" coccicheck
-
-See spatch --help to learn more about spatch options.
-
-Note that the '--use-glimpse' and '--use-idutils' options
-require external tools for indexing the code. None of them is
-thus active by default. However, by indexing the code with
-one of these tools, and according to the cocci file used,
-spatch could proceed the entire code base more quickly.
-
- SmPL patch specific options
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-SmPL patches can have their own requirements for options passed
-to Coccinelle. SmPL patch specific options can be provided by
-providing them at the top of the SmPL patch, for instance:
-
-// Options: --no-includes --include-headers
-
- SmPL patch Coccinelle requirements
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As Coccinelle features get added some more advanced SmPL patches
-may require newer versions of Coccinelle. If an SmPL patch requires
-at least a version of Coccinelle, this can be specified as follows,
-as an example if requiring at least Coccinelle >= 1.0.5:
-
-// Requires: 1.0.5
-
- Proposing new semantic patches
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-New semantic patches can be proposed and submitted by kernel
-developers. For sake of clarity, they should be organized in the
-sub-directories of 'scripts/coccinelle/'.
-
-
- Detailed description of the 'report' mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-'report' generates a list in the following format:
- file:line:column-column: message
-
-Example:
-
-Running
-
- make coccicheck MODE=report COCCI=scripts/coccinelle/api/err_cast.cocci
-
-will execute the following part of the SmPL script.
-
-<smpl>
-@r depends on !context && !patch && (org || report)@
-expression x;
-position p;
-@@
-
- ERR_PTR@p(PTR_ERR(x))
-
-@script:python depends on report@
-p << r.p;
-x << r.x;
-@@
-
-msg="ERR_CAST can be used with %s" % (x)
-coccilib.report.print_report(p[0], msg)
-</smpl>
-
-This SmPL excerpt generates entries on the standard output, as
-illustrated below:
-
-/home/user/linux/crypto/ctr.c:188:9-16: ERR_CAST can be used with alg
-/home/user/linux/crypto/authenc.c:619:9-16: ERR_CAST can be used with auth
-/home/user/linux/crypto/xts.c:227:9-16: ERR_CAST can be used with alg
-
-
- Detailed description of the 'patch' mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When the 'patch' mode is available, it proposes a fix for each problem
-identified.
-
-Example:
-
-Running
- make coccicheck MODE=patch COCCI=scripts/coccinelle/api/err_cast.cocci
-
-will execute the following part of the SmPL script.
-
-<smpl>
-@ depends on !context && patch && !org && !report @
-expression x;
-@@
-
-- ERR_PTR(PTR_ERR(x))
-+ ERR_CAST(x)
-</smpl>
-
-This SmPL excerpt generates patch hunks on the standard output, as
-illustrated below:
-
-diff -u -p a/crypto/ctr.c b/crypto/ctr.c
---- a/crypto/ctr.c 2010-05-26 10:49:38.000000000 +0200
-+++ b/crypto/ctr.c 2010-06-03 23:44:49.000000000 +0200
-@@ -185,7 +185,7 @@ static struct crypto_instance *crypto_ct
- alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
- CRYPTO_ALG_TYPE_MASK);
- if (IS_ERR(alg))
-- return ERR_PTR(PTR_ERR(alg));
-+ return ERR_CAST(alg);
-
- /* Block size must be >= 4 bytes. */
- err = -EINVAL;
-
- Detailed description of the 'context' mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-'context' highlights lines of interest and their context
-in a diff-like style.
-
-NOTE: The diff-like output generated is NOT an applicable patch. The
- intent of the 'context' mode is to highlight the important lines
- (annotated with minus, '-') and gives some surrounding context
- lines around. This output can be used with the diff mode of
- Emacs to review the code.
-
-Example:
-
-Running
- make coccicheck MODE=context COCCI=scripts/coccinelle/api/err_cast.cocci
-
-will execute the following part of the SmPL script.
-
-<smpl>
-@ depends on context && !patch && !org && !report@
-expression x;
-@@
-
-* ERR_PTR(PTR_ERR(x))
-</smpl>
-
-This SmPL excerpt generates diff hunks on the standard output, as
-illustrated below:
-
-diff -u -p /home/user/linux/crypto/ctr.c /tmp/nothing
---- /home/user/linux/crypto/ctr.c 2010-05-26 10:49:38.000000000 +0200
-+++ /tmp/nothing
-@@ -185,7 +185,6 @@ static struct crypto_instance *crypto_ct
- alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
- CRYPTO_ALG_TYPE_MASK);
- if (IS_ERR(alg))
-- return ERR_PTR(PTR_ERR(alg));
-
- /* Block size must be >= 4 bytes. */
- err = -EINVAL;
-
- Detailed description of the 'org' mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-'org' generates a report in the Org mode format of Emacs.
-
-Example:
-
-Running
- make coccicheck MODE=org COCCI=scripts/coccinelle/api/err_cast.cocci
-
-will execute the following part of the SmPL script.
-
-<smpl>
-@r depends on !context && !patch && (org || report)@
-expression x;
-position p;
-@@
-
- ERR_PTR@p(PTR_ERR(x))
-
-@script:python depends on org@
-p << r.p;
-x << r.x;
-@@
-
-msg="ERR_CAST can be used with %s" % (x)
-msg_safe=msg.replace("[","@(").replace("]",")")
-coccilib.org.print_todo(p[0], msg_safe)
-</smpl>
-
-This SmPL excerpt generates Org entries on the standard output, as
-illustrated below:
-
-* TODO [[view:/home/user/linux/crypto/ctr.c::face=ovl-face1::linb=188::colb=9::cole=16][ERR_CAST can be used with alg]]
-* TODO [[view:/home/user/linux/crypto/authenc.c::face=ovl-face1::linb=619::colb=9::cole=16][ERR_CAST can be used with auth]]
-* TODO [[view:/home/user/linux/crypto/xts.c::face=ovl-face1::linb=227::colb=9::cole=16][ERR_CAST can be used with alg]]
diff --git a/Documentation/conf.py b/Documentation/conf.py
index 106ae9c..bf6f310 100644
--- a/Documentation/conf.py
+++ b/Documentation/conf.py
@@ -14,11 +14,17 @@
import sys
import os
+import sphinx
+
+# Get Sphinx version
+major, minor, patch = map(int, sphinx.__version__.split("."))
+
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('sphinx'))
+from load_config import loadConfig
# -- General configuration ------------------------------------------------
@@ -28,14 +34,13 @@
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
-extensions = ['kernel-doc', 'rstFlatTable', 'kernel_include']
+extensions = ['kernel-doc', 'rstFlatTable', 'kernel_include', 'cdomain']
-# Gracefully handle missing rst2pdf.
-try:
- import rst2pdf
- extensions += ['rst2pdf.pdfbuilder']
-except ImportError:
- pass
+# The name of the math extension changed on Sphinx 1.4
+if minor > 3:
+ extensions.append("sphinx.ext.imgmath")
+else:
+ extensions.append("sphinx.ext.pngmath")
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -252,23 +257,92 @@
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
-#'papersize': 'letterpaper',
+'papersize': 'a4paper',
# The font size ('10pt', '11pt' or '12pt').
-#'pointsize': '10pt',
-
-# Additional stuff for the LaTeX preamble.
-#'preamble': '',
+'pointsize': '8pt',
# Latex figure (float) alignment
#'figure_align': 'htbp',
+
+# Don't mangle with UTF-8 chars
+'inputenc': '',
+'utf8extra': '',
+
+# Additional stuff for the LaTeX preamble.
+ 'preamble': '''
+ % Adjust margins
+ \\usepackage[margin=0.5in, top=1in, bottom=1in]{geometry}
+
+ % Allow generate some pages in landscape
+ \\usepackage{lscape}
+
+ % Put notes in color and let them be inside a table
+ \\definecolor{NoteColor}{RGB}{204,255,255}
+ \\definecolor{WarningColor}{RGB}{255,204,204}
+ \\definecolor{AttentionColor}{RGB}{255,255,204}
+ \\definecolor{OtherColor}{RGB}{204,204,204}
+ \\newlength{\\mynoticelength}
+ \\makeatletter\\newenvironment{coloredbox}[1]{%
+ \\setlength{\\fboxrule}{1pt}
+ \\setlength{\\fboxsep}{7pt}
+ \\setlength{\\mynoticelength}{\\linewidth}
+ \\addtolength{\\mynoticelength}{-2\\fboxsep}
+ \\addtolength{\\mynoticelength}{-2\\fboxrule}
+ \\begin{lrbox}{\\@tempboxa}\\begin{minipage}{\\mynoticelength}}{\\end{minipage}\\end{lrbox}%
+ \\ifthenelse%
+ {\\equal{\\py@noticetype}{note}}%
+ {\\colorbox{NoteColor}{\\usebox{\\@tempboxa}}}%
+ {%
+ \\ifthenelse%
+ {\\equal{\\py@noticetype}{warning}}%
+ {\\colorbox{WarningColor}{\\usebox{\\@tempboxa}}}%
+ {%
+ \\ifthenelse%
+ {\\equal{\\py@noticetype}{attention}}%
+ {\\colorbox{AttentionColor}{\\usebox{\\@tempboxa}}}%
+ {\\colorbox{OtherColor}{\\usebox{\\@tempboxa}}}%
+ }%
+ }%
+ }\\makeatother
+
+ \\makeatletter
+ \\renewenvironment{notice}[2]{%
+ \\def\\py@noticetype{#1}
+ \\begin{coloredbox}{#1}
+ \\bf\\it
+ \\par\\strong{#2}
+ \\csname py@noticestart@#1\\endcsname
+ }
+ {
+ \\csname py@noticeend@\\py@noticetype\\endcsname
+ \\end{coloredbox}
+ }
+ \\makeatother
+
+ % Use some font with UTF-8 support with XeLaTeX
+ \\usepackage{fontspec}
+ \\setsansfont{DejaVu Serif}
+ \\setromanfont{DejaVu Sans}
+ \\setmonofont{DejaVu Sans Mono}
+
+ % To allow adjusting table sizes
+ \\usepackage{adjustbox}
+
+ '''
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
- (master_doc, 'TheLinuxKernel.tex', 'The Linux Kernel Documentation',
+ ('kernel-documentation', 'kernel-documentation.tex', 'The Linux Kernel Documentation',
+ 'The kernel development community', 'manual'),
+ ('development-process/index', 'development-process.tex', 'Linux Kernel Development Documentation',
+ 'The kernel development community', 'manual'),
+ ('gpu/index', 'gpu.tex', 'Linux GPU Driver Developer\'s Guide',
+ 'The kernel development community', 'manual'),
+ ('media/index', 'media.tex', 'Linux Media Subsystem Documentation',
'The kernel development community', 'manual'),
]
@@ -419,3 +493,9 @@
# line arguments.
kerneldoc_bin = '../scripts/kernel-doc'
kerneldoc_srctree = '..'
+
+# ------------------------------------------------------------------------------
+# Since loadConfig overwrites settings from the global namespace, it has to be
+# the last statement in the conf.py file
+# ------------------------------------------------------------------------------
+loadConfig(globals())
diff --git a/Documentation/dev-tools/coccinelle.rst b/Documentation/dev-tools/coccinelle.rst
new file mode 100644
index 0000000..4a64b4c
--- /dev/null
+++ b/Documentation/dev-tools/coccinelle.rst
@@ -0,0 +1,491 @@
+.. Copyright 2010 Nicolas Palix <npalix@diku.dk>
+.. Copyright 2010 Julia Lawall <julia@diku.dk>
+.. Copyright 2010 Gilles Muller <Gilles.Muller@lip6.fr>
+
+.. highlight:: none
+
+Coccinelle
+==========
+
+Coccinelle is a tool for pattern matching and text transformation that has
+many uses in kernel development, including the application of complex,
+tree-wide patches and detection of problematic programming patterns.
+
+Getting Coccinelle
+-------------------
+
+The semantic patches included in the kernel use features and options
+which are provided by Coccinelle version 1.0.0-rc11 and above.
+Using earlier versions will fail as the option names used by
+the Coccinelle files and coccicheck have been updated.
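+
+A quick way to check which version you have installed is to ask spatch
+itself::
+
+    spatch --version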
+
+Coccinelle is available through the package manager
+of many distributions, e.g.:
+
+ - Debian
+ - Fedora
+ - Ubuntu
+ - OpenSUSE
+ - Arch Linux
+ - NetBSD
+ - FreeBSD
+
+You can get the latest version released from the Coccinelle homepage at
+http://coccinelle.lip6.fr/
+
+Information and tips about Coccinelle are also provided on the wiki
+pages at http://cocci.ekstranet.diku.dk/wiki/doku.php
+
+Once you have it, run the following commands::
+
+ ./configure
+ make
+
+as a regular user, and install it with::
+
+ sudo make install
+
+Supplemental documentation
+---------------------------
+
+For supplemental documentation refer to the wiki:
+
+https://bottest.wiki.kernel.org/coccicheck
+
+The wiki documentation always refers to the linux-next version of the script.
+
+Using Coccinelle on the Linux kernel
+------------------------------------
+
+A Coccinelle-specific target is defined in the top level
+Makefile. This target is named ``coccicheck`` and calls the ``coccicheck``
+front-end in the ``scripts`` directory.
+
+Four basic modes are defined: ``patch``, ``report``, ``context``, and
+``org``. The mode to use is specified by setting the MODE variable with
+``MODE=<mode>``.
+
+- ``patch`` proposes a fix, when possible.
+
+- ``report`` generates a list in the following format:
+ file:line:column-column: message
+
+- ``context`` highlights lines of interest and their context in a
+  diff-like style. Lines of interest are indicated with ``-``.
+
+- ``org`` generates a report in the Org mode format of Emacs.
+
+Note that not all semantic patches implement all modes. For easy use
+of Coccinelle, the default mode is "report".
+
+Two other modes provide some common combinations of these modes.
+
+- ``chain`` tries the previous modes in the order above until one succeeds.
+
+- ``rep+ctxt`` runs the report mode and the context mode in succession.
+ It should be used with the C option (described later)
+ which checks the code on a file basis.
+
+Examples
+~~~~~~~~
+
+To make a report for every semantic patch, run the following command::
+
+ make coccicheck MODE=report
+
+To produce patches, run::
+
+ make coccicheck MODE=patch
+
+
+The coccicheck target applies every semantic patch available in the
+sub-directories of ``scripts/coccinelle`` to the entire Linux kernel.
+
+For each semantic patch, a commit message is proposed. It gives a
+description of the problem being checked by the semantic patch, and
+includes a reference to Coccinelle.
+
+As with any static code analyzer, Coccinelle produces false
+positives. Thus, reports must be carefully checked, and patches
+reviewed.
+
+To enable verbose messages set the V= variable, for example::
+
+ make coccicheck MODE=report V=1
+
+Coccinelle parallelization
+---------------------------
+
+By default, coccicheck tries to run with as much parallelism as possible. To change
+the parallelism, set the J= variable. For example, to run across 4 CPUs::
+
+ make coccicheck MODE=report J=4
+
+As of Coccinelle 1.0.2, Coccinelle uses the OCaml parmap library for
+parallelization; if support for this is detected, you will benefit from
+parmap parallelization.
+
+When parmap is enabled, coccicheck will enable dynamic load balancing by using
+the ``--chunksize 1`` argument; this ensures we keep feeding threads with work
+one by one, so that we avoid the situation where most of the work gets done by
+only a few threads. With dynamic load balancing, if a thread finishes early we
+keep feeding it more work.
+
+When parmap is enabled, if an error occurs in Coccinelle, this error
+value is propagated back, and the return value of the ``make coccicheck``
+command captures it.
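+
+For example, a quick sketch of checking that exit status from a shell::
+
+    make coccicheck MODE=report COCCI=scripts/coccinelle/free/kfree.cocci
+    echo $?    # non-zero if spatch reported an error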
+
+Using Coccinelle with a single semantic patch
+---------------------------------------------
+
+The optional make variable COCCI can be used to check a single
+semantic patch. In that case, the variable must be initialized with
+the name of the semantic patch to apply.
+
+For instance::
+
+ make coccicheck COCCI=<my_SP.cocci> MODE=patch
+
+or::
+
+ make coccicheck COCCI=<my_SP.cocci> MODE=report
+
+
+Controlling Which Files are Processed by Coccinelle
+---------------------------------------------------
+
+By default the entire kernel source tree is checked.
+
+To apply Coccinelle to a specific directory, ``M=`` can be used.
+For example, to check drivers/net/wireless/ one may write::
+
+ make coccicheck M=drivers/net/wireless/
+
+To apply Coccinelle on a file basis, instead of a directory basis, the
+following command may be used::
+
+ make C=1 CHECK="scripts/coccicheck"
+
+To check only newly edited code, use the value 2 for the C flag, i.e.::
+
+ make C=2 CHECK="scripts/coccicheck"
+
+In these modes, which work on a file basis, no information about the
+semantic patches is displayed, and no commit message is proposed.
+
+This runs every semantic patch in scripts/coccinelle by default. The
+COCCI variable may additionally be used to only apply a single
+semantic patch as shown in the previous section.
+
+The "report" mode is the default. You can select another one with the
+MODE variable explained above.
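+
+For instance, a file-basis run restricted to a single semantic patch and a
+given mode might look like this (an illustrative sketch; the COCCI and MODE
+variables are picked up by the coccicheck script)::
+
+    make C=1 CHECK="scripts/coccicheck" COCCI=scripts/coccinelle/free/kfree.cocci MODE=rep+ctxt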
+
+Debugging Coccinelle SmPL patches
+---------------------------------
+
+Using coccicheck is best, as it provides the include options on the spatch
+command line that match the options used when the kernel is compiled.
+You can learn what these options are by using V=1; you can then
+manually run Coccinelle with debug options added.
+
+Alternatively you can debug running Coccinelle against SmPL patches by
+asking for stderr to be redirected to a file; by default stderr is
+redirected to /dev/null. If you'd like to capture stderr you
+can specify the ``DEBUG_FILE="file.txt"`` option to coccicheck. For
+instance::
+
+ rm -f cocci.err
+ make coccicheck COCCI=scripts/coccinelle/free/kfree.cocci MODE=report DEBUG_FILE=cocci.err
+ cat cocci.err
+
+You can use SPFLAGS to add debugging flags; for instance, you may want to
+add both ``--profile`` and ``--show-trying`` to SPFLAGS when debugging::
+
+ rm -f err.log
+ export COCCI=scripts/coccinelle/misc/irqf_oneshot.cocci
+ make coccicheck DEBUG_FILE="err.log" MODE=report SPFLAGS="--profile --show-trying" M=./drivers/mfd/arizona-irq.c
+
+err.log will now have the profiling information, while stdout will
+provide some progress information as Coccinelle moves forward with
+work.
+
+DEBUG_FILE support is only available when using Coccinelle >= 1.2.
+
+.cocciconfig support
+--------------------
+
+Coccinelle supports reading .cocciconfig for default Coccinelle options that
+should be used every time spatch is spawned. The order of precedence for
+.cocciconfig files is as follows:
+
+- Your current user's home directory is processed first
+- Your directory from which spatch is called is processed next
+- The directory provided with the --dir option is processed last, if used
+
+Since coccicheck runs through make, it naturally runs from the kernel
+source tree; as such, the second rule above is implied for picking up a
+.cocciconfig when using ``make coccicheck``.
+
+``make coccicheck`` also supports using M= targets. If you do not supply
+any M= target, it is assumed you want to target the entire kernel.
+The kernel coccicheck script has::
+
+ if [ "$KBUILD_EXTMOD" = "" ] ; then
+ OPTIONS="--dir $srctree $COCCIINCLUDE"
+ else
+ OPTIONS="--dir $KBUILD_EXTMOD $COCCIINCLUDE"
+ fi
+
+KBUILD_EXTMOD is set when an explicit target with M= is used. In both cases
+the spatch --dir argument is used, so the third rule applies whether M=
+is used or not, and when M= is used the target directory can have its own
+.cocciconfig file. When M= is not passed as an argument to coccicheck, the
+target directory is the same as the directory from which spatch was called.
+
+If not using the kernel's coccicheck target, keep the above precedence
+order of .cocciconfig reading in mind. If using the kernel's coccicheck
+target, override any of the kernel's .cocciconfig settings using SPFLAGS.
+
+We help Coccinelle when used against Linux with a set of sensible default
+options through the kernel's own .cocciconfig. This hints to Coccinelle that
+git can be used for ``git grep`` queries over coccigrep. A timeout of 200
+seconds should suffice for now.
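+
+For reference, the kernel's .cocciconfig looks roughly like this (check the
+file at the top of your source tree for the authoritative contents)::
+
+    [spatch]
+        options = --timeout 200
+        options = --use-gitgrep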
+
+The options picked up by Coccinelle when reading a .cocciconfig do not appear
+as arguments to the spatch processes running on your system. To confirm what
+options will be used by Coccinelle, run::
+
+ spatch --print-options-only
+
+You can override this with your own preferred index option by using SPFLAGS.
+Take note that when there are conflicting options, Coccinelle gives precedence
+to the last option passed. Although .cocciconfig can be used to select idutils,
+given the order of precedence followed by Coccinelle and the fact that the
+kernel now carries its own .cocciconfig, you will need to use SPFLAGS to use
+idutils if desired. See the "Additional flags" section below for more details
+on how to use idutils.
+
+Additional flags
+----------------
+
+Additional flags can be passed to spatch through the SPFLAGS
+variable. This works as Coccinelle respects the last flags
+given to it when options are in conflict. ::
+
+ make SPFLAGS=--use-glimpse coccicheck
+
+Coccinelle supports idutils as well, but requires Coccinelle >= 1.0.6.
+When no ID file is specified, Coccinelle assumes your ID database file
+is .id-utils.index at the top level of the kernel tree; the kernel
+carries a script, scripts/idutils_index.sh, which creates the database with::
+
+ mkid -i C --output .id-utils.index
+
+If you have another database filename, you can also just symlink it to this
+name. ::
+
+ make SPFLAGS=--use-idutils coccicheck
+
+Alternatively you can specify the database filename explicitly, for
+instance::
+
+ make SPFLAGS="--use-idutils /full-path/to/ID" coccicheck
+
+See ``spatch --help`` to learn more about spatch options.
+
+Note that the ``--use-glimpse`` and ``--use-idutils`` options
+require external tools for indexing the code. Neither of them is
+therefore active by default. However, by indexing the code with
+one of these tools, and depending on the cocci file used,
+spatch can process the entire code base more quickly.
+
+SmPL patch specific options
+---------------------------
+
+SmPL patches can have their own requirements for options passed
+to Coccinelle. SmPL patch specific options can be provided by
+providing them at the top of the SmPL patch, for instance::
+
+ // Options: --no-includes --include-headers
+
+SmPL patch Coccinelle requirements
+----------------------------------
+
+As Coccinelle features get added, some more advanced SmPL patches
+may require newer versions of Coccinelle. If an SmPL patch requires
+a minimum version of Coccinelle, this can be specified as follows,
+for example if requiring at least Coccinelle 1.0.5::
+
+ // Requires: 1.0.5
+
+Proposing new semantic patches
+-------------------------------
+
+New semantic patches can be proposed and submitted by kernel
+developers. For the sake of clarity, they should be organized in the
+sub-directories of ``scripts/coccinelle/``.
+
+
+Detailed description of the ``report`` mode
+-------------------------------------------
+
+``report`` generates a list in the following format::
+
+ file:line:column-column: message
+
+Example
+~~~~~~~
+
+Running::
+
+ make coccicheck MODE=report COCCI=scripts/coccinelle/api/err_cast.cocci
+
+will execute the following part of the SmPL script::
+
+ <smpl>
+ @r depends on !context && !patch && (org || report)@
+ expression x;
+ position p;
+ @@
+
+ ERR_PTR@p(PTR_ERR(x))
+
+ @script:python depends on report@
+ p << r.p;
+ x << r.x;
+ @@
+
+ msg="ERR_CAST can be used with %s" % (x)
+ coccilib.report.print_report(p[0], msg)
+ </smpl>
+
+This SmPL excerpt generates entries on the standard output, as
+illustrated below::
+
+ /home/user/linux/crypto/ctr.c:188:9-16: ERR_CAST can be used with alg
+ /home/user/linux/crypto/authenc.c:619:9-16: ERR_CAST can be used with auth
+ /home/user/linux/crypto/xts.c:227:9-16: ERR_CAST can be used with alg
+
+
+Detailed description of the ``patch`` mode
+------------------------------------------
+
+When the ``patch`` mode is available, it proposes a fix for each problem
+identified.
+
+Example
+~~~~~~~
+
+Running::
+
+ make coccicheck MODE=patch COCCI=scripts/coccinelle/api/err_cast.cocci
+
+will execute the following part of the SmPL script::
+
+ <smpl>
+ @ depends on !context && patch && !org && !report @
+ expression x;
+ @@
+
+ - ERR_PTR(PTR_ERR(x))
+ + ERR_CAST(x)
+ </smpl>
+
+This SmPL excerpt generates patch hunks on the standard output, as
+illustrated below::
+
+ diff -u -p a/crypto/ctr.c b/crypto/ctr.c
+ --- a/crypto/ctr.c 2010-05-26 10:49:38.000000000 +0200
+ +++ b/crypto/ctr.c 2010-06-03 23:44:49.000000000 +0200
+ @@ -185,7 +185,7 @@ static struct crypto_instance *crypto_ct
+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ - return ERR_PTR(PTR_ERR(alg));
+ + return ERR_CAST(alg);
+
+ /* Block size must be >= 4 bytes. */
+ err = -EINVAL;
+
+Detailed description of the ``context`` mode
+--------------------------------------------
+
+``context`` highlights lines of interest and their context
+in a diff-like style.
+
+ **NOTE**: The diff-like output generated is NOT an applicable patch. The
+ intent of the ``context`` mode is to highlight the important lines
+ (annotated with minus, ``-``) and to give some surrounding context
+ lines. This output can be used with the diff mode of
+ Emacs to review the code.
+
+Example
+~~~~~~~
+
+Running::
+
+ make coccicheck MODE=context COCCI=scripts/coccinelle/api/err_cast.cocci
+
+will execute the following part of the SmPL script::
+
+ <smpl>
+ @ depends on context && !patch && !org && !report@
+ expression x;
+ @@
+
+ * ERR_PTR(PTR_ERR(x))
+ </smpl>
+
+This SmPL excerpt generates diff hunks on the standard output, as
+illustrated below::
+
+ diff -u -p /home/user/linux/crypto/ctr.c /tmp/nothing
+ --- /home/user/linux/crypto/ctr.c 2010-05-26 10:49:38.000000000 +0200
+ +++ /tmp/nothing
+ @@ -185,7 +185,6 @@ static struct crypto_instance *crypto_ct
+ alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER,
+ CRYPTO_ALG_TYPE_MASK);
+ if (IS_ERR(alg))
+ - return ERR_PTR(PTR_ERR(alg));
+
+ /* Block size must be >= 4 bytes. */
+ err = -EINVAL;
+
+Detailed description of the ``org`` mode
+----------------------------------------
+
+``org`` generates a report in the Org mode format of Emacs.
+
+Example
+~~~~~~~
+
+Running::
+
+ make coccicheck MODE=org COCCI=scripts/coccinelle/api/err_cast.cocci
+
+will execute the following part of the SmPL script::
+
+ <smpl>
+ @r depends on !context && !patch && (org || report)@
+ expression x;
+ position p;
+ @@
+
+ ERR_PTR@p(PTR_ERR(x))
+
+ @script:python depends on org@
+ p << r.p;
+ x << r.x;
+ @@
+
+ msg="ERR_CAST can be used with %s" % (x)
+ msg_safe=msg.replace("[","@(").replace("]",")")
+ coccilib.org.print_todo(p[0], msg_safe)
+ </smpl>
+
+This SmPL excerpt generates Org entries on the standard output, as
+illustrated below::
+
+ * TODO [[view:/home/user/linux/crypto/ctr.c::face=ovl-face1::linb=188::colb=9::cole=16][ERR_CAST can be used with alg]]
+ * TODO [[view:/home/user/linux/crypto/authenc.c::face=ovl-face1::linb=619::colb=9::cole=16][ERR_CAST can be used with auth]]
+ * TODO [[view:/home/user/linux/crypto/xts.c::face=ovl-face1::linb=227::colb=9::cole=16][ERR_CAST can be used with alg]]
diff --git a/Documentation/dev-tools/gcov.rst b/Documentation/dev-tools/gcov.rst
new file mode 100644
index 0000000..19eedfe
--- /dev/null
+++ b/Documentation/dev-tools/gcov.rst
@@ -0,0 +1,256 @@
+Using gcov with the Linux kernel
+================================
+
+The gcov kernel profiling support enables the use of GCC's coverage testing
+tool gcov_ with the Linux kernel. Coverage data of a running kernel
+is exported in gcov-compatible format via the "gcov" debugfs directory.
+To get coverage data for a specific file, change to the kernel build
+directory and use gcov with the ``-o`` option as follows (requires root)::
+
+ # cd /tmp/linux-out
+ # gcov -o /sys/kernel/debug/gcov/tmp/linux-out/kernel spinlock.c
+
+This will create source code files annotated with execution counts
+in the current directory. In addition, graphical gcov front-ends such
+as lcov_ can be used to automate the process of collecting data
+for the entire kernel and provide coverage overviews in HTML format.
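+
+With lcov, such an overview can be produced along these lines (illustrative
+only; the exact invocation depends on your lcov version, so consult the lcov
+documentation for kernel-specific capture options)::
+
+    lcov --capture --directory /sys/kernel/debug/gcov --output-file kernel.info
+    genhtml kernel.info --output-directory coverage-html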
+
+Possible uses:
+
+* debugging (has this line been reached at all?)
+* test improvement (how do I change my test to cover these lines?)
+* minimizing kernel configurations (do I need this option if the
+ associated code is never run?)
+
+.. _gcov: http://gcc.gnu.org/onlinedocs/gcc/Gcov.html
+.. _lcov: http://ltp.sourceforge.net/coverage/lcov.php
+
+
+Preparation
+-----------
+
+Configure the kernel with::
+
+ CONFIG_DEBUG_FS=y
+ CONFIG_GCOV_KERNEL=y
+
+select gcc's gcov format; the default is to autodetect based on the gcc version::
+
+ CONFIG_GCOV_FORMAT_AUTODETECT=y
+
+and to get coverage data for the entire kernel::
+
+ CONFIG_GCOV_PROFILE_ALL=y
+
+Note that kernels compiled with profiling flags will be significantly
+larger and run slower. Also CONFIG_GCOV_PROFILE_ALL may not be supported
+on all architectures.
+
+Profiling data will only become accessible once debugfs has been
+mounted::
+
+ mount -t debugfs none /sys/kernel/debug
+
+
+Customization
+-------------
+
+To enable profiling for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+- For a single file (e.g. main.o)::
+
+ GCOV_PROFILE_main.o := y
+
+- For all files in one directory::
+
+ GCOV_PROFILE := y
+
+To exclude files from being profiled even when CONFIG_GCOV_PROFILE_ALL
+is specified, use::
+
+ GCOV_PROFILE_main.o := n
+
+and::
+
+ GCOV_PROFILE := n
+
+Only files which are linked to the main kernel image or are compiled as
+kernel modules are supported by this mechanism.
+
+
+Files
+-----
+
+The gcov kernel support creates the following files in debugfs:
+
+``/sys/kernel/debug/gcov``
+ Parent directory for all gcov-related files.
+
+``/sys/kernel/debug/gcov/reset``
+ Global reset file: resets all coverage data to zero when
+ written to.
+
+``/sys/kernel/debug/gcov/path/to/compile/dir/file.gcda``
+ The actual gcov data file as understood by the gcov
+ tool. Resets file coverage data to zero when written to.
+
+``/sys/kernel/debug/gcov/path/to/compile/dir/file.gcno``
+ Symbolic link to a static data file required by the gcov
+ tool. This file is generated by gcc when compiling with
+ option ``-ftest-coverage``.
+
+
+Modules
+-------
+
+Kernel modules may contain cleanup code which is only run during
+module unload time. The gcov mechanism provides a means to collect
+coverage data for such code by keeping a copy of the data associated
+with the unloaded module. This data remains available through debugfs.
+Once the module is loaded again, the associated coverage counters are
+initialized with the data from its previous instantiation.
+
+This behavior can be deactivated by specifying the gcov_persist kernel
+parameter::
+
+ gcov_persist=0
+
+At run-time, a user can also choose to discard data for an unloaded
+module by writing to its data file or the global reset file.
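+
+For example, to discard data by hand (paths follow the layout described
+above)::
+
+    # Discard data for one object file
+    echo 0 > /sys/kernel/debug/gcov/tmp/linux-out/kernel/spinlock.gcda
+
+    # Reset all coverage counters
+    echo 0 > /sys/kernel/debug/gcov/reset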
+
+
+Separated build and test machines
+---------------------------------
+
+The gcov kernel profiling infrastructure is designed to work out of the
+box for setups where kernels are built and run on the same machine. In
+cases where the kernel runs on a separate machine, special preparations
+must be made, depending on where the gcov tool is used:
+
+a) gcov is run on the TEST machine
+
+ The gcov tool version on the test machine must be compatible with the
+ gcc version used for kernel build. Also the following files need to be
+ copied from build to test machine:
+
+ from the source tree:
+ - all C source files + headers
+
+ from the build tree:
+ - all C source files + headers
+ - all .gcda and .gcno files
+ - all links to directories
+
+ It is important to note that these files need to be placed into the
+ exact same file system location on the test machine as on the build
+ machine. If any of the path components is a symbolic link, the actual
+ directory needs to be used instead (due to make's CURDIR handling).
+
+b) gcov is run on the BUILD machine
+
+ The following files need to be copied after each test case from test
+ to build machine:
+
+ from the gcov directory in sysfs:
+ - all .gcda files
+ - all links to .gcno files
+
+ These files can be copied to any location on the build machine. gcov
+ must then be called with the -o option pointing to that directory.
+
+ Example directory setup on the build machine::
+
+ /tmp/linux: kernel source tree
+ /tmp/out: kernel build directory as specified by make O=
+ /tmp/coverage: location of the files copied from the test machine
+
+ [user@build] cd /tmp/out
+ [user@build] gcov -o /tmp/coverage/tmp/out/init main.c
+
+
+Troubleshooting
+---------------
+
+Problem
+ Compilation aborts during linker step.
+
+Cause
+ Profiling flags are specified for source files which are not
+ linked to the main kernel or which are linked by a custom
+ linker procedure.
+
+Solution
+ Exclude affected source files from profiling by specifying
+ ``GCOV_PROFILE := n`` or ``GCOV_PROFILE_basename.o := n`` in the
+ corresponding Makefile.
+
+Problem
+ Files copied from sysfs appear empty or incomplete.
+
+Cause
+ Due to the way seq_file works, some tools such as cp or tar
+ may not correctly copy files from sysfs.
+
+Solution
+ Use ``cat`` to read ``.gcda`` files and ``cp -d`` to copy links.
+ Alternatively use the mechanism shown in Appendix B.
+
+
+Appendix A: gather_on_build.sh
+------------------------------
+
+Sample script to gather coverage meta files on the build machine
+(see the "Separated build and test machines" section, case a)::
+
+ #!/bin/bash
+
+ KSRC=$1
+ KOBJ=$2
+ DEST=$3
+
+ if [ -z "$KSRC" ] || [ -z "$KOBJ" ] || [ -z "$DEST" ]; then
+ echo "Usage: $0 <ksrc directory> <kobj directory> <output.tar.gz>" >&2
+ exit 1
+ fi
+
+ KSRC=$(cd $KSRC; printf "all:\n\t@echo \${CURDIR}\n" | make -f -)
+ KOBJ=$(cd $KOBJ; printf "all:\n\t@echo \${CURDIR}\n" | make -f -)
+
+ find $KSRC $KOBJ \( -name '*.gcno' -o -name '*.[ch]' -o -type l \) -a \
+ -perm /u+r,g+r | tar cfz $DEST -P -T -
+
+ if [ $? -eq 0 ] ; then
+ echo "$DEST successfully created, copy to test system and unpack with:"
+ echo " tar xfz $DEST -P"
+ else
+ echo "Could not create file $DEST"
+ fi
+
+
+Appendix B: gather_on_test.sh
+-----------------------------
+
+Sample script to gather coverage data files on the test machine
+(see the "Separated build and test machines" section, case b)::
+
+ #!/bin/bash -e
+
+ DEST=$1
+ GCDA=/sys/kernel/debug/gcov
+
+ if [ -z "$DEST" ] ; then
+ echo "Usage: $0 <output.tar.gz>" >&2
+ exit 1
+ fi
+
+ TEMPDIR=$(mktemp -d)
+ echo Collecting data..
+ find $GCDA -type d -exec mkdir -p $TEMPDIR/\{\} \;
+ find $GCDA -name '*.gcda' -exec sh -c 'cat < $0 > '$TEMPDIR'/$0' {} \;
+ find $GCDA -name '*.gcno' -exec sh -c 'cp -d $0 '$TEMPDIR'/$0' {} \;
+ tar czf $DEST -C $TEMPDIR sys
+ rm -rf $TEMPDIR
+
+ echo "$DEST successfully created, copy to build system and unpack with:"
+ echo " tar xfz $DEST"
diff --git a/Documentation/dev-tools/gdb-kernel-debugging.rst b/Documentation/dev-tools/gdb-kernel-debugging.rst
new file mode 100644
index 0000000..5e93c9b
--- /dev/null
+++ b/Documentation/dev-tools/gdb-kernel-debugging.rst
@@ -0,0 +1,173 @@
+.. highlight:: none
+
+Debugging kernel and modules via gdb
+====================================
+
+The kernel debugger kgdb, hypervisors like QEMU, or JTAG-based hardware
+interfaces allow the Linux kernel and its modules to be debugged at runtime
+using gdb. Gdb comes with a powerful scripting interface for Python. The
+kernel provides a collection of helper scripts that can simplify typical
+kernel debugging steps. This is a short tutorial about how to enable and use
+them. It focuses on QEMU/KVM virtual machines as target, but the examples can
+be transferred to the other gdb stubs as well.
+
+
+Requirements
+------------
+
+- gdb 7.2+ (recommended: 7.4+) with python support enabled (typically true
+ for distributions)
+
+
+Setup
+-----
+
+- Create a virtual Linux machine for QEMU/KVM (see www.linux-kvm.org and
+ www.qemu.org for more details). For cross-development,
+ http://landley.net/aboriginal/bin keeps a pool of machine images and
+ toolchains that can be helpful to start from.
+
+- Build the kernel with CONFIG_GDB_SCRIPTS enabled, but leave
+ CONFIG_DEBUG_INFO_REDUCED off. If your architecture supports
+ CONFIG_FRAME_POINTER, keep it enabled.
+
+- Install that kernel on the guest.
+ Alternatively, QEMU allows booting the kernel directly using the -kernel,
+ -append and -initrd command line switches. This is generally only useful if
+ you do not depend on modules. See QEMU documentation for more details on
+ this mode.
+
+- Enable the gdb stub of QEMU/KVM, either
+
+ - at VM startup time by appending "-s" to the QEMU command line
+
+ or
+
+ - during runtime by issuing "gdbserver" from the QEMU monitor
+ console
+
+- cd /path/to/linux-build
+
+- Start gdb: gdb vmlinux
+
+ Note: Some distros may restrict auto-loading of gdb scripts to known safe
+ directories. In case gdb refuses to load vmlinux-gdb.py, add::
+
+ add-auto-load-safe-path /path/to/linux-build
+
+ to ~/.gdbinit. See gdb help for more details.
+
+- Attach to the booted guest::
+
+ (gdb) target remote :1234
+
+
+Examples of using the Linux-provided gdb helpers
+------------------------------------------------
+
+- Load module (and main kernel) symbols::
+
+ (gdb) lx-symbols
+ loading vmlinux
+ scanning for modules in /home/user/linux/build
+ loading @0xffffffffa0020000: /home/user/linux/build/net/netfilter/xt_tcpudp.ko
+ loading @0xffffffffa0016000: /home/user/linux/build/net/netfilter/xt_pkttype.ko
+ loading @0xffffffffa0002000: /home/user/linux/build/net/netfilter/xt_limit.ko
+ loading @0xffffffffa00ca000: /home/user/linux/build/net/packet/af_packet.ko
+ loading @0xffffffffa003c000: /home/user/linux/build/fs/fuse/fuse.ko
+ ...
+ loading @0xffffffffa0000000: /home/user/linux/build/drivers/ata/ata_generic.ko
+
+- Set a breakpoint on some not yet loaded module function, e.g.::
+
+ (gdb) b btrfs_init_sysfs
+ Function "btrfs_init_sysfs" not defined.
+ Make breakpoint pending on future shared library load? (y or [n]) y
+ Breakpoint 1 (btrfs_init_sysfs) pending.
+
+- Continue the target::
+
+ (gdb) c
+
+- Load the module on the target and watch the symbols being loaded as well as
+ the breakpoint hit::
+
+ loading @0xffffffffa0034000: /home/user/linux/build/lib/libcrc32c.ko
+ loading @0xffffffffa0050000: /home/user/linux/build/lib/lzo/lzo_compress.ko
+ loading @0xffffffffa006e000: /home/user/linux/build/lib/zlib_deflate/zlib_deflate.ko
+ loading @0xffffffffa01b1000: /home/user/linux/build/fs/btrfs/btrfs.ko
+
+ Breakpoint 1, btrfs_init_sysfs () at /home/user/linux/fs/btrfs/sysfs.c:36
+ 36 btrfs_kset = kset_create_and_add("btrfs", NULL, fs_kobj);
+
+- Dump the log buffer of the target kernel::
+
+ (gdb) lx-dmesg
+ [ 0.000000] Initializing cgroup subsys cpuset
+ [ 0.000000] Initializing cgroup subsys cpu
+ [ 0.000000] Linux version 3.8.0-rc4-dbg+ (...
+ [ 0.000000] Command line: root=/dev/sda2 resume=/dev/sda1 vga=0x314
+ [ 0.000000] e820: BIOS-provided physical RAM map:
+ [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
+ [ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
+ ....
+
+- Examine fields of the current task struct::
+
+ (gdb) p $lx_current().pid
+ $1 = 4998
+ (gdb) p $lx_current().comm
+ $2 = "modprobe\000\000\000\000\000\000\000"
+
+- Make use of the per-cpu function for the current or a specified CPU::
+
+ (gdb) p $lx_per_cpu("runqueues").nr_running
+ $3 = 1
+ (gdb) p $lx_per_cpu("runqueues", 2).nr_running
+ $4 = 0
+
+- Dig into hrtimers using the container_of helper::
+
+ (gdb) set $next = $lx_per_cpu("hrtimer_bases").clock_base[0].active.next
+ (gdb) p *$container_of($next, "struct hrtimer", "node")
+ $5 = {
+ node = {
+ node = {
+ __rb_parent_color = 18446612133355256072,
+ rb_right = 0x0 <irq_stack_union>,
+ rb_left = 0x0 <irq_stack_union>
+ },
+ expires = {
+ tv64 = 1835268000000
+ }
+ },
+ _softexpires = {
+ tv64 = 1835268000000
+ },
+ function = 0xffffffff81078232 <tick_sched_timer>,
+ base = 0xffff88003fd0d6f0,
+ state = 1,
+ start_pid = 0,
+ start_site = 0xffffffff81055c1f <hrtimer_start_range_ns+20>,
+ start_comm = "swapper/2\000\000\000\000\000\000"
+ }
+
+
+List of commands and functions
+------------------------------
+
+The number of commands and convenience functions may evolve over time;
+this is just a snapshot of the initial version::
+
+ (gdb) apropos lx
+ function lx_current -- Return current task
+ function lx_module -- Find module by name and return the module variable
+ function lx_per_cpu -- Return per-cpu variable
+ function lx_task_by_pid -- Find Linux task by PID and return the task_struct variable
+ function lx_thread_info -- Calculate Linux thread_info from task variable
+ lx-dmesg -- Print Linux kernel log buffer
+ lx-lsmod -- List currently loaded modules
+ lx-symbols -- (Re-)load symbols of Linux kernel and currently loaded modules
+
+Detailed help can be obtained via "help <command-name>" for commands and "help
+function <function-name>" for convenience functions.
diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
new file mode 100644
index 0000000..f7a18f2
--- /dev/null
+++ b/Documentation/dev-tools/kasan.rst
@@ -0,0 +1,173 @@
+The Kernel Address Sanitizer (KASAN)
+====================================
+
+Overview
+--------
+
+KernelAddressSANitizer (KASAN) is a dynamic memory error detector. It provides
+a fast and comprehensive solution for finding use-after-free and out-of-bounds
+bugs.
+
+KASAN uses compile-time instrumentation for checking every memory access;
+therefore you will need GCC version 4.9.2 or later. GCC 5.0 or later is
+required for detection of out-of-bounds accesses to stack or global variables.
+
+Currently KASAN is supported only for the x86_64 and arm64 architectures.
+
+Usage
+-----
+
+To enable KASAN configure kernel with::
+
+ CONFIG_KASAN=y
+
+and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline and
+inline are compiler instrumentation types. The former produces a smaller
+binary, while the latter is 1.1 - 2 times faster. Inline instrumentation requires a GCC
+version 5.0 or later.
+
+KASAN works with both SLUB and SLAB memory allocators.
+For better bug detection and nicer reporting, enable CONFIG_STACKTRACE.
+
+To disable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+- For a single file (e.g. main.o)::
+
+ KASAN_SANITIZE_main.o := n
+
+- For all files in one directory::
+
+ KASAN_SANITIZE := n
+
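+If the kernel was built with the KASAN test module enabled (CONFIG_TEST_KASAN,
+assuming it is available in your tree), loading it is a quick way to trigger
+sample reports like the one shown below::
+
+    modprobe test_kasan
+    dmesg | tail -n 60
+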
+Error reports
+~~~~~~~~~~~~~
+
+A typical out of bounds access report looks like this::
+
+ ==================================================================
+ BUG: AddressSanitizer: out of bounds access in kmalloc_oob_right+0x65/0x75 [test_kasan] at addr ffff8800693bc5d3
+ Write of size 1 by task modprobe/1689
+ =============================================================================
+ BUG kmalloc-128 (Not tainted): kasan error
+ -----------------------------------------------------------------------------
+
+ Disabling lock debugging due to kernel taint
+ INFO: Allocated in kmalloc_oob_right+0x3d/0x75 [test_kasan] age=0 cpu=0 pid=1689
+ __slab_alloc+0x4b4/0x4f0
+ kmem_cache_alloc_trace+0x10b/0x190
+ kmalloc_oob_right+0x3d/0x75 [test_kasan]
+ init_module+0x9/0x47 [test_kasan]
+ do_one_initcall+0x99/0x200
+ load_module+0x2cb3/0x3b20
+ SyS_finit_module+0x76/0x80
+ system_call_fastpath+0x12/0x17
+ INFO: Slab 0xffffea0001a4ef00 objects=17 used=7 fp=0xffff8800693bd728 flags=0x100000000004080
+ INFO: Object 0xffff8800693bc558 @offset=1368 fp=0xffff8800693bc720
+
+ Bytes b4 ffff8800693bc548: 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
+ Object ffff8800693bc558: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc568: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc578: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc588: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc598: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc5a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc5b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
+ Object ffff8800693bc5c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk.
+ Redzone ffff8800693bc5d8: cc cc cc cc cc cc cc cc ........
+ Padding ffff8800693bc718: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
+ CPU: 0 PID: 1689 Comm: modprobe Tainted: G B 3.18.0-rc1-mm1+ #98
+ Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
+ ffff8800693bc000 0000000000000000 ffff8800693bc558 ffff88006923bb78
+ ffffffff81cc68ae 00000000000000f3 ffff88006d407600 ffff88006923bba8
+ ffffffff811fd848 ffff88006d407600 ffffea0001a4ef00 ffff8800693bc558
+ Call Trace:
+ [<ffffffff81cc68ae>] dump_stack+0x46/0x58
+ [<ffffffff811fd848>] print_trailer+0xf8/0x160
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff811ff0f5>] object_err+0x35/0x40
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffff8120b9fa>] kasan_report_error+0x38a/0x3f0
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffff8120b344>] ? kasan_unpoison_shadow+0x14/0x40
+ [<ffffffff8120a79f>] ? kasan_poison_shadow+0x2f/0x40
+ [<ffffffffa00026a7>] ? kmem_cache_oob+0xc3/0xc3 [test_kasan]
+ [<ffffffff8120a995>] __asan_store1+0x75/0xb0
+ [<ffffffffa0002601>] ? kmem_cache_oob+0x1d/0xc3 [test_kasan]
+ [<ffffffffa0002065>] ? kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa0002065>] kmalloc_oob_right+0x65/0x75 [test_kasan]
+ [<ffffffffa00026b0>] init_module+0x9/0x47 [test_kasan]
+ [<ffffffff810002d9>] do_one_initcall+0x99/0x200
+ [<ffffffff811e4e5c>] ? __vunmap+0xec/0x160
+ [<ffffffff81114f63>] load_module+0x2cb3/0x3b20
+ [<ffffffff8110fd70>] ? m_show+0x240/0x240
+ [<ffffffff81115f06>] SyS_finit_module+0x76/0x80
+ [<ffffffff81cd3129>] system_call_fastpath+0x12/0x17
+ Memory state around the buggy address:
+ ffff8800693bc300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc380: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 fc
+ ffff8800693bc400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc480: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc500: fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00 00
+ >ffff8800693bc580: 00 00 00 00 00 00 00 00 00 00 03 fc fc fc fc fc
+ ^
+ ffff8800693bc600: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ ffff8800693bc700: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ffff8800693bc800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ ==================================================================
+
+The header of the report describes what kind of bug happened and what kind of
+access caused it. It is followed by a description of the accessed slub object
+(see the 'SLUB Debug output' section in Documentation/vm/slub.txt for details)
+and a description of the accessed memory page.
+
+In the last section the report shows memory state around the accessed address.
+Reading this part requires some understanding of how KASAN works.
+
+The state of each aligned 8-byte region of memory is encoded in one shadow byte.
+Those 8 bytes can be accessible, partially accessible, freed or a redzone.
+We use the following encoding for each shadow byte: 0 means that all 8 bytes
+of the corresponding memory region are accessible; a number N (1 <= N <= 7) means
+that the first N bytes are accessible and the other (8 - N) bytes are not;
+any negative value indicates that the entire 8-byte word is inaccessible.
+We use different negative values to distinguish between different kinds of
+inaccessible memory like redzones or freed memory (see mm/kasan/kasan.h).
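+
+As a worked example (illustrative), a 13-byte object starting at an 8-byte
+aligned address maps to two shadow bytes::
+
+    shadow[0] = 00    first 8 bytes fully accessible
+    shadow[1] = 05    only the first 5 of the next 8 bytes are accessible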
+
+In the report above the arrows point to the shadow byte 03, which means that
+the accessed address is partially accessible.
+
+
+Implementation details
+----------------------
+
+From a high level, our approach to memory error detection is similar to that
+of kmemcheck: use shadow memory to record whether each byte of memory is safe
+to access, and use compile-time instrumentation to check shadow memory on each
+memory access.
+
+AddressSanitizer dedicates 1/8 of kernel memory to its shadow memory
+(e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and
+offset to translate a memory address to its corresponding shadow address.
+
+Here is the function which translates an address to its corresponding shadow
+address::
+
+ static inline void *kasan_mem_to_shadow(const void *addr)
+ {
+ return ((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
+ + KASAN_SHADOW_OFFSET;
+ }
+
+where ``KASAN_SHADOW_SCALE_SHIFT = 3``.
+
+Compile-time instrumentation is used for checking memory accesses. The compiler
+inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each
+memory access of size 1, 2, 4, 8 or 16. These functions check whether the memory
+access is valid or not by checking the corresponding shadow memory.
+
+GCC 5.0 and later can perform inline instrumentation. Instead of making
+function calls, GCC directly inserts the code to check the shadow memory.
+This option significantly enlarges the kernel, but it gives a 1.1x-2x
+performance boost over an outline-instrumented kernel.
diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
new file mode 100644
index 0000000..aca0e27
--- /dev/null
+++ b/Documentation/dev-tools/kcov.rst
@@ -0,0 +1,111 @@
+kcov: code coverage for fuzzing
+===============================
+
+kcov exposes kernel code coverage information in a form suitable for coverage-
+guided fuzzing (randomized testing). Coverage data of a running kernel is
+exported via the "kcov" debugfs file. Coverage collection is enabled on a task
+basis, and thus it can capture precise coverage of a single system call.
+
+Note that kcov does not aim to collect as much coverage as possible. It aims
+to collect more or less stable coverage that is a function of syscall inputs.
+To achieve this goal it does not collect coverage in soft/hard interrupts,
+and instrumentation of some inherently non-deterministic parts of the kernel
+is disabled (e.g. scheduler, locking).
+
+Usage
+-----
+
+Configure the kernel with::
+
+ CONFIG_KCOV=y
+
+CONFIG_KCOV requires gcc built from revision 231296 or later.
+Profiling data will only become accessible once debugfs has been mounted::
+
+ mount -t debugfs none /sys/kernel/debug
+
+The following program demonstrates kcov usage from within a test program::
+
+ #include <stdio.h>
+ #include <stddef.h>
+ #include <stdint.h>
+ #include <stdlib.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+ #include <sys/ioctl.h>
+ #include <sys/mman.h>
+ #include <unistd.h>
+ #include <fcntl.h>
+
+ #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
+ #define KCOV_ENABLE _IO('c', 100)
+ #define KCOV_DISABLE _IO('c', 101)
+ #define COVER_SIZE (64<<10)
+
+ int main(int argc, char **argv)
+ {
+ int fd;
+ unsigned long *cover, n, i;
+
+ /* A single fd descriptor allows coverage collection on a single
+ * thread.
+ */
+ fd = open("/sys/kernel/debug/kcov", O_RDWR);
+ if (fd == -1)
+ perror("open"), exit(1);
+ /* Setup trace mode and trace size. */
+ if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
+ perror("ioctl"), exit(1);
+ /* Mmap buffer shared between kernel- and user-space. */
+ cover = (unsigned long*)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
+ PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ if ((void*)cover == MAP_FAILED)
+ perror("mmap"), exit(1);
+ /* Enable coverage collection on the current thread. */
+ if (ioctl(fd, KCOV_ENABLE, 0))
+ perror("ioctl"), exit(1);
+ /* Reset coverage from the tail of the ioctl() call. */
+ __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
+ /* That's the target syscall. */
+ read(-1, NULL, 0);
+ /* Read number of PCs collected. */
+ n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+ for (i = 0; i < n; i++)
+ printf("0x%lx\n", cover[i + 1]);
+ /* Disable coverage collection for the current thread. After this call
+ * coverage can be enabled for a different thread.
+ */
+ if (ioctl(fd, KCOV_DISABLE, 0))
+ perror("ioctl"), exit(1);
+ /* Free resources. */
+ if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
+ perror("munmap"), exit(1);
+ if (close(fd))
+ perror("close"), exit(1);
+ return 0;
+ }
+
+After piping the output of the program through addr2line, it looks as follows::
+
+ SyS_read
+ fs/read_write.c:562
+ __fdget_pos
+ fs/file.c:774
+ __fget_light
+ fs/file.c:746
+ __fget_light
+ fs/file.c:750
+ __fget_light
+ fs/file.c:760
+ __fdget_pos
+ fs/file.c:784
+ SyS_read
+ fs/read_write.c:562
+
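+One way to produce such a listing is to pipe the printed PCs through
+addr2line (a sketch; it assumes the test program above was built as
+``kcov_test`` and that vmlinux contains debug info)::
+
+    ./kcov_test | addr2line -f -i -e vmlinux
+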
+If a program needs to collect coverage from several threads (independently),
+it needs to open /sys/kernel/debug/kcov in each thread separately.
+
+The interface is fine-grained to allow efficient forking of test processes.
+That is, a parent process opens /sys/kernel/debug/kcov, enables trace mode,
+mmaps the coverage buffer and then forks child processes in a loop. The child
+processes only need to enable coverage (it is disabled automatically when a
+thread exits).
diff --git a/Documentation/dev-tools/kmemcheck.rst b/Documentation/dev-tools/kmemcheck.rst
new file mode 100644
index 0000000..7f3d198
--- /dev/null
+++ b/Documentation/dev-tools/kmemcheck.rst
@@ -0,0 +1,733 @@
+Getting started with kmemcheck
+==============================
+
+Vegard Nossum <vegardno@ifi.uio.no>
+
+
+Introduction
+------------
+
+kmemcheck is a debugging feature for the Linux Kernel. More specifically, it
+is a dynamic checker that detects and warns about some uses of uninitialized
+memory.
+
+Userspace programmers might be familiar with Valgrind's memcheck. The main
+difference between memcheck and kmemcheck is that memcheck works for userspace
+programs only, and kmemcheck works for the kernel only. The implementations
+are of course vastly different. Because of this, kmemcheck is not as accurate
+as memcheck, but it turns out to be good enough in practice to discover real
+programmer errors that the compiler is not able to find through static
+analysis.
+
+Enabling kmemcheck on a kernel will probably slow it down to the extent that
+the machine will not be usable for normal workloads such as an
+interactive desktop. kmemcheck will also cause the kernel to use about twice
+as much memory as normal. For this reason, kmemcheck is strictly a debugging
+feature.
+
+
+Downloading
+-----------
+
+As of version 2.6.31-rc1, kmemcheck is included in the mainline kernel.
+
+
+Configuring and compiling
+-------------------------
+
+kmemcheck only works for the x86 (both 32- and 64-bit) platform. A number of
+configuration variables must have specific settings in order for the kmemcheck
+menu to even appear in "menuconfig". These are:
+
+- ``CONFIG_CC_OPTIMIZE_FOR_SIZE=n``
+ This option is located under "General setup" / "Optimize for size".
+
+ Without this, gcc will use certain optimizations that usually lead to
+ false positive warnings from kmemcheck. An example of this is a 16-bit
+ field in a struct, where gcc may load 32 bits, then discard the upper
+ 16 bits. kmemcheck sees only the 32-bit load, and may trigger a
+ warning for the upper 16 bits (if they're uninitialized).
+
+- ``CONFIG_SLAB=y`` or ``CONFIG_SLUB=y``
+ This option is located under "General setup" / "Choose SLAB
+ allocator".
+
+- ``CONFIG_FUNCTION_TRACER=n``
+ This option is located under "Kernel hacking" / "Tracers" / "Kernel
+ Function Tracer"
+
+ When function tracing is compiled in, gcc emits a call to another
+ function at the beginning of every function. This means that when the
+ page fault handler is called, the ftrace framework will be called
+ before kmemcheck has had a chance to handle the fault. If ftrace then
+ modifies memory that was tracked by kmemcheck, the result is an
+ endless recursive page fault.
+
+- ``CONFIG_DEBUG_PAGEALLOC=n``
+ This option is located under "Kernel hacking" / "Memory Debugging"
+ / "Debug page memory allocations".
+
+In addition, I highly recommend turning on ``CONFIG_DEBUG_INFO=y``. This is also
+located under "Kernel hacking". With this, you will be able to get line number
+information from the kmemcheck warnings, which is extremely valuable in
+debugging a problem. This option is not mandatory, however, because it slows
+down the compilation process and produces a much bigger kernel image.
+
+Now the kmemcheck menu should be visible (under "Kernel hacking" / "Memory
+Debugging" / "kmemcheck: trap use of uninitialized memory"). Here follows
+a description of the kmemcheck configuration variables:
+
+- ``CONFIG_KMEMCHECK``
+ This must be enabled in order to use kmemcheck at all...
+
+- ``CONFIG_KMEMCHECK_``[``DISABLED`` | ``ENABLED`` | ``ONESHOT``]``_BY_DEFAULT``
+ This option controls the status of kmemcheck at boot-time. "Enabled"
+ will enable kmemcheck right from the start, "disabled" will boot the
+ kernel as normal (but with the kmemcheck code compiled in, so it can
+ be enabled at run-time after the kernel has booted), and "one-shot" is
+ a special mode which will turn kmemcheck off automatically after
+ detecting the first use of uninitialized memory.
+
+ If you are using kmemcheck to actively debug a problem, then you
+ probably want to choose "enabled" here.
+
+ The one-shot mode is mostly useful in automated test setups because it
+ can prevent floods of warnings and increase the chances of the machine
+ surviving in case something is really wrong. In other cases, the one-
+ shot mode could actually be counter-productive because it would turn
+ itself off at the very first error -- in the case of a false positive
+ too -- and this would get in the way of debugging the specific
+ problem you were interested in.
+
+ If you would like to use your kernel as normal, but with a chance to
+ enable kmemcheck in case of some problem, it might be a good idea to
+ choose "disabled" here. When kmemcheck is disabled, most of the run-
+ time overhead is not incurred, and the kernel will be almost as fast
+ as normal.
+
+- ``CONFIG_KMEMCHECK_QUEUE_SIZE``
+ Select the maximum number of error reports to store in an internal
+ (fixed-size) buffer. Since errors can occur virtually anywhere and in
+ any context, we need a temporary storage area which is guaranteed not
+ to generate any other page faults when accessed. The queue will be
+ emptied as soon as a tasklet may be scheduled. If the queue is full,
+ new error reports will be lost.
+
+ The default value of 64 is probably fine. If some code produces more
+ than 64 errors within an irqs-off section, then the code is likely to
+ produce many, many more, too, and these additional reports seldom give
+ any more information (the first report is usually the most valuable
+ anyway).
+
+ This number might have to be adjusted if you are not using serial
+ console or similar to capture the kernel log. If you are using the
+ "dmesg" command to save the log, then getting a lot of kmemcheck
+ warnings might overflow the kernel log itself, and the earlier reports
+ will get lost in that way instead. Try setting this to 10 or so on
+ such a setup.
+
+- ``CONFIG_KMEMCHECK_SHADOW_COPY_SHIFT``
+ Select the number of shadow bytes to save along with each entry of the
+ error-report queue. These bytes indicate what parts of an allocation
+ are initialized, uninitialized, etc. and will be displayed when an
+ error is detected to help the debugging of a particular problem.
+
+ The number entered here is actually the logarithm of the number of
+ bytes that will be saved. So if you pick for example 5 here, kmemcheck
+ will save 2^5 = 32 bytes.
+
+ The default value should be fine for debugging most problems. It also
+ fits nicely within 80 columns.
+
+- ``CONFIG_KMEMCHECK_PARTIAL_OK``
+ This option (when enabled) works around certain GCC optimizations that
+ produce 32-bit reads from 16-bit variables where the upper 16 bits are
+ thrown away afterwards.
+
+ The default value (enabled) is recommended. This may of course hide
+ some real errors, but disabling it would probably produce a lot of
+ false positives.
+
+- ``CONFIG_KMEMCHECK_BITOPS_OK``
+ This option silences warnings that would be generated for bit-field
+ accesses where not all the bits are initialized at the same time. This
+ may also hide some real bugs.
+
+ This option is probably obsolete, or it should be replaced with
+ the kmemcheck-/bitfield-annotations for the code in question. The
+ default value is therefore fine.
+
+Now compile the kernel as usual.
+
+
+How to use
+----------
+
+Booting
+~~~~~~~
+
+First some information about the command-line options. There is only one
+option specific to kmemcheck, and this is called "kmemcheck". It can be used
+to override the default mode as chosen by the ``CONFIG_KMEMCHECK_*_BY_DEFAULT``
+option. Its possible settings are:
+
+- ``kmemcheck=0`` (disabled)
+- ``kmemcheck=1`` (enabled)
+- ``kmemcheck=2`` (one-shot mode)
+
+If SLUB debugging has been enabled in the kernel, it may take precedence over
+kmemcheck in such a way that the slab caches which are under SLUB debugging
+will not be tracked by kmemcheck. In order to ensure that this doesn't happen
+(even though it shouldn't by default), use SLUB's boot option ``slub_debug``,
+like this: ``slub_debug=-``
+
+In fact, this option may also be used for fine-grained control over SLUB vs.
+kmemcheck. For example, if the command line includes
+``kmemcheck=1 slub_debug=,dentry``, then SLUB debugging will be used only
+for the "dentry" slab cache, and with kmemcheck tracking all the other
+caches. This is advanced usage, however, and is not generally recommended.
+
+
+Run-time enable/disable
+~~~~~~~~~~~~~~~~~~~~~~~
+
+When the kernel has booted, it is possible to enable or disable kmemcheck at
+run-time. WARNING: This feature is still experimental and may cause false
+positive warnings to appear. Therefore, try not to use this. If you find that
+it doesn't work properly (e.g. you see an unreasonable amount of warnings), I
+will be happy to take bug reports.
+
+Use the file ``/proc/sys/kernel/kmemcheck`` for this purpose, e.g.::
+
+ $ echo 0 > /proc/sys/kernel/kmemcheck # disables kmemcheck
+
+The numbers are the same as for the ``kmemcheck=`` command-line option.
+
+
+Debugging
+~~~~~~~~~
+
+A typical report will look something like this::
+
+ WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
+ 80000000000000000000000000000000000000000088ffff0000000000000000
+ i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+ ^
+
+ Pid: 1856, comm: ntpdate Not tainted 2.6.29-rc5 #264 945P-A
+ RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
+ RSP: 0018:ffff88003cdf7d98 EFLAGS: 00210002
+ RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
+ RDX: ffff88003e5d6018 RSI: ffff88003e5d6024 RDI: ffff88003cdf7e84
+ RBP: ffff88003cdf7db8 R08: ffff88003e5d6000 R09: 0000000000000000
+ R10: 0000000000000080 R11: 0000000000000000 R12: 000000000000000e
+ R13: ffff88003cdf7e78 R14: ffff88003d530710 R15: ffff88003d5a98c8
+ FS: 0000000000000000(0000) GS:ffff880001982000(0063) knlGS:00000
+ CS: 0010 DS: 002b ES: 002b CR0: 0000000080050033
+ CR2: ffff88003f806ea0 CR3: 000000003c036000 CR4: 00000000000006a0
+ DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+ DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
+ [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
+ [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
+ [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
+ [<ffffffff8100c7b5>] int_signal+0x12/0x17
+ [<ffffffffffffffff>] 0xffffffffffffffff
+
+The single most valuable information in this report is the RIP (or EIP on
+32-bit) value. This will help us pinpoint exactly which instruction caused
+the warning.
+
+If your kernel was compiled with ``CONFIG_DEBUG_INFO=y``, then all we have to do
+is give this address to the addr2line program, like this::
+
+ $ addr2line -e vmlinux -i ffffffff8104ede8
+ arch/x86/include/asm/string_64.h:12
+ include/asm-generic/siginfo.h:287
+ kernel/signal.c:380
+ kernel/signal.c:410
+
+The "``-e vmlinux``" tells addr2line which file to look in. **IMPORTANT:**
+This must be the vmlinux of the kernel that produced the warning in the
+first place! If not, the line number information will almost certainly be
+wrong.
+
+The "``-i``" tells addr2line to also print the line numbers of inlined
+functions. In this case, the flag was very important, because otherwise,
+it would only have printed the first line, which is just a call to
+``memcpy()``, which could be called from a thousand places in the kernel, and
+is therefore not very useful. These inlined functions would not show up in
+the stack trace above, simply because the kernel doesn't load the extra
+debugging information. This technique can of course be used with ordinary
+kernel oopses as well.
+
+In this case, it's the caller of ``memcpy()`` that is interesting, and it can be
+found in ``include/asm-generic/siginfo.h``, line 287::
+
+ 281 static inline void copy_siginfo(struct siginfo *to, struct siginfo *from)
+ 282 {
+ 283 if (from->si_code < 0)
+ 284 memcpy(to, from, sizeof(*to));
+ 285 else
+ 286 /* _sigchld is currently the largest know union member */
+ 287 memcpy(to, from, __ARCH_SI_PREAMBLE_SIZE + sizeof(from->_sifields._sigchld));
+ 288 }
+
+Since this was a read (kmemcheck usually warns about reads only, though it can
+warn about writes to unallocated or freed memory as well), it was probably the
+"from" argument which contained some uninitialized bytes. Following the chain
+of calls, we move upwards to see where "from" was allocated or initialized,
+``kernel/signal.c``, line 380::
+
+ 359 static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
+ 360 {
+ ...
+ 367 list_for_each_entry(q, &list->list, list) {
+ 368 if (q->info.si_signo == sig) {
+ 369 if (first)
+ 370 goto still_pending;
+ 371 first = q;
+ ...
+ 377 if (first) {
+ 378 still_pending:
+ 379 list_del_init(&first->list);
+ 380 copy_siginfo(info, &first->info);
+ 381 __sigqueue_free(first);
+ ...
+ 392 }
+ 393 }
+
+Here, it is ``&first->info`` that is being passed on to ``copy_siginfo()``. The
+variable ``first`` was found on a list -- passed in as the second argument to
+``collect_signal()``. We continue our journey through the stack, to figure out
+where the item on "list" was allocated or initialized. We move to line 410::
+
+ 395 static int __dequeue_signal(struct sigpending *pending, sigset_t *mask,
+ 396 siginfo_t *info)
+ 397 {
+ ...
+ 410 collect_signal(sig, pending, info);
+ ...
+ 414 }
+
+Now we need to follow the ``pending`` pointer, since that is being passed on to
+``collect_signal()`` as ``list``. At this point, we've run out of lines from the
+"addr2line" output. Not to worry, we just paste the next addresses from the
+kmemcheck stack dump, i.e.::
+
+ [<ffffffff8104f04e>] dequeue_signal+0x8e/0x170
+ [<ffffffff81050bd8>] get_signal_to_deliver+0x98/0x390
+ [<ffffffff8100b87d>] do_notify_resume+0xad/0x7d0
+ [<ffffffff8100c7b5>] int_signal+0x12/0x17
+
+ $ addr2line -e vmlinux -i ffffffff8104f04e ffffffff81050bd8 \
+ ffffffff8100b87d ffffffff8100c7b5
+ kernel/signal.c:446
+ kernel/signal.c:1806
+ arch/x86/kernel/signal.c:805
+ arch/x86/kernel/signal.c:871
+ arch/x86/kernel/entry_64.S:694
+
+Remember that since these addresses were found on the stack and not as the
+RIP value, they actually point to the _next_ instruction (they are return
+addresses). This becomes obvious when we look at the code for line 446::
+
+ 422 int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
+ 423 {
+ ...
+ 431 signr = __dequeue_signal(&tsk->signal->shared_pending,
+ 432 mask, info);
+ 433 /*
+ 434 * itimer signal ?
+ 435 *
+ 436 * itimers are process shared and we restart periodic
+ 437 * itimers in the signal delivery path to prevent DoS
+ 438 * attacks in the high resolution timer case. This is
+ 439 * compliant with the old way of self restarting
+ 440 * itimers, as the SIGALRM is a legacy signal and only
+ 441 * queued once. Changing the restart behaviour to
+ 442 * restart the timer in the signal dequeue path is
+ 443 * reducing the timer noise on heavy loaded !highres
+ 444 * systems too.
+ 445 */
+ 446 if (unlikely(signr == SIGALRM)) {
+ ...
+ 489 }
+
+So instead of looking at 446, we should be looking at 431, which is the line
+that executes just before 446. Here we see that what we are looking for is
+``&tsk->signal->shared_pending``.
+
+Our next task is to figure out which function puts items on this
+``shared_pending`` list. A crude but efficient tool is ``git grep``::
+
+ $ git grep -n 'shared_pending' kernel/
+ ...
+ kernel/signal.c:828: pending = group ? &t->signal->shared_pending : &t->pending;
+ kernel/signal.c:1339: pending = group ? &t->signal->shared_pending : &t->pending;
+ ...
+
+There were more results, but none of them were related to list operations,
+and these were the only assignments. We inspect the line numbers more closely
+and find that this is indeed where items are being added to the list::
+
+ 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 817 int group)
+ 818 {
+ ...
+ 828 pending = group ? &t->signal->shared_pending : &t->pending;
+ ...
+ 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
+ 852 (is_si_special(info) ||
+ 853 info->si_code >= 0)));
+ 854 if (q) {
+ 855 list_add_tail(&q->list, &pending->list);
+ ...
+ 890 }
+
+and::
+
+ 1309 int send_sigqueue(struct sigqueue *q, struct task_struct *t, int group)
+ 1310 {
+ ....
+ 1339 pending = group ? &t->signal->shared_pending : &t->pending;
+ 1340 list_add_tail(&q->list, &pending->list);
+ ....
+ 1347 }
+
+In the first case, the list element we are looking for, ``q``, is being
+returned from the function ``__sigqueue_alloc()``, which looks like an
+allocation function. Let's take a look at it::
+
+ 187 static struct sigqueue *__sigqueue_alloc(struct task_struct *t, gfp_t flags,
+ 188 int override_rlimit)
+ 189 {
+ 190 struct sigqueue *q = NULL;
+ 191 struct user_struct *user;
+ 192
+ 193 /*
+ 194 * We won't get problems with the target's UID changing under us
+ 195 * because changing it requires RCU be used, and if t != current, the
+ 196 * caller must be holding the RCU readlock (by way of a spinlock) and
+ 197 * we use RCU protection here
+ 198 */
+ 199 user = get_uid(__task_cred(t)->user);
+ 200 atomic_inc(&user->sigpending);
+ 201 if (override_rlimit ||
+ 202 atomic_read(&user->sigpending) <=
+ 203 t->signal->rlim[RLIMIT_SIGPENDING].rlim_cur)
+ 204 q = kmem_cache_alloc(sigqueue_cachep, flags);
+ 205 if (unlikely(q == NULL)) {
+ 206 atomic_dec(&user->sigpending);
+ 207 free_uid(user);
+ 208 } else {
+ 209 INIT_LIST_HEAD(&q->list);
+ 210 q->flags = 0;
+ 211 q->user = user;
+ 212 }
+ 213
+ 214 return q;
+ 215 }
+
+We see that this function initializes ``q->list``, ``q->flags``, and
+``q->user``. It seems that now is the time to look at the definition of
+``struct sigqueue``, e.g.::
+
+ 14 struct sigqueue {
+ 15 struct list_head list;
+ 16 int flags;
+ 17 siginfo_t info;
+ 18 struct user_struct *user;
+ 19 };
+
+And, you might remember, it was a ``memcpy()`` on ``&first->info`` that
+caused the warning, so this makes perfect sense. It also seems reasonable
+to assume that it is the caller of ``__sigqueue_alloc()`` that has the
+responsibility of filling out (initializing) this member.
+
+But just which fields of the struct were uninitialized? Let's look at
+kmemcheck's report again::
+
+ WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (ffff88003e4a2024)
+ 80000000000000000000000000000000000000000088ffff0000000000000000
+ i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+ ^
+
+These first two lines are the memory dump of the memory object itself, and
+the shadow bytemap, respectively. The memory object itself is in this case
+``&first->info``. Just beware that the start of this dump is NOT the start
+of the object itself! The position of the caret (^) corresponds with the
+address of the read (ffff88003e4a2024).
+
+The shadow bytemap dump legend is as follows:
+
+- i: initialized
+- u: uninitialized
+- a: unallocated (memory has been allocated by the slab layer, but has not
+ yet been handed off to anybody)
+- f: freed (memory has been allocated by the slab layer, but has been freed
+ by the previous owner)
+
+In order to figure out where (relative to the start of the object) the
+uninitialized memory was located, we have to look at the disassembly. For
+that, we'll need the RIP address again::
+
+ RIP: 0010:[<ffffffff8104ede8>] [<ffffffff8104ede8>] __dequeue_signal+0xc8/0x190
+
+ $ objdump -d --no-show-raw-insn vmlinux | grep -C 8 ffffffff8104ede8:
+ ffffffff8104edc8: mov %r8,0x8(%r8)
+ ffffffff8104edcc: test %r10d,%r10d
+ ffffffff8104edcf: js ffffffff8104ee88 <__dequeue_signal+0x168>
+ ffffffff8104edd5: mov %rax,%rdx
+ ffffffff8104edd8: mov $0xc,%ecx
+ ffffffff8104eddd: mov %r13,%rdi
+ ffffffff8104ede0: mov $0x30,%eax
+ ffffffff8104ede5: mov %rdx,%rsi
+ ffffffff8104ede8: rep movsl %ds:(%rsi),%es:(%rdi)
+ ffffffff8104edea: test $0x2,%al
+ ffffffff8104edec: je ffffffff8104edf0 <__dequeue_signal+0xd0>
+ ffffffff8104edee: movsw %ds:(%rsi),%es:(%rdi)
+ ffffffff8104edf0: test $0x1,%al
+ ffffffff8104edf2: je ffffffff8104edf5 <__dequeue_signal+0xd5>
+ ffffffff8104edf4: movsb %ds:(%rsi),%es:(%rdi)
+ ffffffff8104edf5: mov %r8,%rdi
+ ffffffff8104edf8: callq ffffffff8104de60 <__sigqueue_free>
+
+As expected, it's the "``rep movsl``" instruction from the ``memcpy()``
+that causes the warning. We know that ``REP MOVSL`` uses the register ``RCX``
+to count the number of remaining iterations. By taking a look at the
+register dump again (from the kmemcheck report), we can figure out how many
+bytes were left to copy::
+
+ RAX: 0000000000000030 RBX: ffff88003d4ea968 RCX: 0000000000000009
+
+By looking at the disassembly, we also see that ``%ecx`` is being loaded
+with the value ``$0xc`` just before (ffffffff8104edd8), so we are very
+lucky. Keep in mind that this is the number of iterations, not bytes. And
+since this is a "long" operation, we need to multiply by 4 to get the
+number of bytes. So this means that the uninitialized value was encountered
+at 4 * (0xc - 0x9) = 12 bytes from the start of the object.
+
+We can now try to figure out which field of the "``struct siginfo``" that
+was not initialized. This is the beginning of the struct::
+
+ 40 typedef struct siginfo {
+ 41 int si_signo;
+ 42 int si_errno;
+ 43 int si_code;
+ 44
+ 45 union {
+ ..
+ 92 } _sifields;
+ 93 } siginfo_t;
+
+On 64-bit, the int is 4 bytes long, so it must be the union member that has
+not been initialized. We can verify this using gdb::
+
+ $ gdb vmlinux
+ ...
+ (gdb) p &((struct siginfo *) 0)->_sifields
+ $1 = (union {...} *) 0x10
+
+Actually, it seems that the union member is located at offset 0x10 -- which
+means that gcc has inserted 4 bytes of padding between the members ``si_code``
+and ``_sifields``. We can now get a fuller picture of the memory dump::
+
+ _----------------------------=> si_code
+ / _--------------------=> (padding)
+ | / _------------=> _sifields(._kill._pid)
+ | | / _----=> _sifields(._kill._uid)
+ | | | /
+ -------|-------|-------|-------|
+ 80000000000000000000000000000000000000000088ffff0000000000000000
+ i i i i u u u u i i i i i i i i u u u u u u u u u u u u u u u u
+
+This allows us to realize another important fact: ``si_code`` contains the
+value 0x80. Remember that x86 is little endian, so the first 4 bytes
+"80000000" are really the number 0x00000080. With a bit of research, we
+find that this is actually the constant ``SI_KERNEL`` defined in
+``include/asm-generic/siginfo.h``::
+
+ 144 #define SI_KERNEL 0x80 /* sent by the kernel from somewhere */
+
+This macro is used in exactly one place in the x86 kernel: In ``send_signal()``
+in ``kernel/signal.c``::
+
+ 816 static int send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 817 int group)
+ 818 {
+ ...
+ 828 pending = group ? &t->signal->shared_pending : &t->pending;
+ ...
+ 851 q = __sigqueue_alloc(t, GFP_ATOMIC, (sig < SIGRTMIN &&
+ 852 (is_si_special(info) ||
+ 853 info->si_code >= 0)));
+ 854 if (q) {
+ 855 list_add_tail(&q->list, &pending->list);
+ 856 switch ((unsigned long) info) {
+ ...
+ 865 case (unsigned long) SEND_SIG_PRIV:
+ 866 q->info.si_signo = sig;
+ 867 q->info.si_errno = 0;
+ 868 q->info.si_code = SI_KERNEL;
+ 869 q->info.si_pid = 0;
+ 870 q->info.si_uid = 0;
+ 871 break;
+ ...
+ 890 }
+
+Not only does this match with the ``.si_code`` member, it also matches the place
+we found earlier when looking for where siginfo_t objects are enqueued on the
+``shared_pending`` list.
+
+So to sum up: It seems that it is the padding introduced by the compiler
+between two struct fields that is uninitialized, and this gets reported when
+we do a ``memcpy()`` on the struct. This means that we have identified a false
+positive warning.
+
+Normally, kmemcheck will not report uninitialized accesses in ``memcpy()`` calls
+when both the source and destination addresses are tracked. (Instead, we copy
+the shadow bytemap as well). In this case, the destination address clearly
+was not tracked. We can dig a little deeper into the stack trace from above::
+
+ arch/x86/kernel/signal.c:805
+ arch/x86/kernel/signal.c:871
+ arch/x86/kernel/entry_64.S:694
+
+And we clearly see that the destination siginfo object is located on the
+stack::
+
+ 782 static void do_signal(struct pt_regs *regs)
+ 783 {
+ 784 struct k_sigaction ka;
+ 785 siginfo_t info;
+ ...
+ 804 signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+ ...
+ 854 }
+
+And this ``&info`` is what eventually gets passed to ``copy_siginfo()`` as the
+destination argument.
+
+Now, even though we didn't find an actual error here, the example is still a
+good one, because it shows how one would go about to find out what the report
+was all about.
+
+
+Annotating false positives
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are a few different ways to make annotations in the source code that
+will keep kmemcheck from checking and reporting certain allocations. Here
+they are:
+
+- ``__GFP_NOTRACK_FALSE_POSITIVE``
+ This flag can be passed to ``kmalloc()`` or ``kmem_cache_alloc()``
+ (therefore also to other functions that end up calling one of
+ these) to indicate that the allocation should not be tracked
+ because it would lead to a false positive report. This is a "big
+ hammer" way of silencing kmemcheck; after all, even if the false
+ positive pertains to a particular field in a struct, for example, we
+ will now lose the ability to find (real) errors in other parts of
+ the same struct.
+
+ Example::
+
+ /* No warnings will ever trigger on accessing any part of x */
+ x = kmalloc(sizeof *x, GFP_KERNEL | __GFP_NOTRACK_FALSE_POSITIVE);
+
+- ``kmemcheck_bitfield_begin(name)``/``kmemcheck_bitfield_end(name)`` and
+ ``kmemcheck_annotate_bitfield(ptr, name)``
+ The first two of these three macros can be used inside struct
+ definitions to signal, respectively, the beginning and end of a
+ bitfield. Additionally, this will assign the bitfield a name, which
+ is given as an argument to the macros.
+
+ Having used these markers, one can later use
+ kmemcheck_annotate_bitfield() at the point of allocation, to indicate
+ which parts of the allocation are part of a bitfield.
+
+ Example::
+
+ struct foo {
+ int x;
+
+ kmemcheck_bitfield_begin(flags);
+ int flag_a:1;
+ int flag_b:1;
+ kmemcheck_bitfield_end(flags);
+
+ int y;
+ };
+
+ struct foo *x = kmalloc(sizeof *x, GFP_KERNEL);
+
+ /* No warnings will trigger on accessing the bitfield of x */
+ kmemcheck_annotate_bitfield(x, flags);
+
+ Note that ``kmemcheck_annotate_bitfield()`` can be used even before the
+ return value of ``kmalloc()`` is checked -- in other words, passing NULL
+ as the first argument is legal (and will do nothing).
+
+
+Reporting errors
+----------------
+
+As we have seen, kmemcheck will produce false positive reports. Therefore, it
+is not very wise to blindly post kmemcheck warnings to mailing lists and
+maintainers. Instead, I encourage maintainers and developers to find errors
+in their own code. If you get a warning, you can try to work around it, try
+to figure out if it's a real error or not, or simply ignore it. Most
+developers know their own code and will quickly and efficiently determine the
+root cause of a kmemcheck report. This is therefore also the most efficient
+way to work with kmemcheck.
+
+That said, we (the kmemcheck maintainers) will always be on the lookout for
+false positives that we can annotate and silence. So whatever you find,
+please drop us a note privately! Kernel configs and steps to reproduce (if
+available) are of course a great help too.
+
+Happy hacking!
+
+
+Technical description
+---------------------
+
+kmemcheck works by marking memory pages non-present. This means that whenever
+somebody attempts to access the page, a page fault is generated. The page
+fault handler notices that the page was in fact only hidden, and so it calls
+on the kmemcheck code to make further investigations.
+
+When the investigations are completed, kmemcheck "shows" the page by marking
+it present (as it would be under normal circumstances). This way, the
+interrupted code can continue as usual.
+
+But after the instruction has been executed, we should hide the page again, so
+that we can catch the next access too! Now kmemcheck makes use of a debugging
+feature of the processor, namely single-stepping. When the processor has
+finished the one instruction that generated the memory access, a debug
+exception is raised. From here, we simply hide the page again and continue
+execution, this time with the single-stepping feature turned off.
+
+kmemcheck requires some assistance from the memory allocator in order to work.
+The memory allocator needs to
+
+ 1. Tell kmemcheck about newly allocated pages and pages that are about to
+ be freed. This allows kmemcheck to set up and tear down the shadow memory
+ for the pages in question. The shadow memory stores the status of each
+ byte in the allocation proper, e.g. whether it is initialized or
+ uninitialized.
+
+ 2. Tell kmemcheck which parts of memory should be marked uninitialized.
+ There are actually a few more states, such as "not yet allocated" and
+ "recently freed".
+
+If a slab cache is set up using the SLAB_NOTRACK flag, it will never return
+memory that can take page faults because of kmemcheck.
+
+If a slab cache is NOT set up using the SLAB_NOTRACK flag, callers can still
+request memory with the __GFP_NOTRACK or __GFP_NOTRACK_FALSE_POSITIVE flags.
+This does not prevent the page faults from occurring, but it marks the
+object in question as initialized so that no warnings will ever be
+produced for this object.
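+
+As a rough illustration of both opt-out mechanisms (the cache and object
+names are made up for the example)::
+
+    /* A cache whose objects kmemcheck will never track: */
+    foo_cachep = kmem_cache_create("foo", sizeof(struct foo), 0,
+                                   SLAB_NOTRACK, NULL);
+
+    /* Opting out a single allocation from a tracked cache: */
+    p = kmalloc(sizeof(*p), GFP_KERNEL | __GFP_NOTRACK);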
+
+Currently, the SLAB and SLUB allocators are supported by kmemcheck.
diff --git a/Documentation/dev-tools/kmemleak.rst b/Documentation/dev-tools/kmemleak.rst
new file mode 100644
index 0000000..b2391b8
--- /dev/null
+++ b/Documentation/dev-tools/kmemleak.rst
@@ -0,0 +1,219 @@
+Kernel Memory Leak Detector
+===========================
+
+Kmemleak provides a way of detecting possible kernel memory leaks in a
+way similar to a tracing garbage collector
+(https://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tracing_garbage_collectors),
+with the difference that the orphan objects are not freed but only
+reported via /sys/kernel/debug/kmemleak. A similar method is used by the
+Valgrind tool (``memcheck --leak-check``) to detect the memory leaks in
+user-space applications.
+
+Kmemleak is supported on x86, arm, powerpc, sparc, sh, microblaze, ppc,
+mips, s390, metag and tile.
+
+Usage
+-----
+
+CONFIG_DEBUG_KMEMLEAK in "Kernel hacking" has to be enabled. A kernel
+thread scans the memory every 10 minutes (by default) and prints the
+number of new unreferenced objects found. To display the details of all
+the possible memory leaks::
+
+ # mount -t debugfs nodev /sys/kernel/debug/
+ # cat /sys/kernel/debug/kmemleak
+
+To trigger an intermediate memory scan::
+
+ # echo scan > /sys/kernel/debug/kmemleak
+
+To clear the list of all current possible memory leaks::
+
+ # echo clear > /sys/kernel/debug/kmemleak
+
+New leaks will then come up upon reading ``/sys/kernel/debug/kmemleak``
+again.
+
+Note that the orphan objects are listed in the order they were allocated
+and one object at the beginning of the list may cause other subsequent
+objects to be reported as orphan.
+
+Memory scanning parameters can be modified at run-time by writing to the
+``/sys/kernel/debug/kmemleak`` file. The following parameters are supported:
+
+- off
+ disable kmemleak (irreversible)
+- stack=on
+ enable the task stacks scanning (default)
+- stack=off
+ disable the tasks stacks scanning
+- scan=on
+ start the automatic memory scanning thread (default)
+- scan=off
+ stop the automatic memory scanning thread
+- scan=<secs>
+ set the automatic memory scanning period in seconds
+ (default 600, 0 to stop the automatic scanning)
+- scan
+ trigger a memory scan
+- clear
+ clear list of current memory leak suspects, done by
+ marking all current reported unreferenced objects grey,
+ or free all kmemleak objects if kmemleak has been disabled.
+- dump=<addr>
+ dump information about the object found at <addr>
+
+Kmemleak can also be disabled at boot-time by passing ``kmemleak=off`` on
+the kernel command line.
+
+Memory may be allocated or freed before kmemleak is initialised and
+these actions are stored in an early log buffer. The size of this buffer
+is configured via the CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE option.
+
+If CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is enabled, kmemleak is
+disabled by default. Passing ``kmemleak=on`` on the kernel command
+line enables it.
+
+Basic Algorithm
+---------------
+
+The memory allocations via :c:func:`kmalloc`, :c:func:`vmalloc`,
+:c:func:`kmem_cache_alloc` and
+friends are traced and the pointers, together with additional
+information like size and stack trace, are stored in an rbtree.
+The corresponding freeing function calls are tracked and the pointers
+removed from the kmemleak data structures.
+
+An allocated block of memory is considered orphan if no pointer to its
+start address or to any location inside the block can be found by
+scanning the memory (including saved registers). This means that there
+might be no way for the kernel to pass the address of the allocated
+block to a freeing function and therefore the block is considered a
+memory leak.
+
+The scanning algorithm steps:
+
+ 1. mark all objects as white (remaining white objects will later be
+ considered orphan)
+ 2. scan the memory starting with the data section and stacks, checking
+ the values against the addresses stored in the rbtree. If
+ a pointer to a white object is found, the object is added to the
+ gray list
+ 3. scan the gray objects for matching addresses (some white objects
+ can become gray and be added at the end of the gray list) until the
+ gray set is finished
+ 4. the remaining white objects are considered orphan and reported via
+ /sys/kernel/debug/kmemleak
+
+Some allocated memory blocks have pointers stored in the kernel's
+internal data structures and they cannot be detected as orphans. To
+avoid this, kmemleak can also store the number of values pointing to an
+address inside the block address range that need to be found so that the
+block is not considered a leak. One example is __vmalloc().
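+
+As a rough userspace model of the coloring scheme above (kmemleak itself
+keeps its objects in an rbtree and scans kernel memory; everything here is
+simplified for illustration)::
+
+    enum color { WHITE, GRAY };
+
+    struct object {
+        uintptr_t start;
+        size_t size;
+        enum color color;       /* every object starts out WHITE */
+    };
+
+    /* Scan a memory range word by word; any value that falls inside a
+     * WHITE object's address range turns that object GRAY (steps 2-3).
+     */
+    static void scan_range(const uintptr_t *p, size_t nwords,
+                           struct object *objs, size_t nobjs)
+    {
+        size_t i, j;
+
+        for (i = 0; i < nwords; i++)
+            for (j = 0; j < nobjs; j++)
+                if (objs[j].color == WHITE &&
+                    p[i] >= objs[j].start &&
+                    p[i] <  objs[j].start + objs[j].size)
+                    objs[j].color = GRAY;
+    }
+
+    /* The data sections and stacks are scanned first, then every GRAY
+     * object is scanned in turn until no new GRAY objects appear; any
+     * object still WHITE at the end is reported as a possible leak.
+     */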
+
+Testing specific sections with kmemleak
+---------------------------------------
+
+Upon initial bootup your /sys/kernel/debug/kmemleak output page may be
+quite extensive. This can also be the case if you have very buggy code
+when doing development. To work around these situations you can use the
+'clear' command to clear all reported unreferenced objects from the
+/sys/kernel/debug/kmemleak output. By issuing a 'scan' after a 'clear'
+you can find new unreferenced objects; this should help with testing
+specific sections of code.
+
+To test a critical section on demand with a clean kmemleak do::
+
+ # echo clear > /sys/kernel/debug/kmemleak
+ ... test your kernel or modules ...
+ # echo scan > /sys/kernel/debug/kmemleak
+
+Then as usual to get your report with::
+
+ # cat /sys/kernel/debug/kmemleak
+
+Freeing kmemleak internal objects
+---------------------------------
+
+To allow access to previously found memory leaks after kmemleak has been
+disabled by the user or due to a fatal error, internal kmemleak objects
+won't be freed when kmemleak is disabled, and those objects may occupy
+a large part of physical memory.
+
+In this situation, you may reclaim memory with::
+
+ # echo clear > /sys/kernel/debug/kmemleak
+
+Kmemleak API
+------------
+
+See the include/linux/kmemleak.h header for the function prototypes.
+
+- ``kmemleak_init`` - initialize kmemleak
+- ``kmemleak_alloc`` - notify of a memory block allocation
+- ``kmemleak_alloc_percpu`` - notify of a percpu memory block allocation
+- ``kmemleak_free`` - notify of a memory block freeing
+- ``kmemleak_free_part`` - notify of a partial memory block freeing
+- ``kmemleak_free_percpu`` - notify of a percpu memory block freeing
+- ``kmemleak_update_trace`` - update object allocation stack trace
+- ``kmemleak_not_leak`` - mark an object as not a leak
+- ``kmemleak_ignore`` - do not scan or report an object as leak
+- ``kmemleak_scan_area`` - add scan areas inside a memory block
+- ``kmemleak_no_scan`` - do not scan a memory block
+- ``kmemleak_erase`` - erase an old value in a pointer variable
+- ``kmemleak_alloc_recursive`` - as kmemleak_alloc but checks the recursiveness
+- ``kmemleak_free_recursive`` - as kmemleak_free but checks the recursiveness
+
+The following functions take a physical address as the object pointer
+and only perform the corresponding action if the address has a lowmem
+mapping:
+
+- ``kmemleak_alloc_phys``
+- ``kmemleak_free_part_phys``
+- ``kmemleak_not_leak_phys``
+- ``kmemleak_ignore_phys``
+
+Dealing with false positives/negatives
+--------------------------------------
+
+The false negatives are real memory leaks (orphan objects) but not
+reported by kmemleak because values found during the memory scanning
+point to such objects. To reduce the number of false negatives, kmemleak
+provides the kmemleak_ignore, kmemleak_scan_area, kmemleak_no_scan and
+kmemleak_erase functions (see above). The task stacks also increase the
+amount of false negatives and their scanning is not enabled by default.
+
+The false positives are objects wrongly reported as being memory leaks
+(orphan). For objects known not to be leaks, kmemleak provides the
+kmemleak_not_leak function. The kmemleak_ignore could also be used if
+the memory block is known not to contain other pointers and it will no
+longer be scanned.
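+
+For example, annotating an allocation that is known not to be a leak could
+look like this (``obj`` and its allocation are made up for the example)::
+
+    obj = kmalloc(sizeof(*obj), GFP_KERNEL);
+    /* The only reference is kept somewhere kmemleak does not scan, so
+     * silence the report for this object:
+     */
+    kmemleak_not_leak(obj);
+
+    /* If the object is also known to contain no pointers of interest: */
+    kmemleak_ignore(obj);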
+
+Some of the reported leaks are only transient, especially on SMP
+systems, because of pointers temporarily stored in CPU registers or
+stacks. Kmemleak defines MSECS_MIN_AGE (defaulting to 1000) representing
+the minimum age of an object to be reported as a memory leak.
+
+Limitations and Drawbacks
+-------------------------
+
+The main drawback is the reduced performance of memory allocation and
+freeing. To avoid other penalties, the memory scanning is only performed
+when the /sys/kernel/debug/kmemleak file is read. Anyway, this tool is
+intended for debugging purposes where the performance might not be the
+most important requirement.
+
+To keep the algorithm simple, kmemleak scans for values pointing to any
+address inside a block's address range. This may lead to an increased
+number of false negatives. However, it is likely that a real memory leak
+will eventually become visible.
+
+Another source of false negatives is the data stored in non-pointer
+values. In a future version, kmemleak could only scan the pointer
+members in the allocated structures. This feature would solve many of
+the false negative cases described above.
+
+The tool can report false positives. These are cases where an allocated
+block doesn't need to be freed (some cases in the init_call functions),
+the pointer is calculated by other methods than the usual container_of
+macro or the pointer is stored in a location not scanned by kmemleak.
+
+Page allocations and ioremap are not tracked.
diff --git a/Documentation/dev-tools/sparse.rst b/Documentation/dev-tools/sparse.rst
new file mode 100644
index 0000000..8c250e8
--- /dev/null
+++ b/Documentation/dev-tools/sparse.rst
@@ -0,0 +1,117 @@
+.. Copyright 2004 Linus Torvalds
+.. Copyright 2004 Pavel Machek <pavel@ucw.cz>
+.. Copyright 2006 Bob Copeland <me@bobcopeland.com>
+
+Sparse
+======
+
+Sparse is a semantic checker for C programs; it can be used to find a
+number of potential problems with kernel code. See
+https://lwn.net/Articles/689907/ for an overview of sparse; this document
+contains some kernel-specific sparse information.
+
+
+Using sparse for typechecking
+-----------------------------
+
+"__bitwise" is a type attribute, so you have to do something like this::
+
+ typedef int __bitwise pm_request_t;
+
+ enum pm_request {
+ PM_SUSPEND = (__force pm_request_t) 1,
+ PM_RESUME = (__force pm_request_t) 2
+ };
+
+which makes PM_SUSPEND and PM_RESUME "bitwise" integers (the "__force" is
+there because sparse will complain about casting to/from a bitwise type,
+but in this case we really _do_ want to force the conversion). And because
+the enum values are all the same type, now "enum pm_request" will be that
+type too.
+
+And with gcc, all the "__bitwise"/"__force" stuff goes away, and it all
+ends up looking just like integers to gcc.
+
+Quite frankly, you don't need the enum there. The above all really just
+boils down to one special "int __bitwise" type.
+
+So the simpler way is to just do::
+
+ typedef int __bitwise pm_request_t;
+
+ #define PM_SUSPEND ((__force pm_request_t) 1)
+ #define PM_RESUME ((__force pm_request_t) 2)
+
+and you now have all the infrastructure needed for strict typechecking.
+
+One small note: the constant integer "0" is special. You can use a
+constant zero as a bitwise integer type without sparse ever complaining.
+This is because "bitwise" (as the name implies) was designed for making
+sure that bitwise types don't get mixed up (little-endian vs big-endian
+vs cpu-endian vs whatever), and there the constant "0" really _is_
+special.
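+
+Putting this together, roughly the following is what sparse would flag,
+using the ``pm_request_t`` example from above (the warning texts are
+paraphrased)::
+
+    static pm_request_t current_request;
+
+    void example(void)
+    {
+        current_request = PM_SUSPEND;   /* OK: same bitwise type */
+        current_request = 0;            /* OK: the constant 0 is special */
+        current_request = 1;            /* warning: incorrect type in assignment */
+    }
+
+    int as_plain_int(void)
+    {
+        return current_request;         /* warning: incorrect type in return expression */
+    }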
+
+__bitwise__ - to be used for relatively compact stuff (gfp_t, etc.) that
+is mostly warning-free and is supposed to stay that way. Warnings will
+be generated without __CHECK_ENDIAN__.
+
+__bitwise - noisy stuff; in particular, __le*/__be* are that. We really
+don't want to drown in noise unless we'd explicitly asked for it.
+
+Using sparse for lock checking
+------------------------------
+
+The following macros are undefined for gcc and defined during a sparse
+run to use the "context" tracking feature of sparse, applied to
+locking. These annotations tell sparse when a lock is held, with
+regard to the annotated function's entry and exit.
+
+__must_hold - The specified lock is held on function entry and exit.
+
+__acquires - The specified lock is held on function exit, but not entry.
+
+__releases - The specified lock is held on function entry, but not exit.
+
+If the function enters and exits without the lock held, acquiring and
+releasing the lock inside the function in a balanced way, no
+annotation is needed. The three annotations above are for cases where
+sparse would otherwise report a context imbalance.
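+
+A short sketch of how these annotations are typically placed; ``struct foo``
+and the functions are invented for the example::
+
+    struct foo {
+        spinlock_t lock;
+        int counter;
+    };
+
+    /* The lock is held on both entry and exit: */
+    static void foo_bump(struct foo *f) __must_hold(&f->lock)
+    {
+        f->counter++;
+    }
+
+    /* The lock is taken on behalf of the caller: */
+    static void foo_lock(struct foo *f) __acquires(&f->lock)
+    {
+        spin_lock(&f->lock);
+    }
+
+    /* The lock is dropped on behalf of the caller: */
+    static void foo_unlock(struct foo *f) __releases(&f->lock)
+    {
+        spin_unlock(&f->lock);
+    }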
+
+Getting sparse
+--------------
+
+You can get the latest released versions from the Sparse homepage at
+https://sparse.wiki.kernel.org/index.php/Main_Page
+
+Alternatively, you can get snapshots of the latest development version
+of sparse using git to clone::
+
+ git://git.kernel.org/pub/scm/devel/sparse/sparse.git
+
+DaveJ has hourly generated tarballs of the git tree available at::
+
+ http://www.codemonkey.org.uk/projects/git-snapshots/sparse/
+
+
+Once you have it, just do::
+
+ make
+ make install
+
+as a regular user, and it will install sparse in your ~/bin directory.
+
+Using sparse
+------------
+
+Do a kernel make with "make C=1" to run sparse on all the C files that get
+recompiled, or use "make C=2" to run sparse on the files whether they need to
+be recompiled or not. The latter is a fast way to check the whole tree if you
+have already built it.
+
+The optional make variable CF can be used to pass arguments to sparse. The
+build system passes -Wbitwise to sparse automatically. To perform endianness
+checks, you may define __CHECK_ENDIAN__::
+
+ make C=2 CF="-D__CHECK_ENDIAN__"
+
+These checks are disabled by default as they generate a host of warnings.
diff --git a/Documentation/dev-tools/tools.rst b/Documentation/dev-tools/tools.rst
new file mode 100644
index 0000000..824ae8e
--- /dev/null
+++ b/Documentation/dev-tools/tools.rst
@@ -0,0 +1,25 @@
+================================
+Development tools for the kernel
+================================
+
+This document is a collection of documents about development tools that can
+be used to work on the kernel. For now, the documents have been pulled
+together without any significant effort to integrate them into a coherent
+whole; patches welcome!
+
+.. class:: toc-title
+
+ Table of contents
+
+.. toctree::
+ :maxdepth: 2
+
+ coccinelle
+ sparse
+ kcov
+ gcov
+ kasan
+ ubsan
+ kmemleak
+ kmemcheck
+ gdb-kernel-debugging
diff --git a/Documentation/dev-tools/ubsan.rst b/Documentation/dev-tools/ubsan.rst
new file mode 100644
index 0000000..655e6b6
--- /dev/null
+++ b/Documentation/dev-tools/ubsan.rst
@@ -0,0 +1,88 @@
+The Undefined Behavior Sanitizer - UBSAN
+========================================
+
+UBSAN is a runtime undefined behaviour checker.
+
+UBSAN uses compile-time instrumentation to catch undefined behavior (UB).
+The compiler inserts code that performs certain kinds of checks before
+operations that may cause UB. If a check fails (i.e. UB is detected), a
+__ubsan_handle_* function is called to print an error message.
+
+GCC has had this feature since 4.9.x [1_] (see the ``-fsanitize=undefined``
+option and its suboptions). GCC 5.x has more checkers implemented [2_].
+
+Report example
+--------------
+
+::
+
+ ================================================================================
+ UBSAN: Undefined behaviour in ../include/linux/bitops.h:110:33
+ shift exponent 32 is to large for 32-bit type 'unsigned int'
+ CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc1+ #26
+ 0000000000000000 ffffffff82403cc8 ffffffff815e6cd6 0000000000000001
+ ffffffff82403cf8 ffffffff82403ce0 ffffffff8163a5ed 0000000000000020
+ ffffffff82403d78 ffffffff8163ac2b ffffffff815f0001 0000000000000002
+ Call Trace:
+ [<ffffffff815e6cd6>] dump_stack+0x45/0x5f
+ [<ffffffff8163a5ed>] ubsan_epilogue+0xd/0x40
+ [<ffffffff8163ac2b>] __ubsan_handle_shift_out_of_bounds+0xeb/0x130
+ [<ffffffff815f0001>] ? radix_tree_gang_lookup_slot+0x51/0x150
+ [<ffffffff8173c586>] _mix_pool_bytes+0x1e6/0x480
+ [<ffffffff83105653>] ? dmi_walk_early+0x48/0x5c
+ [<ffffffff8173c881>] add_device_randomness+0x61/0x130
+ [<ffffffff83105b35>] ? dmi_save_one_device+0xaa/0xaa
+ [<ffffffff83105653>] dmi_walk_early+0x48/0x5c
+ [<ffffffff831066ae>] dmi_scan_machine+0x278/0x4b4
+ [<ffffffff8111d58a>] ? vprintk_default+0x1a/0x20
+ [<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
+ [<ffffffff830b2240>] setup_arch+0x405/0xc2c
+ [<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
+ [<ffffffff830ae053>] start_kernel+0x83/0x49a
+ [<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
+ [<ffffffff830ad386>] x86_64_start_reservations+0x2a/0x2c
+ [<ffffffff830ad4f3>] x86_64_start_kernel+0x16b/0x17a
+ ================================================================================
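+
+The report above comes from a shift whose exponent can reach the width of
+the type; a simplified illustration (not the exact kernel code involved)
+would be::
+
+    unsigned int rotate_left(unsigned int x, unsigned int n)
+    {
+        /* Undefined behaviour when n is 0 (right shift by 32) or when
+         * n >= 32 (left shift by 32 or more); UBSAN reports the
+         * out-of-range shift exponent at run time.
+         */
+        return (x << n) | (x >> (32 - n));
+    }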
+
+Usage
+-----
+
+To enable UBSAN configure kernel with::
+
+ CONFIG_UBSAN=y
+
+and to check the entire kernel::
+
+ CONFIG_UBSAN_SANITIZE_ALL=y
+
+To enable instrumentation for specific files or directories, add a line
+similar to the following to the respective kernel Makefile:
+
+- For a single file (e.g. main.o)::
+
+ UBSAN_SANITIZE_main.o := y
+
+- For all files in one directory::
+
+ UBSAN_SANITIZE := y
+
+To exclude files from being instrumented even if
+``CONFIG_UBSAN_SANITIZE_ALL=y``, use::
+
+ UBSAN_SANITIZE_main.o := n
+
+and::
+
+ UBSAN_SANITIZE := n
+
+Detection of unaligned accesses is controlled through a separate option,
+CONFIG_UBSAN_ALIGNMENT. It is off by default on architectures that support
+unaligned accesses (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y). One can
+still enable it in the config, just note that it will produce a lot of UBSAN
+reports.
+
+References
+----------
+
+.. _1: https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Debugging-Options.html
+.. _2: https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
diff --git a/Documentation/development-process/1.Intro b/Documentation/development-process/1.Intro
deleted file mode 100644
index 9b61448..0000000
--- a/Documentation/development-process/1.Intro
+++ /dev/null
@@ -1,274 +0,0 @@
-1: A GUIDE TO THE KERNEL DEVELOPMENT PROCESS
-
-The purpose of this document is to help developers (and their managers)
-work with the development community with a minimum of frustration. It is
-an attempt to document how this community works in a way which is
-accessible to those who are not intimately familiar with Linux kernel
-development (or, indeed, free software development in general). While
-there is some technical material here, this is very much a process-oriented
-discussion which does not require a deep knowledge of kernel programming to
-understand.
-
-
-1.1: EXECUTIVE SUMMARY
-
-The rest of this section covers the scope of the kernel development process
-and the kinds of frustrations that developers and their employers can
-encounter there. There are a great many reasons why kernel code should be
-merged into the official ("mainline") kernel, including automatic
-availability to users, community support in many forms, and the ability to
-influence the direction of kernel development. Code contributed to the
-Linux kernel must be made available under a GPL-compatible license.
-
-Section 2 introduces the development process, the kernel release cycle, and
-the mechanics of the merge window. The various phases in the patch
-development, review, and merging cycle are covered. There is some
-discussion of tools and mailing lists. Developers wanting to get started
-with kernel development are encouraged to track down and fix bugs as an
-initial exercise.
-
-Section 3 covers early-stage project planning, with an emphasis on
-involving the development community as soon as possible.
-
-Section 4 is about the coding process; several pitfalls which have been
-encountered by other developers are discussed. Some requirements for
-patches are covered, and there is an introduction to some of the tools
-which can help to ensure that kernel patches are correct.
-
-Section 5 talks about the process of posting patches for review. To be
-taken seriously by the development community, patches must be properly
-formatted and described, and they must be sent to the right place.
-Following the advice in this section should help to ensure the best
-possible reception for your work.
-
-Section 6 covers what happens after posting patches; the job is far from
-done at that point. Working with reviewers is a crucial part of the
-development process; this section offers a number of tips on how to avoid
-problems at this important stage. Developers are cautioned against
-assuming that the job is done when a patch is merged into the mainline.
-
-Section 7 introduces a couple of "advanced" topics: managing patches with
-git and reviewing patches posted by others.
-
-Section 8 concludes the document with pointers to sources for more
-information on kernel development.
-
-
-1.2: WHAT THIS DOCUMENT IS ABOUT
-
-The Linux kernel, at over 8 million lines of code and well over 1000
-contributors to each release, is one of the largest and most active free
-software projects in existence. Since its humble beginning in 1991, this
-kernel has evolved into a best-of-breed operating system component which
-runs on pocket-sized digital music players, desktop PCs, the largest
-supercomputers in existence, and all types of systems in between. It is a
-robust, efficient, and scalable solution for almost any situation.
-
-With the growth of Linux has come an increase in the number of developers
-(and companies) wishing to participate in its development. Hardware
-vendors want to ensure that Linux supports their products well, making
-those products attractive to Linux users. Embedded systems vendors, who
-use Linux as a component in an integrated product, want Linux to be as
-capable and well-suited to the task at hand as possible. Distributors and
-other software vendors who base their products on Linux have a clear
-interest in the capabilities, performance, and reliability of the Linux
-kernel. And end users, too, will often wish to change Linux to make it
-better suit their needs.
-
-One of the most compelling features of Linux is that it is accessible to
-these developers; anybody with the requisite skills can improve Linux and
-influence the direction of its development. Proprietary products cannot
-offer this kind of openness, which is a characteristic of the free software
-process. But, if anything, the kernel is even more open than most other
-free software projects. A typical three-month kernel development cycle can
-involve over 1000 developers working for more than 100 different companies
-(or for no company at all).
-
-Working with the kernel development community is not especially hard. But,
-that notwithstanding, many potential contributors have experienced
-difficulties when trying to do kernel work. The kernel community has
-evolved its own distinct ways of operating which allow it to function
-smoothly (and produce a high-quality product) in an environment where
-thousands of lines of code are being changed every day. So it is not
-surprising that Linux kernel development process differs greatly from
-proprietary development methods.
-
-The kernel's development process may come across as strange and
-intimidating to new developers, but there are good reasons and solid
-experience behind it. A developer who does not understand the kernel
-community's ways (or, worse, who tries to flout or circumvent them) will
-have a frustrating experience in store. The development community, while
-being helpful to those who are trying to learn, has little time for those
-who will not listen or who do not care about the development process.
-
-It is hoped that those who read this document will be able to avoid that
-frustrating experience. There is a lot of material here, but the effort
-involved in reading it will be repaid in short order. The development
-community is always in need of developers who will help to make the kernel
-better; the following text should help you - or those who work for you -
-join our community.
-
-
-1.3: CREDITS
-
-This document was written by Jonathan Corbet, corbet@lwn.net. It has been
-improved by comments from Johannes Berg, James Berry, Alex Chiang, Roland
-Dreier, Randy Dunlap, Jake Edge, Jiri Kosina, Matt Mackall, Arthur Marsh,
-Amanda McPherson, Andrew Morton, Andrew Price, Tsugikazu Shibata, and
-Jochen Voß.
-
-This work was supported by the Linux Foundation; thanks especially to
-Amanda McPherson, who saw the value of this effort and made it all happen.
-
-
-1.4: THE IMPORTANCE OF GETTING CODE INTO THE MAINLINE
-
-Some companies and developers occasionally wonder why they should bother
-learning how to work with the kernel community and get their code into the
-mainline kernel (the "mainline" being the kernel maintained by Linus
-Torvalds and used as a base by Linux distributors). In the short term,
-contributing code can look like an avoidable expense; it seems easier to
-just keep the code separate and support users directly. The truth of the
-matter is that keeping code separate ("out of tree") is a false economy.
-
-As a way of illustrating the costs of out-of-tree code, here are a few
-relevant aspects of the kernel development process; most of these will be
-discussed in greater detail later in this document. Consider:
-
-- Code which has been merged into the mainline kernel is available to all
- Linux users. It will automatically be present on all distributions which
- enable it. There is no need for driver disks, downloads, or the hassles
- of supporting multiple versions of multiple distributions; it all just
- works, for the developer and for the user. Incorporation into the
- mainline solves a large number of distribution and support problems.
-
-- While kernel developers strive to maintain a stable interface to user
- space, the internal kernel API is in constant flux. The lack of a stable
- internal interface is a deliberate design decision; it allows fundamental
- improvements to be made at any time and results in higher-quality code.
- But one result of that policy is that any out-of-tree code requires
- constant upkeep if it is to work with new kernels. Maintaining
- out-of-tree code requires significant amounts of work just to keep that
- code working.
-
- Code which is in the mainline, instead, does not require this work as the
- result of a simple rule requiring any developer who makes an API change
- to also fix any code that breaks as the result of that change. So code
- which has been merged into the mainline has significantly lower
- maintenance costs.
-
-- Beyond that, code which is in the kernel will often be improved by other
- developers. Surprising results can come from empowering your user
- community and customers to improve your product.
-
-- Kernel code is subjected to review, both before and after merging into
- the mainline. No matter how strong the original developer's skills are,
- this review process invariably finds ways in which the code can be
- improved. Often review finds severe bugs and security problems. This is
- especially true for code which has been developed in a closed
- environment; such code benefits strongly from review by outside
- developers. Out-of-tree code is lower-quality code.
-
-- Participation in the development process is your way to influence the
- direction of kernel development. Users who complain from the sidelines
- are heard, but active developers have a stronger voice - and the ability
- to implement changes which make the kernel work better for their needs.
-
-- When code is maintained separately, the possibility that a third party
- will contribute a different implementation of a similar feature always
- exists. Should that happen, getting your code merged will become much
- harder - to the point of impossibility. Then you will be faced with the
- unpleasant alternatives of either (1) maintaining a nonstandard feature
- out of tree indefinitely, or (2) abandoning your code and migrating your
- users over to the in-tree version.
-
-- Contribution of code is the fundamental action which makes the whole
- process work. By contributing your code you can add new functionality to
- the kernel and provide capabilities and examples which are of use to
- other kernel developers. If you have developed code for Linux (or are
- thinking about doing so), you clearly have an interest in the continued
- success of this platform; contributing code is one of the best ways to
- help ensure that success.
-
-All of the reasoning above applies to any out-of-tree kernel code,
-including code which is distributed in proprietary, binary-only form.
-There are, however, additional factors which should be taken into account
-before considering any sort of binary-only kernel code distribution. These
-include:
-
-- The legal issues around the distribution of proprietary kernel modules
- are cloudy at best; quite a few kernel copyright holders believe that
- most binary-only modules are derived products of the kernel and that, as
- a result, their distribution is a violation of the GNU General Public
- license (about which more will be said below). Your author is not a
- lawyer, and nothing in this document can possibly be considered to be
- legal advice. The true legal status of closed-source modules can only be
- determined by the courts. But the uncertainty which haunts those modules
- is there regardless.
-
-- Binary modules greatly increase the difficulty of debugging kernel
- problems, to the point that most kernel developers will not even try. So
- the distribution of binary-only modules will make it harder for your
- users to get support from the community.
-
-- Support is also harder for distributors of binary-only modules, who must
- provide a version of the module for every distribution and every kernel
- version they wish to support. Dozens of builds of a single module can
- be required to provide reasonably comprehensive coverage, and your users
- will have to upgrade your module separately every time they upgrade their
- kernel.
-
-- Everything that was said above about code review applies doubly to
- closed-source code. Since this code is not available at all, it cannot
- have been reviewed by the community and will, beyond doubt, have serious
- problems.
-
-Makers of embedded systems, in particular, may be tempted to disregard much
-of what has been said in this section in the belief that they are shipping
-a self-contained product which uses a frozen kernel version and requires no
-more development after its release. This argument misses the value of
-widespread code review and the value of allowing your users to add
-capabilities to your product. But these products, too, have a limited
-commercial life, after which a new version must be released. At that
-point, vendors whose code is in the mainline and well maintained will be
-much better positioned to get the new product ready for market quickly.
-
-
-1.5: LICENSING
-
-Code is contributed to the Linux kernel under a number of licenses, but all
-code must be compatible with version 2 of the GNU General Public License
-(GPLv2), which is the license covering the kernel distribution as a whole.
-In practice, that means that all code contributions are covered either by
-GPLv2 (with, optionally, language allowing distribution under later
-versions of the GPL) or the three-clause BSD license. Any contributions
-which are not covered by a compatible license will not be accepted into the
-kernel.
-
-Copyright assignments are not required (or requested) for code contributed
-to the kernel. All code merged into the mainline kernel retains its
-original ownership; as a result, the kernel now has thousands of owners.
-
-One implication of this ownership structure is that any attempt to change
-the licensing of the kernel is doomed to almost certain failure. There are
-few practical scenarios where the agreement of all copyright holders could
-be obtained (or their code removed from the kernel). So, in particular,
-there is no prospect of a migration to version 3 of the GPL in the
-foreseeable future.
-
-It is imperative that all code contributed to the kernel be legitimately
-free software. For that reason, code from anonymous (or pseudonymous)
-contributors will not be accepted. All contributors are required to "sign
-off" on their code, stating that the code can be distributed with the
-kernel under the GPL. Code which has not been licensed as free software by
-its owner, or which risks creating copyright-related problems for the
-kernel (such as code which derives from reverse-engineering efforts lacking
-proper safeguards) cannot be contributed.
-
-Questions about copyright-related issues are common on Linux development
-mailing lists. Such questions will normally receive no shortage of
-answers, but one should bear in mind that the people answering those
-questions are not lawyers and cannot provide legal advice. If you have
-legal questions relating to Linux source code, there is no substitute for
-talking with a lawyer who understands this field. Relying on answers
-obtained on technical mailing lists is a risky affair.
diff --git a/Documentation/development-process/1.Intro.rst b/Documentation/development-process/1.Intro.rst
new file mode 100644
index 0000000..22642b3
--- /dev/null
+++ b/Documentation/development-process/1.Intro.rst
@@ -0,0 +1,266 @@
+Introduction
+============
+
+Executive summary
+-----------------
+
+The rest of this section covers the scope of the kernel development process
+and the kinds of frustrations that developers and their employers can
+encounter there. There are a great many reasons why kernel code should be
+merged into the official ("mainline") kernel, including automatic
+availability to users, community support in many forms, and the ability to
+influence the direction of kernel development. Code contributed to the
+Linux kernel must be made available under a GPL-compatible license.
+
+:ref:`development_process` introduces the development process, the kernel
+release cycle, and the mechanics of the merge window. The various phases in
+the patch development, review, and merging cycle are covered. There is some
+discussion of tools and mailing lists. Developers wanting to get started
+with kernel development are encouraged to track down and fix bugs as an
+initial exercise.
+
+:ref:`development_early_stage` covers early-stage project planning, with an
+emphasis on involving the development community as soon as possible.
+
+:ref:`development_coding` is about the coding process; several pitfalls which
+have been encountered by other developers are discussed. Some requirements for
+patches are covered, and there is an introduction to some of the tools
+which can help to ensure that kernel patches are correct.
+
+:ref:`development_posting` talks about the process of posting patches for
+review. To be taken seriously by the development community, patches must be
+properly formatted and described, and they must be sent to the right place.
+Following the advice in this section should help to ensure the best
+possible reception for your work.
+
+:ref:`development_followthrough` covers what happens after posting patches; the
+job is far from done at that point. Working with reviewers is a crucial part
+of the development process; this section offers a number of tips on how to
+avoid problems at this important stage. Developers are cautioned against
+assuming that the job is done when a patch is merged into the mainline.
+
+:ref:`development_advancedtopics` introduces a couple of "advanced" topics:
+managing patches with git and reviewing patches posted by others.
+
+:ref:`development_conclusion` concludes the document with pointers to sources
+for more information on kernel development.
+
+What this document is about
+---------------------------
+
+The Linux kernel, at over 8 million lines of code and well over 1000
+contributors to each release, is one of the largest and most active free
+software projects in existence. Since its humble beginning in 1991, this
+kernel has evolved into a best-of-breed operating system component which
+runs on pocket-sized digital music players, desktop PCs, the largest
+supercomputers in existence, and all types of systems in between. It is a
+robust, efficient, and scalable solution for almost any situation.
+
+With the growth of Linux has come an increase in the number of developers
+(and companies) wishing to participate in its development. Hardware
+vendors want to ensure that Linux supports their products well, making
+those products attractive to Linux users. Embedded systems vendors, who
+use Linux as a component in an integrated product, want Linux to be as
+capable and well-suited to the task at hand as possible. Distributors and
+other software vendors who base their products on Linux have a clear
+interest in the capabilities, performance, and reliability of the Linux
+kernel. And end users, too, will often wish to change Linux to make it
+better suit their needs.
+
+One of the most compelling features of Linux is that it is accessible to
+these developers; anybody with the requisite skills can improve Linux and
+influence the direction of its development. Proprietary products cannot
+offer this kind of openness, which is a characteristic of the free software
+process. But, if anything, the kernel is even more open than most other
+free software projects. A typical three-month kernel development cycle can
+involve over 1000 developers working for more than 100 different companies
+(or for no company at all).
+
+Working with the kernel development community is not especially hard. But,
+that notwithstanding, many potential contributors have experienced
+difficulties when trying to do kernel work. The kernel community has
+evolved its own distinct ways of operating which allow it to function
+smoothly (and produce a high-quality product) in an environment where
+thousands of lines of code are being changed every day. So it is not
+surprising that the Linux kernel development process differs greatly from
+proprietary development methods.
+
+The kernel's development process may come across as strange and
+intimidating to new developers, but there are good reasons and solid
+experience behind it. A developer who does not understand the kernel
+community's ways (or, worse, who tries to flout or circumvent them) will
+have a frustrating experience in store. The development community, while
+being helpful to those who are trying to learn, has little time for those
+who will not listen or who do not care about the development process.
+
+It is hoped that those who read this document will be able to avoid that
+frustrating experience. There is a lot of material here, but the effort
+involved in reading it will be repaid in short order. The development
+community is always in need of developers who will help to make the kernel
+better; the following text should help you - or those who work for you -
+join our community.
+
+Credits
+-------
+
+This document was written by Jonathan Corbet, corbet@lwn.net. It has been
+improved by comments from Johannes Berg, James Berry, Alex Chiang, Roland
+Dreier, Randy Dunlap, Jake Edge, Jiri Kosina, Matt Mackall, Arthur Marsh,
+Amanda McPherson, Andrew Morton, Andrew Price, Tsugikazu Shibata, and
+Jochen Voß.
+
+This work was supported by the Linux Foundation; thanks especially to
+Amanda McPherson, who saw the value of this effort and made it all happen.
+
+The importance of getting code into the mainline
+------------------------------------------------
+
+Some companies and developers occasionally wonder why they should bother
+learning how to work with the kernel community and get their code into the
+mainline kernel (the "mainline" being the kernel maintained by Linus
+Torvalds and used as a base by Linux distributors). In the short term,
+contributing code can look like an avoidable expense; it seems easier to
+just keep the code separate and support users directly. The truth of the
+matter is that keeping code separate ("out of tree") is a false economy.
+
+As a way of illustrating the costs of out-of-tree code, here are a few
+relevant aspects of the kernel development process; most of these will be
+discussed in greater detail later in this document. Consider:
+
+- Code which has been merged into the mainline kernel is available to all
+ Linux users. It will automatically be present on all distributions which
+ enable it. There is no need for driver disks, downloads, or the hassles
+ of supporting multiple versions of multiple distributions; it all just
+ works, for the developer and for the user. Incorporation into the
+ mainline solves a large number of distribution and support problems.
+
+- While kernel developers strive to maintain a stable interface to user
+ space, the internal kernel API is in constant flux. The lack of a stable
+ internal interface is a deliberate design decision; it allows fundamental
+ improvements to be made at any time and results in higher-quality code.
+ But one result of that policy is that any out-of-tree code requires
+ constant upkeep if it is to work with new kernels. Maintaining
+ out-of-tree code requires significant amounts of work just to keep that
+ code working.
+
+ Code which is in the mainline, instead, does not require this work as the
+ result of a simple rule requiring any developer who makes an API change
+ to also fix any code that breaks as the result of that change. So code
+ which has been merged into the mainline has significantly lower
+ maintenance costs.
+
+- Beyond that, code which is in the kernel will often be improved by other
+ developers. Surprising results can come from empowering your user
+ community and customers to improve your product.
+
+- Kernel code is subjected to review, both before and after merging into
+ the mainline. No matter how strong the original developer's skills are,
+ this review process invariably finds ways in which the code can be
+ improved. Often review finds severe bugs and security problems. This is
+ especially true for code which has been developed in a closed
+ environment; such code benefits strongly from review by outside
+ developers. Out-of-tree code is lower-quality code.
+
+- Participation in the development process is your way to influence the
+ direction of kernel development. Users who complain from the sidelines
+ are heard, but active developers have a stronger voice - and the ability
+ to implement changes which make the kernel work better for their needs.
+
+- When code is maintained separately, the possibility that a third party
+ will contribute a different implementation of a similar feature always
+ exists. Should that happen, getting your code merged will become much
+ harder - to the point of impossibility. Then you will be faced with the
+ unpleasant alternatives of either (1) maintaining a nonstandard feature
+ out of tree indefinitely, or (2) abandoning your code and migrating your
+ users over to the in-tree version.
+
+- Contribution of code is the fundamental action which makes the whole
+ process work. By contributing your code you can add new functionality to
+ the kernel and provide capabilities and examples which are of use to
+ other kernel developers. If you have developed code for Linux (or are
+ thinking about doing so), you clearly have an interest in the continued
+ success of this platform; contributing code is one of the best ways to
+ help ensure that success.
+
+All of the reasoning above applies to any out-of-tree kernel code,
+including code which is distributed in proprietary, binary-only form.
+There are, however, additional factors which should be taken into account
+before considering any sort of binary-only kernel code distribution. These
+include:
+
+- The legal issues around the distribution of proprietary kernel modules
+ are cloudy at best; quite a few kernel copyright holders believe that
+ most binary-only modules are derived products of the kernel and that, as
+ a result, their distribution is a violation of the GNU General Public
+ License (about which more will be said below).  Your author is not a
+ lawyer, and nothing in this document can possibly be considered to be
+ legal advice. The true legal status of closed-source modules can only be
+ determined by the courts. But the uncertainty which haunts those modules
+ is there regardless.
+
+- Binary modules greatly increase the difficulty of debugging kernel
+ problems, to the point that most kernel developers will not even try. So
+ the distribution of binary-only modules will make it harder for your
+ users to get support from the community.
+
+- Support is also harder for distributors of binary-only modules, who must
+ provide a version of the module for every distribution and every kernel
+ version they wish to support. Dozens of builds of a single module can
+ be required to provide reasonably comprehensive coverage, and your users
+ will have to upgrade your module separately every time they upgrade their
+ kernel.
+
+- Everything that was said above about code review applies doubly to
+ closed-source code. Since this code is not available at all, it cannot
+ have been reviewed by the community and will, beyond doubt, have serious
+ problems.
+
+Makers of embedded systems, in particular, may be tempted to disregard much
+of what has been said in this section in the belief that they are shipping
+a self-contained product which uses a frozen kernel version and requires no
+more development after its release. This argument misses the value of
+widespread code review and the value of allowing your users to add
+capabilities to your product. But these products, too, have a limited
+commercial life, after which a new version must be released. At that
+point, vendors whose code is in the mainline and well maintained will be
+much better positioned to get the new product ready for market quickly.
+
+Licensing
+---------
+
+Code is contributed to the Linux kernel under a number of licenses, but all
+code must be compatible with version 2 of the GNU General Public License
+(GPLv2), which is the license covering the kernel distribution as a whole.
+In practice, that means that all code contributions are covered either by
+GPLv2 (with, optionally, language allowing distribution under later
+versions of the GPL) or the three-clause BSD license. Any contributions
+which are not covered by a compatible license will not be accepted into the
+kernel.
+
+Copyright assignments are not required (or requested) for code contributed
+to the kernel. All code merged into the mainline kernel retains its
+original ownership; as a result, the kernel now has thousands of owners.
+
+One implication of this ownership structure is that any attempt to change
+the licensing of the kernel is doomed to almost certain failure. There are
+few practical scenarios where the agreement of all copyright holders could
+be obtained (or their code removed from the kernel). So, in particular,
+there is no prospect of a migration to version 3 of the GPL in the
+foreseeable future.
+
+It is imperative that all code contributed to the kernel be legitimately
+free software. For that reason, code from anonymous (or pseudonymous)
+contributors will not be accepted. All contributors are required to "sign
+off" on their code, stating that the code can be distributed with the
+kernel under the GPL. Code which has not been licensed as free software by
+its owner, or which risks creating copyright-related problems for the
+kernel (such as code which derives from reverse-engineering efforts lacking
+proper safeguards) cannot be contributed.
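+
+In practice, the sign-off takes the form of a Signed-off-by line in the
+patch changelog.  As a simple illustration (the name and address below are
+fictitious), such a line looks like the following, and "git commit -s" will
+add one automatically for the committer's configured identity::
+
+    Signed-off-by: Jane Developer <jane@example.com>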
+
+Questions about copyright-related issues are common on Linux development
+mailing lists. Such questions will normally receive no shortage of
+answers, but one should bear in mind that the people answering those
+questions are not lawyers and cannot provide legal advice. If you have
+legal questions relating to Linux source code, there is no substitute for
+talking with a lawyer who understands this field. Relying on answers
+obtained on technical mailing lists is a risky affair.
diff --git a/Documentation/development-process/2.Process b/Documentation/development-process/2.Process
deleted file mode 100644
index c24e156..0000000
--- a/Documentation/development-process/2.Process
+++ /dev/null
@@ -1,478 +0,0 @@
-2: HOW THE DEVELOPMENT PROCESS WORKS
-
-Linux kernel development in the early 1990's was a pretty loose affair,
-with relatively small numbers of users and developers involved. With a
-user base in the millions and with some 2,000 developers involved over the
-course of one year, the kernel has since had to evolve a number of
-processes to keep development happening smoothly. A solid understanding of
-how the process works is required in order to be an effective part of it.
-
-
-2.1: THE BIG PICTURE
-
-The kernel developers use a loosely time-based release process, with a new
-major kernel release happening every two or three months. The recent
-release history looks like this:
-
- 2.6.38 March 14, 2011
- 2.6.37 January 4, 2011
- 2.6.36 October 20, 2010
- 2.6.35 August 1, 2010
- 2.6.34 May 15, 2010
- 2.6.33 February 24, 2010
-
-Every 2.6.x release is a major kernel release with new features, internal
-API changes, and more. A typical 2.6 release can contain nearly 10,000
-changesets with changes to several hundred thousand lines of code. 2.6 is
-thus the leading edge of Linux kernel development; the kernel uses a
-rolling development model which is continually integrating major changes.
-
-A relatively straightforward discipline is followed with regard to the
-merging of patches for each release. At the beginning of each development
-cycle, the "merge window" is said to be open. At that time, code which is
-deemed to be sufficiently stable (and which is accepted by the development
-community) is merged into the mainline kernel. The bulk of changes for a
-new development cycle (and all of the major changes) will be merged during
-this time, at a rate approaching 1,000 changes ("patches," or "changesets")
-per day.
-
-(As an aside, it is worth noting that the changes integrated during the
-merge window do not come out of thin air; they have been collected, tested,
-and staged ahead of time. How that process works will be described in
-detail later on).
-
-The merge window lasts for approximately two weeks. At the end of this
-time, Linus Torvalds will declare that the window is closed and release the
-first of the "rc" kernels. For the kernel which is destined to be 2.6.40,
-for example, the release which happens at the end of the merge window will
-be called 2.6.40-rc1. The -rc1 release is the signal that the time to
-merge new features has passed, and that the time to stabilize the next
-kernel has begun.
-
-Over the next six to ten weeks, only patches which fix problems should be
-submitted to the mainline. On occasion a more significant change will be
-allowed, but such occasions are rare; developers who try to merge new
-features outside of the merge window tend to get an unfriendly reception.
-As a general rule, if you miss the merge window for a given feature, the
-best thing to do is to wait for the next development cycle. (An occasional
-exception is made for drivers for previously-unsupported hardware; if they
-touch no in-tree code, they cannot cause regressions and should be safe to
-add at any time).
-
-As fixes make their way into the mainline, the patch rate will slow over
-time. Linus releases new -rc kernels about once a week; a normal series
-will get up to somewhere between -rc6 and -rc9 before the kernel is
-considered to be sufficiently stable and the final 2.6.x release is made.
-At that point the whole process starts over again.
-
-As an example, here is how the 2.6.38 development cycle went (all dates in
-2011):
-
- January 4 2.6.37 stable release
- January 18 2.6.38-rc1, merge window closes
- January 21 2.6.38-rc2
- February 1 2.6.38-rc3
- February 7 2.6.38-rc4
- February 15 2.6.38-rc5
- February 21 2.6.38-rc6
- March 1 2.6.38-rc7
- March 7 2.6.38-rc8
- March 14 2.6.38 stable release
-
-How do the developers decide when to close the development cycle and create
-the stable release? The most significant metric used is the list of
-regressions from previous releases. No bugs are welcome, but those which
-break systems which worked in the past are considered to be especially
-serious. For this reason, patches which cause regressions are looked upon
-unfavorably and are quite likely to be reverted during the stabilization
-period.
-
-The developers' goal is to fix all known regressions before the stable
-release is made. In the real world, this kind of perfection is hard to
-achieve; there are just too many variables in a project of this size.
-There comes a point where delaying the final release just makes the problem
-worse; the pile of changes waiting for the next merge window will grow
-larger, creating even more regressions the next time around. So most 2.6.x
-kernels go out with a handful of known regressions though, hopefully, none
-of them are serious.
-
-Once a stable release is made, its ongoing maintenance is passed off to the
-"stable team," currently consisting of Greg Kroah-Hartman. The stable team
-will release occasional updates to the stable release using the 2.6.x.y
-numbering scheme. To be considered for an update release, a patch must (1)
-fix a significant bug, and (2) already be merged into the mainline for the
-next development kernel. Kernels will typically receive stable updates for
-a little more than one development cycle past their initial release. So,
-for example, the 2.6.36 kernel's history looked like:
-
- October 10 2.6.36 stable release
- November 22 2.6.36.1
- December 9 2.6.36.2
- January 7 2.6.36.3
- February 17 2.6.36.4
-
-2.6.36.4 was the final stable update for the 2.6.36 release.
-
-Some kernels are designated "long term" kernels; they will receive support
-for a longer period. As of this writing, the current long term kernels
-and their maintainers are:
-
- 2.6.27 Willy Tarreau (Deep-frozen stable kernel)
- 2.6.32 Greg Kroah-Hartman
- 2.6.35 Andi Kleen (Embedded flag kernel)
-
-The selection of a kernel for long-term support is purely a matter of a
-maintainer having the need and the time to maintain that release. There
-are no known plans for long-term support for any specific upcoming
-release.
-
-
-2.2: THE LIFECYCLE OF A PATCH
-
-Patches do not go directly from the developer's keyboard into the mainline
-kernel. There is, instead, a somewhat involved (if somewhat informal)
-process designed to ensure that each patch is reviewed for quality and that
-each patch implements a change which is desirable to have in the mainline.
-This process can happen quickly for minor fixes, or, in the case of large
-and controversial changes, go on for years. Much developer frustration
-comes from a lack of understanding of this process or from attempts to
-circumvent it.
-
-In the hopes of reducing that frustration, this document will describe how
-a patch gets into the kernel. What follows below is an introduction which
-describes the process in a somewhat idealized way. A much more detailed
-treatment will come in later sections.
-
-The stages that a patch goes through are, generally:
-
- - Design. This is where the real requirements for the patch - and the way
- those requirements will be met - are laid out. Design work is often
- done without involving the community, but it is better to do this work
- in the open if at all possible; it can save a lot of time redesigning
- things later.
-
- - Early review. Patches are posted to the relevant mailing list, and
- developers on that list reply with any comments they may have. This
- process should turn up any major problems with a patch if all goes
- well.
-
- - Wider review. When the patch is getting close to ready for mainline
- inclusion, it should be accepted by a relevant subsystem maintainer -
- though this acceptance is not a guarantee that the patch will make it
- all the way to the mainline. The patch will show up in the maintainer's
- subsystem tree and into the -next trees (described below). When the
- process works, this step leads to more extensive review of the patch and
- the discovery of any problems resulting from the integration of this
- patch with work being done by others.
-
-- Please note that most maintainers also have day jobs, so merging
- your patch may not be their highest priority. If your patch is
- getting feedback about changes that are needed, you should either
- make those changes or justify why they should not be made. If your
- patch has no review complaints but is not being merged by its
- appropriate subsystem or driver maintainer, you should be persistent
- in updating the patch to the current kernel so that it applies cleanly
- and keep sending it for review and merging.
-
- - Merging into the mainline. Eventually, a successful patch will be
- merged into the mainline repository managed by Linus Torvalds. More
- comments and/or problems may surface at this time; it is important that
- the developer be responsive to these and fix any issues which arise.
-
- - Stable release. The number of users potentially affected by the patch
- is now large, so, once again, new problems may arise.
-
- - Long-term maintenance. While it is certainly possible for a developer
- to forget about code after merging it, that sort of behavior tends to
- leave a poor impression in the development community. Merging code
- eliminates some of the maintenance burden, in that others will fix
- problems caused by API changes. But the original developer should
- continue to take responsibility for the code if it is to remain useful
- in the longer term.
-
-One of the largest mistakes made by kernel developers (or their employers)
-is to try to cut the process down to a single "merging into the mainline"
-step. This approach invariably leads to frustration for everybody
-involved.
-
-
-2.3: HOW PATCHES GET INTO THE KERNEL
-
-There is exactly one person who can merge patches into the mainline kernel
-repository: Linus Torvalds. But, of the over 9,500 patches which went
-into the 2.6.38 kernel, only 112 (around 1.3%) were directly chosen by Linus
-himself. The kernel project has long since grown to a size where no single
-developer could possibly inspect and select every patch unassisted. The
-way the kernel developers have addressed this growth is through the use of
-a lieutenant system built around a chain of trust.
-
-The kernel code base is logically broken down into a set of subsystems:
-networking, specific architecture support, memory management, video
-devices, etc. Most subsystems have a designated maintainer, a developer
-who has overall responsibility for the code within that subsystem. These
-subsystem maintainers are the gatekeepers (in a loose way) for the portion
-of the kernel they manage; they are the ones who will (usually) accept a
-patch for inclusion into the mainline kernel.
-
-Subsystem maintainers each manage their own version of the kernel source
-tree, usually (but certainly not always) using the git source management
-tool. Tools like git (and related tools like quilt or mercurial) allow
-maintainers to track a list of patches, including authorship information
-and other metadata. At any given time, the maintainer can identify which
-patches in his or her repository are not found in the mainline.
-
-When the merge window opens, top-level maintainers will ask Linus to "pull"
-the patches they have selected for merging from their repositories. If
-Linus agrees, the stream of patches will flow up into his repository,
-becoming part of the mainline kernel. The amount of attention that Linus
-pays to specific patches received in a pull operation varies. It is clear
-that, sometimes, he looks quite closely. But, as a general rule, Linus
-trusts the subsystem maintainers to not send bad patches upstream.
-
-Subsystem maintainers, in turn, can pull patches from other maintainers.
-For example, the networking tree is built from patches which accumulated
-first in trees dedicated to network device drivers, wireless networking,
-etc. This chain of repositories can be arbitrarily long, though it rarely
-exceeds two or three links. Since each maintainer in the chain trusts
-those managing lower-level trees, this process is known as the "chain of
-trust."
-
-Clearly, in a system like this, getting patches into the kernel depends on
-finding the right maintainer. Sending patches directly to Linus is not
-normally the right way to go.
-
-
-2.4: NEXT TREES
-
-The chain of subsystem trees guides the flow of patches into the kernel,
-but it also raises an interesting question: what if somebody wants to look
-at all of the patches which are being prepared for the next merge window?
-Developers will be interested in what other changes are pending to see
-whether there are any conflicts to worry about; a patch which changes a
-core kernel function prototype, for example, will conflict with any other
-patches which use the older form of that function. Reviewers and testers
-want access to the changes in their integrated form before all of those
-changes land in the mainline kernel. One could pull changes from all of
-the interesting subsystem trees, but that would be a big and error-prone
-job.
-
-The answer comes in the form of -next trees, where subsystem trees are
-collected for testing and review. The older of these trees, maintained by
-Andrew Morton, is called "-mm" (for memory management, which is how it got
-started). The -mm tree integrates patches from a long list of subsystem
-trees; it also has some patches aimed at helping with debugging.
-
-Beyond that, -mm contains a significant collection of patches which have
-been selected by Andrew directly. These patches may have been posted on a
-mailing list, or they may apply to a part of the kernel for which there is
-no designated subsystem tree. As a result, -mm operates as a sort of
-subsystem tree of last resort; if there is no other obvious path for a
-patch into the mainline, it is likely to end up in -mm. Miscellaneous
-patches which accumulate in -mm will eventually either be forwarded on to
-an appropriate subsystem tree or be sent directly to Linus. In a typical
-development cycle, approximately 5-10% of the patches going into the
-mainline get there via -mm.
-
-The current -mm patch is available in the "mmotm" (-mm of the moment)
-directory at:
-
- http://www.ozlabs.org/~akpm/mmotm/
-
-Use of the MMOTM tree is likely to be a frustrating experience, though;
-there is a definite chance that it will not even compile.
-
-The primary tree for next-cycle patch merging is linux-next, maintained by
-Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
-the mainline is expected to look like after the next merge window closes.
-Linux-next trees are announced on the linux-kernel and linux-next mailing
-lists when they are assembled; they can be downloaded from:
-
- http://www.kernel.org/pub/linux/kernel/next/
-
-Linux-next has become an integral part of the kernel development process;
-all patches merged during a given merge window should really have found
-their way into linux-next some time before the merge window opens.
-
-
-2.4.1: STAGING TREES
-
-The kernel source tree contains the drivers/staging/ directory, where
-many sub-directories for drivers or filesystems that are on their way to
-being added to the kernel tree live. They remain in drivers/staging while
-they still need more work; once complete, they can be moved into the
-kernel proper. This is a way to keep track of drivers that aren't
-up to Linux kernel coding or quality standards, but people may want to use
-them and track development.
-
-Greg Kroah-Hartman currently maintains the staging tree. Drivers that
-still need work are sent to him, with each driver having its own
-subdirectory in drivers/staging/. Along with the driver source files, a
-TODO file should be present in the directory as well. The TODO file lists
-the pending work that the driver needs for acceptance into the kernel
-proper, as well as a list of people that should be Cc'd for any patches to
-the driver. Current rules require that drivers contributed to staging
-must, at a minimum, compile properly.
-
-Staging can be a relatively easy way to get new drivers into the mainline
-where, with luck, they will come to the attention of other developers and
-improve quickly. Entry into staging is not the end of the story, though;
-code in staging which is not seeing regular progress will eventually be
-removed. Distributors also tend to be relatively reluctant to enable
-staging drivers. So staging is, at best, a stop on the way toward becoming
-a proper mainline driver.
-
-
-2.5: TOOLS
-
-As can be seen from the above text, the kernel development process depends
-heavily on the ability to herd collections of patches in various
-directions. The whole thing would not work anywhere near as well as it
-does without suitably powerful tools. Tutorials on how to use these tools
-are well beyond the scope of this document, but there is space for a few
-pointers.
-
-By far the dominant source code management system used by the kernel
-community is git. Git is one of a number of distributed version control
-systems being developed in the free software community. It is well tuned
-for kernel development, in that it performs quite well when dealing with
-large repositories and large numbers of patches. It also has a reputation
-for being difficult to learn and use, though it has gotten better over
-time. Some sort of familiarity with git is almost a requirement for kernel
-developers; even if they do not use it for their own work, they'll need git
-to keep up with what other developers (and the mainline) are doing.
-
-Git is now packaged by almost all Linux distributions. There is a home
-page at:
-
- http://git-scm.com/
-
-That page has pointers to documentation and tutorials.
-
-Among the kernel developers who do not use git, the most popular choice is
-almost certainly Mercurial:
-
- http://www.selenic.com/mercurial/
-
-Mercurial shares many features with git, but it provides an interface which
-many find easier to use.
-
-The other tool worth knowing about is Quilt:
-
- http://savannah.nongnu.org/projects/quilt/
-
-Quilt is a patch management system, rather than a source code management
-system. It does not track history over time; it is, instead, oriented
-toward tracking a specific set of changes against an evolving code base.
-Some major subsystem maintainers use quilt to manage patches intended to go
-upstream. For the management of certain kinds of trees (-mm, for example),
-quilt is the best tool for the job.
-
-
-2.6: MAILING LISTS
-
-A great deal of Linux kernel development work is done by way of mailing
-lists. It is hard to be a fully-functioning member of the community
-without joining at least one list somewhere. But Linux mailing lists also
-represent a potential hazard to developers, who risk getting buried under a
-load of electronic mail, running afoul of the conventions used on the Linux
-lists, or both.
-
-Most kernel mailing lists are run on vger.kernel.org; the master list can
-be found at:
-
- http://vger.kernel.org/vger-lists.html
-
-There are lists hosted elsewhere, though; a number of them are at
-lists.redhat.com.
-
-The core mailing list for kernel development is, of course, linux-kernel.
-This list is an intimidating place to be; volume can reach 500 messages per
-day, the amount of noise is high, the conversation can be severely
-technical, and participants are not always concerned with showing a high
-degree of politeness. But there is no other place where the kernel
-development community comes together as a whole; developers who avoid this
-list will miss important information.
-
-There are a few hints which can help with linux-kernel survival:
-
-- Have the list delivered to a separate folder, rather than your main
- mailbox. One must be able to ignore the stream for sustained periods of
- time.
-
-- Do not try to follow every conversation - nobody else does. It is
- important to filter on both the topic of interest (though note that
- long-running conversations can drift away from the original subject
- without changing the email subject line) and the people who are
- participating.
-
-- Do not feed the trolls. If somebody is trying to stir up an angry
- response, ignore them.
-
-- When responding to linux-kernel email (or that on other lists) preserve
- the Cc: header for all involved. In the absence of a strong reason (such
- as an explicit request), you should never remove recipients. Always make
- sure that the person you are responding to is in the Cc: list. This
- convention also makes it unnecessary to explicitly ask to be copied on
- replies to your postings.
-
-- Search the list archives (and the net as a whole) before asking
- questions. Some developers can get impatient with people who clearly
- have not done their homework.
-
-- Avoid top-posting (the practice of putting your answer above the quoted
- text you are responding to). It makes your response harder to read and
- makes a poor impression.
-
-- Ask on the correct mailing list. Linux-kernel may be the general meeting
- point, but it is not the best place to find developers from all
- subsystems.
-
-The last point - finding the correct mailing list - is a common place for
-beginning developers to go wrong. Somebody who asks a networking-related
-question on linux-kernel will almost certainly receive a polite suggestion
-to ask on the netdev list instead, as that is the list frequented by most
-networking developers. Other lists exist for the SCSI, video4linux, IDE,
-filesystem, etc. subsystems. The best place to look for mailing lists is
-in the MAINTAINERS file packaged with the kernel source.
-
-
-2.7: GETTING STARTED WITH KERNEL DEVELOPMENT
-
-Questions about how to get started with the kernel development process are
-common - from both individuals and companies. Equally common are missteps
-which make the beginning of the relationship harder than it has to be.
-
-Companies often look to hire well-known developers to get a development
-group started. This can, in fact, be an effective technique. But it also
-tends to be expensive and does not do much to grow the pool of experienced
-kernel developers. It is possible to bring in-house developers up to speed
-on Linux kernel development, given the investment of a bit of time. Taking
-this time can endow an employer with a group of developers who understand
-the kernel and the company both, and who can help to train others as well.
-Over the medium term, this is often the more profitable approach.
-
-Individual developers are often, understandably, at a loss for a place to
-start. Beginning with a large project can be intimidating; one often wants
-to test the waters with something smaller first. This is the point where
-some developers jump into the creation of patches fixing spelling errors or
-minor coding style issues. Unfortunately, such patches create a level of
-noise which is distracting for the development community as a whole, so,
-increasingly, they are looked down upon. New developers wishing to
-introduce themselves to the community will not get the sort of reception
-they wish for by these means.
-
-Andrew Morton gives this advice for aspiring kernel developers
-
- The #1 project for all kernel beginners should surely be "make sure
- that the kernel runs perfectly at all times on all machines which
- you can lay your hands on". Usually the way to do this is to work
- with others on getting things fixed up (this can require
- persistence!) but that's fine - it's a part of kernel development.
-
-(http://lwn.net/Articles/283982/).
-
-In the absence of obvious problems to fix, developers are advised to look
-at the current lists of regressions and open bugs in general. There is
-never any shortage of issues in need of fixing; by addressing these issues,
-developers will gain experience with the process while, at the same time,
-building respect with the rest of the development community.
diff --git a/Documentation/development-process/2.Process.rst b/Documentation/development-process/2.Process.rst
new file mode 100644
index 0000000..ce5561b
--- /dev/null
+++ b/Documentation/development-process/2.Process.rst
@@ -0,0 +1,497 @@
+.. _development_process:
+
+How the development process works
+=================================
+
+Linux kernel development in the early 1990's was a pretty loose affair,
+with relatively small numbers of users and developers involved. With a
+user base in the millions and with some 2,000 developers involved over the
+course of one year, the kernel has since had to evolve a number of
+processes to keep development happening smoothly. A solid understanding of
+how the process works is required in order to be an effective part of it.
+
+The big picture
+---------------
+
+The kernel developers use a loosely time-based release process, with a new
+major kernel release happening every two or three months. The recent
+release history looks like this:
+
+ ====== =================
+ 2.6.38 March 14, 2011
+ 2.6.37 January 4, 2011
+ 2.6.36 October 20, 2010
+ 2.6.35 August 1, 2010
+ 2.6.34 May 15, 2010
+ 2.6.33 February 24, 2010
+ ====== =================
+
+Every 2.6.x release is a major kernel release with new features, internal
+API changes, and more. A typical 2.6 release can contain nearly 10,000
+changesets with changes to several hundred thousand lines of code. 2.6 is
+thus the leading edge of Linux kernel development; the kernel uses a
+rolling development model which is continually integrating major changes.
+
+A relatively straightforward discipline is followed with regard to the
+merging of patches for each release. At the beginning of each development
+cycle, the "merge window" is said to be open. At that time, code which is
+deemed to be sufficiently stable (and which is accepted by the development
+community) is merged into the mainline kernel. The bulk of changes for a
+new development cycle (and all of the major changes) will be merged during
+this time, at a rate approaching 1,000 changes ("patches," or "changesets")
+per day.
+
+(As an aside, it is worth noting that the changes integrated during the
+merge window do not come out of thin air; they have been collected, tested,
+and staged ahead of time. How that process works will be described in
+detail later on).
+
+The merge window lasts for approximately two weeks. At the end of this
+time, Linus Torvalds will declare that the window is closed and release the
+first of the "rc" kernels. For the kernel which is destined to be 2.6.40,
+for example, the release which happens at the end of the merge window will
+be called 2.6.40-rc1. The -rc1 release is the signal that the time to
+merge new features has passed, and that the time to stabilize the next
+kernel has begun.
+
+Over the next six to ten weeks, only patches which fix problems should be
+submitted to the mainline. On occasion a more significant change will be
+allowed, but such occasions are rare; developers who try to merge new
+features outside of the merge window tend to get an unfriendly reception.
+As a general rule, if you miss the merge window for a given feature, the
+best thing to do is to wait for the next development cycle. (An occasional
+exception is made for drivers for previously-unsupported hardware; if they
+touch no in-tree code, they cannot cause regressions and should be safe to
+add at any time).
+
+As fixes make their way into the mainline, the patch rate will slow over
+time. Linus releases new -rc kernels about once a week; a normal series
+will get up to somewhere between -rc6 and -rc9 before the kernel is
+considered to be sufficiently stable and the final 2.6.x release is made.
+At that point the whole process starts over again.
+
+As an example, here is how the 2.6.38 development cycle went (all dates in
+2011):
+
+ ============== ===============================
+ January 4 2.6.37 stable release
+ January 18 2.6.38-rc1, merge window closes
+ January 21 2.6.38-rc2
+ February 1 2.6.38-rc3
+ February 7 2.6.38-rc4
+ February 15 2.6.38-rc5
+ February 21 2.6.38-rc6
+ March 1 2.6.38-rc7
+ March 7 2.6.38-rc8
+ March 14 2.6.38 stable release
+ ============== ===============================
+
+How do the developers decide when to close the development cycle and create
+the stable release? The most significant metric used is the list of
+regressions from previous releases. No bugs are welcome, but those which
+break systems which worked in the past are considered to be especially
+serious. For this reason, patches which cause regressions are looked upon
+unfavorably and are quite likely to be reverted during the stabilization
+period.
+
+The developers' goal is to fix all known regressions before the stable
+release is made. In the real world, this kind of perfection is hard to
+achieve; there are just too many variables in a project of this size.
+There comes a point where delaying the final release just makes the problem
+worse; the pile of changes waiting for the next merge window will grow
+larger, creating even more regressions the next time around. So most 2.6.x
+kernels go out with a handful of known regressions, though, hopefully, none
+of them are serious.
+
+Once a stable release is made, its ongoing maintenance is passed off to the
+"stable team," currently consisting of Greg Kroah-Hartman. The stable team
+will release occasional updates to the stable release using the 2.6.x.y
+numbering scheme. To be considered for an update release, a patch must (1)
+fix a significant bug, and (2) already be merged into the mainline for the
+next development kernel. Kernels will typically receive stable updates for
+a little more than one development cycle past their initial release. So,
+for example, the 2.6.36 kernel's history looked like:
+
+ ============== ===============================
+ October 10 2.6.36 stable release
+ November 22 2.6.36.1
+ December 9 2.6.36.2
+ January 7 2.6.36.3
+ February 17 2.6.36.4
+ ============== ===============================
+
+2.6.36.4 was the final stable update for the 2.6.36 release.
+
+Some kernels are designated "long term" kernels; they will receive support
+for a longer period. As of this writing, the current long term kernels
+and their maintainers are:
+
+ ====== ====================== ===========================
+ 2.6.27 Willy Tarreau (Deep-frozen stable kernel)
+ 2.6.32 Greg Kroah-Hartman
+ 2.6.35 Andi Kleen (Embedded flag kernel)
+ ====== ====================== ===========================
+
+The selection of a kernel for long-term support is purely a matter of a
+maintainer having the need and the time to maintain that release. There
+are no known plans for long-term support for any specific upcoming
+release.
+
+
+The lifecycle of a patch
+------------------------
+
+Patches do not go directly from the developer's keyboard into the mainline
+kernel. There is, instead, a somewhat involved (if somewhat informal)
+process designed to ensure that each patch is reviewed for quality and that
+each patch implements a change which is desirable to have in the mainline.
+This process can happen quickly for minor fixes, or, in the case of large
+and controversial changes, go on for years. Much developer frustration
+comes from a lack of understanding of this process or from attempts to
+circumvent it.
+
+In the hopes of reducing that frustration, this document will describe how
+a patch gets into the kernel. What follows below is an introduction which
+describes the process in a somewhat idealized way. A much more detailed
+treatment will come in later sections.
+
+The stages that a patch goes through are, generally:
+
+ - Design. This is where the real requirements for the patch - and the way
+ those requirements will be met - are laid out. Design work is often
+ done without involving the community, but it is better to do this work
+ in the open if at all possible; it can save a lot of time redesigning
+ things later.
+
+ - Early review. Patches are posted to the relevant mailing list, and
+ developers on that list reply with any comments they may have. This
+ process should turn up any major problems with a patch if all goes
+ well.
+
+ - Wider review. When the patch is getting close to ready for mainline
+ inclusion, it should be accepted by a relevant subsystem maintainer -
+ though this acceptance is not a guarantee that the patch will make it
+ all the way to the mainline. The patch will show up in the maintainer's
+ subsystem tree and into the -next trees (described below). When the
+ process works, this step leads to more extensive review of the patch and
+ the discovery of any problems resulting from the integration of this
+ patch with work being done by others.
+
+ - Please note that most maintainers also have day jobs, so merging
+   your patch may not be their highest priority.  If your patch is
+   getting feedback about changes that are needed, you should either
+   make those changes or justify why they should not be made.  If your
+   patch has no review complaints but is not being merged by its
+   appropriate subsystem or driver maintainer, you should be persistent
+   in updating the patch to the current kernel so that it applies cleanly
+   and keep sending it for review and merging.
+
+ - Merging into the mainline. Eventually, a successful patch will be
+ merged into the mainline repository managed by Linus Torvalds. More
+ comments and/or problems may surface at this time; it is important that
+ the developer be responsive to these and fix any issues which arise.
+
+ - Stable release. The number of users potentially affected by the patch
+ is now large, so, once again, new problems may arise.
+
+ - Long-term maintenance. While it is certainly possible for a developer
+ to forget about code after merging it, that sort of behavior tends to
+ leave a poor impression in the development community. Merging code
+ eliminates some of the maintenance burden, in that others will fix
+ problems caused by API changes. But the original developer should
+ continue to take responsibility for the code if it is to remain useful
+ in the longer term.
+
+One of the largest mistakes made by kernel developers (or their employers)
+is to try to cut the process down to a single "merging into the mainline"
+step. This approach invariably leads to frustration for everybody
+involved.
+
+How patches get into the kernel
+-------------------------------
+
+There is exactly one person who can merge patches into the mainline kernel
+repository: Linus Torvalds. But, of the over 9,500 patches which went
+into the 2.6.38 kernel, only 112 (around 1.3%) were directly chosen by Linus
+himself. The kernel project has long since grown to a size where no single
+developer could possibly inspect and select every patch unassisted. The
+way the kernel developers have addressed this growth is through the use of
+a lieutenant system built around a chain of trust.
+
+The kernel code base is logically broken down into a set of subsystems:
+networking, specific architecture support, memory management, video
+devices, etc. Most subsystems have a designated maintainer, a developer
+who has overall responsibility for the code within that subsystem. These
+subsystem maintainers are the gatekeepers (in a loose way) for the portion
+of the kernel they manage; they are the ones who will (usually) accept a
+patch for inclusion into the mainline kernel.
+
+Subsystem maintainers each manage their own version of the kernel source
+tree, usually (but certainly not always) using the git source management
+tool. Tools like git (and related tools like quilt or mercurial) allow
+maintainers to track a list of patches, including authorship information
+and other metadata. At any given time, the maintainer can identify which
+patches in his or her repository are not found in the mainline.
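+
+As a purely illustrative sketch (the remote and branch names here are
+hypothetical), a maintainer using git might list the patches that sit in a
+local subsystem branch but have not yet reached the mainline with something
+like::
+
+    git fetch origin
+    git log --oneline origin/master..my-subsystem-branch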
+
+When the merge window opens, top-level maintainers will ask Linus to "pull"
+the patches they have selected for merging from their repositories. If
+Linus agrees, the stream of patches will flow up into his repository,
+becoming part of the mainline kernel. The amount of attention that Linus
+pays to specific patches received in a pull operation varies. It is clear
+that, sometimes, he looks quite closely. But, as a general rule, Linus
+trusts the subsystem maintainers to not send bad patches upstream.
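+
+For illustration only (the tag, repository URL, and branch name below are
+made up), the message accompanying such a request is commonly generated
+with git's request-pull command::
+
+    git request-pull v2.6.37 git://example.org/my-subsystem.git my-branch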
+
+Subsystem maintainers, in turn, can pull patches from other maintainers.
+For example, the networking tree is built from patches which accumulated
+first in trees dedicated to network device drivers, wireless networking,
+etc. This chain of repositories can be arbitrarily long, though it rarely
+exceeds two or three links. Since each maintainer in the chain trusts
+those managing lower-level trees, this process is known as the "chain of
+trust."
+
+Clearly, in a system like this, getting patches into the kernel depends on
+finding the right maintainer. Sending patches directly to Linus is not
+normally the right way to go.
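+
+The kernel source carries a helper script for exactly this problem; as a
+rough example (the patch file name is invented), it can be pointed at a
+patch or at source files to suggest maintainers and mailing lists::
+
+    ./scripts/get_maintainer.pl my-new-driver.patch
+    ./scripts/get_maintainer.pl -f drivers/staging/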
+
+
+Next trees
+----------
+
+The chain of subsystem trees guides the flow of patches into the kernel,
+but it also raises an interesting question: what if somebody wants to look
+at all of the patches which are being prepared for the next merge window?
+Developers will be interested in what other changes are pending to see
+whether there are any conflicts to worry about; a patch which changes a
+core kernel function prototype, for example, will conflict with any other
+patches which use the older form of that function. Reviewers and testers
+want access to the changes in their integrated form before all of those
+changes land in the mainline kernel. One could pull changes from all of
+the interesting subsystem trees, but that would be a big and error-prone
+job.
+
+The answer comes in the form of -next trees, where subsystem trees are
+collected for testing and review. The older of these trees, maintained by
+Andrew Morton, is called "-mm" (for memory management, which is how it got
+started). The -mm tree integrates patches from a long list of subsystem
+trees; it also has some patches aimed at helping with debugging.
+
+Beyond that, -mm contains a significant collection of patches which have
+been selected by Andrew directly. These patches may have been posted on a
+mailing list, or they may apply to a part of the kernel for which there is
+no designated subsystem tree. As a result, -mm operates as a sort of
+subsystem tree of last resort; if there is no other obvious path for a
+patch into the mainline, it is likely to end up in -mm. Miscellaneous
+patches which accumulate in -mm will eventually either be forwarded on to
+an appropriate subsystem tree or be sent directly to Linus. In a typical
+development cycle, approximately 5-10% of the patches going into the
+mainline get there via -mm.
+
+The current -mm patch is available in the "mmotm" (-mm of the moment)
+directory at:
+
+ http://www.ozlabs.org/~akpm/mmotm/
+
+Use of the MMOTM tree is likely to be a frustrating experience, though;
+there is a definite chance that it will not even compile.
+
+The primary tree for next-cycle patch merging is linux-next, maintained by
+Stephen Rothwell. The linux-next tree is, by design, a snapshot of what
+the mainline is expected to look like after the next merge window closes.
+Linux-next trees are announced on the linux-kernel and linux-next mailing
+lists when they are assembled; they can be downloaded from:
+
+ http://www.kernel.org/pub/linux/kernel/next/
+
+Linux-next has become an integral part of the kernel development process;
+all patches merged during a given merge window should really have found
+their way into linux-next some time before the merge window opens.
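+
+One way to follow linux-next, assuming the tree is fetched from its usual
+kernel.org location, is to add it as a remote of an existing kernel clone::
+
+        git remote add linux-next \
+                git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
+        git fetch --tags linux-next
+        # check out a specific daily tag (the date shown is only an example)
+        git checkout -b test-next next-20161028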
+
+
+Staging trees
+-------------
+
+The kernel source tree contains the drivers/staging/ directory, which
+holds many sub-directories for drivers or filesystems that are on their
+way into the kernel tree. They remain in drivers/staging while they
+still need more work; once complete, they can be moved into the kernel
+proper. This is a way to keep track of drivers that aren't up to Linux
+kernel coding or quality standards, but that people may still want to
+use while following their development.
+
+Greg Kroah-Hartman currently maintains the staging tree. Drivers that
+still need work are sent to him, with each driver having its own
+subdirectory in drivers/staging/. Along with the driver source files, a
+TODO file should be present in the directory as well. The TODO file lists
+the pending work that the driver needs for acceptance into the kernel
+proper, as well as a list of people that should be Cc'd for any patches to
+the driver. Current rules require that drivers contributed to staging,
+at a minimum, compile properly.
+
+Staging can be a relatively easy way to get new drivers into the mainline
+where, with luck, they will come to the attention of other developers and
+improve quickly. Entry into staging is not the end of the story, though;
+code in staging which is not seeing regular progress will eventually be
+removed. Distributors also tend to be relatively reluctant to enable
+staging drivers. So staging is, at best, a stop on the way toward becoming
+a proper mainline driver.
+
+
+Tools
+-----
+
+As can be seen from the above text, the kernel development process depends
+heavily on the ability to herd collections of patches in various
+directions. The whole thing would not work anywhere near as well as it
+does without suitably powerful tools. Tutorials on how to use these tools
+are well beyond the scope of this document, but there is space for a few
+pointers.
+
+By far the dominant source code management system used by the kernel
+community is git. Git is one of a number of distributed version control
+systems being developed in the free software community. It is well tuned
+for kernel development, in that it performs quite well when dealing with
+large repositories and large numbers of patches. It also has a reputation
+for being difficult to learn and use, though it has gotten better over
+time. Some sort of familiarity with git is almost a requirement for kernel
+developers; even if they do not use it for their own work, they'll need git
+to keep up with what other developers (and the mainline) are doing.
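+
+Keeping up with the mainline, for example, can be as simple as cloning
+Linus's tree and pulling periodically; the commands below assume the
+standard kernel.org hosting location::
+
+        git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+        cd linux
+        git pull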
+
+Git is now packaged by almost all Linux distributions. There is a home
+page at:
+
+ http://git-scm.com/
+
+That page has pointers to documentation and tutorials.
+
+Among the kernel developers who do not use git, the most popular choice is
+almost certainly Mercurial:
+
+ http://www.selenic.com/mercurial/
+
+Mercurial shares many features with git, but it provides an interface which
+many find easier to use.
+
+The other tool worth knowing about is Quilt:
+
+ http://savannah.nongnu.org/projects/quilt/
+
+Quilt is a patch management system, rather than a source code management
+system. It does not track history over time; it is, instead, oriented
+toward tracking a specific set of changes against an evolving code base.
+Some major subsystem maintainers use quilt to manage patches intended to go
+upstream. For the management of certain kinds of trees (-mm, for example),
+quilt is the best tool for the job.
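+
+A minimal quilt session might look like this (the patch and file names are
+invented for illustration)::
+
+        quilt new widget-fix.patch       # start a new patch on top of the series
+        quilt add drivers/char/widget.c  # record the file before editing it
+        # ... edit drivers/char/widget.c ...
+        quilt refresh                    # capture the edits into widget-fix.patch
+        quilt pop -a                     # unapply the whole series
+        quilt push -a                    # reapply it on a (possibly updated) tree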
+
+
+Mailing lists
+-------------
+
+A great deal of Linux kernel development work is done by way of mailing
+lists. It is hard to be a fully-functioning member of the community
+without joining at least one list somewhere. But Linux mailing lists also
+represent a potential hazard to developers, who risk getting buried under a
+load of electronic mail, running afoul of the conventions used on the Linux
+lists, or both.
+
+Most kernel mailing lists are run on vger.kernel.org; the master list can
+be found at:
+
+ http://vger.kernel.org/vger-lists.html
+
+There are lists hosted elsewhere, though; a number of them are at
+lists.redhat.com.
+
+The core mailing list for kernel development is, of course, linux-kernel.
+This list is an intimidating place to be; volume can reach 500 messages per
+day, the amount of noise is high, the conversation can be severely
+technical, and participants are not always concerned with showing a high
+degree of politeness. But there is no other place where the kernel
+development community comes together as a whole; developers who avoid this
+list will miss important information.
+
+There are a few hints which can help with linux-kernel survival:
+
+- Have the list delivered to a separate folder, rather than your main
+ mailbox. One must be able to ignore the stream for sustained periods of
+ time.
+
+- Do not try to follow every conversation - nobody else does. It is
+ important to filter on both the topic of interest (though note that
+ long-running conversations can drift away from the original subject
+ without changing the email subject line) and the people who are
+ participating.
+
+- Do not feed the trolls. If somebody is trying to stir up an angry
+ response, ignore them.
+
+- When responding to linux-kernel email (or that on other lists) preserve
+ the Cc: header for all involved. In the absence of a strong reason (such
+ as an explicit request), you should never remove recipients. Always make
+ sure that the person you are responding to is in the Cc: list. This
+ convention also makes it unnecessary to explicitly ask to be copied on
+ replies to your postings.
+
+- Search the list archives (and the net as a whole) before asking
+ questions. Some developers can get impatient with people who clearly
+ have not done their homework.
+
+- Avoid top-posting (the practice of putting your answer above the quoted
+ text you are responding to). It makes your response harder to read and
+ makes a poor impression.
+
+- Ask on the correct mailing list. Linux-kernel may be the general meeting
+ point, but it is not the best place to find developers from all
+ subsystems.
+
+The last point - finding the correct mailing list - is a common place for
+beginning developers to go wrong. Somebody who asks a networking-related
+question on linux-kernel will almost certainly receive a polite suggestion
+to ask on the netdev list instead, as that is the list frequented by most
+networking developers. Other lists exist for the SCSI, video4linux, IDE,
+filesystem, etc. subsystems. The best place to look for mailing lists is
+in the MAINTAINERS file packaged with the kernel source.
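+
+A quick way to locate the right list is to search the MAINTAINERS file for
+the subsystem of interest and look at its "L:" lines; for example (the
+entry name is only illustrative of the file's format)::
+
+        # show the mailing list(s) for the general networking entry
+        grep -A 10 '^NETWORKING \[GENERAL\]' MAINTAINERS | grep '^L:'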
+
+
+Getting started with Kernel development
+---------------------------------------
+
+Questions about how to get started with the kernel development process are
+common - from both individuals and companies. Equally common are missteps
+which make the beginning of the relationship harder than it has to be.
+
+Companies often look to hire well-known developers to get a development
+group started. This can, in fact, be an effective technique. But it also
+tends to be expensive and does not do much to grow the pool of experienced
+kernel developers. It is possible to bring in-house developers up to speed
+on Linux kernel development, given the investment of a bit of time. Taking
+this time can endow an employer with a group of developers who understand
+the kernel and the company both, and who can help to train others as well.
+Over the medium term, this is often the more profitable approach.
+
+Individual developers are often, understandably, at a loss for a place to
+start. Beginning with a large project can be intimidating; one often wants
+to test the waters with something smaller first. This is the point where
+some developers jump into the creation of patches fixing spelling errors or
+minor coding style issues. Unfortunately, such patches create a level of
+noise which is distracting for the development community as a whole, so,
+increasingly, they are looked down upon. New developers wishing to
+introduce themselves to the community will not get the sort of reception
+they wish for by these means.
+
+Andrew Morton gives this advice for aspiring kernel developers
+
+::
+
+ The #1 project for all kernel beginners should surely be "make sure
+ that the kernel runs perfectly at all times on all machines which
+ you can lay your hands on". Usually the way to do this is to work
+ with others on getting things fixed up (this can require
+ persistence!) but that's fine - it's a part of kernel development.
+
+(http://lwn.net/Articles/283982/).
+
+In the absence of obvious problems to fix, developers are advised to look
+at the current lists of regressions and open bugs in general. There is
+never any shortage of issues in need of fixing; by addressing these issues,
+developers will gain experience with the process while, at the same time,
+building respect with the rest of the development community.
diff --git a/Documentation/development-process/3.Early-stage b/Documentation/development-process/3.Early-stage
deleted file mode 100644
index f87ba7b..0000000
--- a/Documentation/development-process/3.Early-stage
+++ /dev/null
@@ -1,212 +0,0 @@
-3: EARLY-STAGE PLANNING
-
-When contemplating a Linux kernel development project, it can be tempting
-to jump right in and start coding. As with any significant project,
-though, much of the groundwork for success is best laid before the first
-line of code is written. Some time spent in early planning and
-communication can save far more time later on.
-
-
-3.1: SPECIFYING THE PROBLEM
-
-Like any engineering project, a successful kernel enhancement starts with a
-clear description of the problem to be solved. In some cases, this step is
-easy: when a driver is needed for a specific piece of hardware, for
-example. In others, though, it is tempting to confuse the real problem
-with the proposed solution, and that can lead to difficulties.
-
-Consider an example: some years ago, developers working with Linux audio
-sought a way to run applications without dropouts or other artifacts caused
-by excessive latency in the system. The solution they arrived at was a
-kernel module intended to hook into the Linux Security Module (LSM)
-framework; this module could be configured to give specific applications
-access to the realtime scheduler. This module was implemented and sent to
-the linux-kernel mailing list, where it immediately ran into problems.
-
-To the audio developers, this security module was sufficient to solve their
-immediate problem. To the wider kernel community, though, it was seen as a
-misuse of the LSM framework (which is not intended to confer privileges
-onto processes which they would not otherwise have) and a risk to system
-stability. Their preferred solutions involved realtime scheduling access
-via the rlimit mechanism for the short term, and ongoing latency reduction
-work in the long term.
-
-The audio community, however, could not see past the particular solution
-they had implemented; they were unwilling to accept alternatives. The
-resulting disagreement left those developers feeling disillusioned with the
-entire kernel development process; one of them went back to an audio list
-and posted this:
-
- There are a number of very good Linux kernel developers, but they
- tend to get outshouted by a large crowd of arrogant fools. Trying
- to communicate user requirements to these people is a waste of
- time. They are much too "intelligent" to listen to lesser mortals.
-
-(http://lwn.net/Articles/131776/).
-
-The reality of the situation was different; the kernel developers were far
-more concerned about system stability, long-term maintenance, and finding
-the right solution to the problem than they were with a specific module.
-The moral of the story is to focus on the problem - not a specific solution
-- and to discuss it with the development community before investing in the
-creation of a body of code.
-
-So, when contemplating a kernel development project, one should obtain
-answers to a short set of questions:
-
- - What, exactly, is the problem which needs to be solved?
-
- - Who are the users affected by this problem? Which use cases should the
- solution address?
-
- - How does the kernel fall short in addressing that problem now?
-
-Only then does it make sense to start considering possible solutions.
-
-
-3.2: EARLY DISCUSSION
-
-When planning a kernel development project, it makes great sense to hold
-discussions with the community before launching into implementation. Early
-communication can save time and trouble in a number of ways:
-
- - It may well be that the problem is addressed by the kernel in ways which
- you have not understood. The Linux kernel is large and has a number of
- features and capabilities which are not immediately obvious. Not all
- kernel capabilities are documented as well as one might like, and it is
- easy to miss things. Your author has seen the posting of a complete
- driver which duplicated an existing driver that the new author had been
- unaware of. Code which reinvents existing wheels is not only wasteful;
- it will also not be accepted into the mainline kernel.
-
- - There may be elements of the proposed solution which will not be
- acceptable for mainline merging. It is better to find out about
- problems like this before writing the code.
-
- - It's entirely possible that other developers have thought about the
- problem; they may have ideas for a better solution, and may be willing
- to help in the creation of that solution.
-
-Years of experience with the kernel development community have taught a
-clear lesson: kernel code which is designed and developed behind closed
-doors invariably has problems which are only revealed when the code is
-released into the community. Sometimes these problems are severe,
-requiring months or years of effort before the code can be brought up to
-the kernel community's standards. Some examples include:
-
- - The Devicescape network stack was designed and implemented for
- single-processor systems. It could not be merged into the mainline
- until it was made suitable for multiprocessor systems. Retrofitting
- locking and such into code is a difficult task; as a result, the merging
- of this code (now called mac80211) was delayed for over a year.
-
- - The Reiser4 filesystem included a number of capabilities which, in the
- core kernel developers' opinion, should have been implemented in the
- virtual filesystem layer instead. It also included features which could
- not easily be implemented without exposing the system to user-caused
- deadlocks. The late revelation of these problems - and refusal to
- address some of them - has caused Reiser4 to stay out of the mainline
- kernel.
-
- - The AppArmor security module made use of internal virtual filesystem
- data structures in ways which were considered to be unsafe and
- unreliable. This concern (among others) kept AppArmor out of the
- mainline for years.
-
-In each of these cases, a great deal of pain and extra work could have been
-avoided with some early discussion with the kernel developers.
-
-
-3.3: WHO DO YOU TALK TO?
-
-When developers decide to take their plans public, the next question will
-be: where do we start? The answer is to find the right mailing list(s) and
-the right maintainer. For mailing lists, the best approach is to look in
-the MAINTAINERS file for a relevant place to post. If there is a suitable
-subsystem list, posting there is often preferable to posting on
-linux-kernel; you are more likely to reach developers with expertise in the
-relevant subsystem and the environment may be more supportive.
-
-Finding maintainers can be a bit harder. Again, the MAINTAINERS file is
-the place to start. That file tends to not always be up to date, though,
-and not all subsystems are represented there. The person listed in the
-MAINTAINERS file may, in fact, not be the person who is actually acting in
-that role currently. So, when there is doubt about who to contact, a
-useful trick is to use git (and "git log" in particular) to see who is
-currently active within the subsystem of interest. Look at who is writing
-patches, and who, if anybody, is attaching Signed-off-by lines to those
-patches. Those are the people who will be best placed to help with a new
-development project.
-
-The task of finding the right maintainer is sometimes challenging enough
-that the kernel developers have added a script to ease the process:
-
- .../scripts/get_maintainer.pl
-
-This script will return the current maintainer(s) for a given file or
-directory when given the "-f" option. If passed a patch on the
-command line, it will list the maintainers who should probably receive
-copies of the patch. There are a number of options regulating how hard
-get_maintainer.pl will search for maintainers; please be careful about
-using the more aggressive options as you may end up including developers
-who have no real interest in the code you are modifying.
-
-If all else fails, talking to Andrew Morton can be an effective way to
-track down a maintainer for a specific piece of code.
-
-
-3.4: WHEN TO POST?
-
-If possible, posting your plans during the early stages can only be
-helpful. Describe the problem being solved and any plans that have been
-made on how the implementation will be done. Any information you can
-provide can help the development community provide useful input on the
-project.
-
-One discouraging thing which can happen at this stage is not a hostile
-reaction, but, instead, little or no reaction at all. The sad truth of the
-matter is (1) kernel developers tend to be busy, (2) there is no shortage
-of people with grand plans and little code (or even prospect of code) to
-back them up, and (3) nobody is obligated to review or comment on ideas
-posted by others. Beyond that, high-level designs often hide problems
-which are only reviewed when somebody actually tries to implement those
-designs; for that reason, kernel developers would rather see the code.
-
-If a request-for-comments posting yields little in the way of comments, do
-not assume that it means there is no interest in the project.
-Unfortunately, you also cannot assume that there are no problems with your
-idea. The best thing to do in this situation is to proceed, keeping the
-community informed as you go.
-
-
-3.5: GETTING OFFICIAL BUY-IN
-
-If your work is being done in a corporate environment - as most Linux
-kernel work is - you must, obviously, have permission from suitably
-empowered managers before you can post your company's plans or code to a
-public mailing list. The posting of code which has not been cleared for
-release under a GPL-compatible license can be especially problematic; the
-sooner that a company's management and legal staff can agree on the posting
-of a kernel development project, the better off everybody involved will be.
-
-Some readers may be thinking at this point that their kernel work is
-intended to support a product which does not yet have an officially
-acknowledged existence. Revealing their employer's plans on a public
-mailing list may not be a viable option. In cases like this, it is worth
-considering whether the secrecy is really necessary; there is often no real
-need to keep development plans behind closed doors.
-
-That said, there are also cases where a company legitimately cannot
-disclose its plans early in the development process. Companies with
-experienced kernel developers may choose to proceed in an open-loop manner
-on the assumption that they will be able to avoid serious integration
-problems later. For companies without that sort of in-house expertise, the
-best option is often to hire an outside developer to review the plans under
-a non-disclosure agreement. The Linux Foundation operates an NDA program
-designed to help with this sort of situation; more information can be found
-at:
-
- http://www.linuxfoundation.org/en/NDA_program
-
-This kind of review is often enough to avoid serious problems later on
-without requiring public disclosure of the project.
diff --git a/Documentation/development-process/3.Early-stage.rst b/Documentation/development-process/3.Early-stage.rst
new file mode 100644
index 0000000..af2c0af
--- /dev/null
+++ b/Documentation/development-process/3.Early-stage.rst
@@ -0,0 +1,222 @@
+.. _development_early_stage:
+
+Early-stage planning
+====================
+
+When contemplating a Linux kernel development project, it can be tempting
+to jump right in and start coding. As with any significant project,
+though, much of the groundwork for success is best laid before the first
+line of code is written. Some time spent in early planning and
+communication can save far more time later on.
+
+
+Specifying the problem
+----------------------
+
+Like any engineering project, a successful kernel enhancement starts with a
+clear description of the problem to be solved. In some cases, this step is
+easy: when a driver is needed for a specific piece of hardware, for
+example. In others, though, it is tempting to confuse the real problem
+with the proposed solution, and that can lead to difficulties.
+
+Consider an example: some years ago, developers working with Linux audio
+sought a way to run applications without dropouts or other artifacts caused
+by excessive latency in the system. The solution they arrived at was a
+kernel module intended to hook into the Linux Security Module (LSM)
+framework; this module could be configured to give specific applications
+access to the realtime scheduler. This module was implemented and sent to
+the linux-kernel mailing list, where it immediately ran into problems.
+
+To the audio developers, this security module was sufficient to solve their
+immediate problem. To the wider kernel community, though, it was seen as a
+misuse of the LSM framework (which is not intended to confer privileges
+onto processes which they would not otherwise have) and a risk to system
+stability. Their preferred solutions involved realtime scheduling access
+via the rlimit mechanism for the short term, and ongoing latency reduction
+work in the long term.
+
+The audio community, however, could not see past the particular solution
+they had implemented; they were unwilling to accept alternatives. The
+resulting disagreement left those developers feeling disillusioned with the
+entire kernel development process; one of them went back to an audio list
+and posted this:
+
+ There are a number of very good Linux kernel developers, but they
+ tend to get outshouted by a large crowd of arrogant fools. Trying
+ to communicate user requirements to these people is a waste of
+ time. They are much too "intelligent" to listen to lesser mortals.
+
+(http://lwn.net/Articles/131776/).
+
+The reality of the situation was different; the kernel developers were far
+more concerned about system stability, long-term maintenance, and finding
+the right solution to the problem than they were with a specific module.
+The moral of the story is to focus on the problem - not a specific solution
+- and to discuss it with the development community before investing in the
+creation of a body of code.
+
+So, when contemplating a kernel development project, one should obtain
+answers to a short set of questions:
+
+ - What, exactly, is the problem which needs to be solved?
+
+ - Who are the users affected by this problem? Which use cases should the
+ solution address?
+
+ - How does the kernel fall short in addressing that problem now?
+
+Only then does it make sense to start considering possible solutions.
+
+
+Early discussion
+----------------
+
+When planning a kernel development project, it makes great sense to hold
+discussions with the community before launching into implementation. Early
+communication can save time and trouble in a number of ways:
+
+ - It may well be that the problem is addressed by the kernel in ways which
+ you have not understood. The Linux kernel is large and has a number of
+ features and capabilities which are not immediately obvious. Not all
+ kernel capabilities are documented as well as one might like, and it is
+ easy to miss things. Your author has seen the posting of a complete
+ driver which duplicated an existing driver that the new author had been
+ unaware of. Code which reinvents existing wheels is not only wasteful;
+ it will also not be accepted into the mainline kernel.
+
+ - There may be elements of the proposed solution which will not be
+ acceptable for mainline merging. It is better to find out about
+ problems like this before writing the code.
+
+ - It's entirely possible that other developers have thought about the
+ problem; they may have ideas for a better solution, and may be willing
+ to help in the creation of that solution.
+
+Years of experience with the kernel development community have taught a
+clear lesson: kernel code which is designed and developed behind closed
+doors invariably has problems which are only revealed when the code is
+released into the community. Sometimes these problems are severe,
+requiring months or years of effort before the code can be brought up to
+the kernel community's standards. Some examples include:
+
+ - The Devicescape network stack was designed and implemented for
+ single-processor systems. It could not be merged into the mainline
+ until it was made suitable for multiprocessor systems. Retrofitting
+ locking and such into code is a difficult task; as a result, the merging
+ of this code (now called mac80211) was delayed for over a year.
+
+ - The Reiser4 filesystem included a number of capabilities which, in the
+ core kernel developers' opinion, should have been implemented in the
+ virtual filesystem layer instead. It also included features which could
+ not easily be implemented without exposing the system to user-caused
+ deadlocks. The late revelation of these problems - and refusal to
+ address some of them - has caused Reiser4 to stay out of the mainline
+ kernel.
+
+ - The AppArmor security module made use of internal virtual filesystem
+ data structures in ways which were considered to be unsafe and
+ unreliable. This concern (among others) kept AppArmor out of the
+ mainline for years.
+
+In each of these cases, a great deal of pain and extra work could have been
+avoided with some early discussion with the kernel developers.
+
+
+Who do you talk to?
+-------------------
+
+When developers decide to take their plans public, the next question will
+be: where do we start? The answer is to find the right mailing list(s) and
+the right maintainer. For mailing lists, the best approach is to look in
+the MAINTAINERS file for a relevant place to post. If there is a suitable
+subsystem list, posting there is often preferable to posting on
+linux-kernel; you are more likely to reach developers with expertise in the
+relevant subsystem and the environment may be more supportive.
+
+Finding maintainers can be a bit harder. Again, the MAINTAINERS file is
+the place to start. That file is not always up to date, though, and not
+all subsystems are represented there. The person listed in the
+MAINTAINERS file may, in fact, not be the person who is actually acting in
+that role currently. So, when there is doubt about who to contact, a
+useful trick is to use git (and "git log" in particular) to see who is
+currently active within the subsystem of interest. Look at who is writing
+patches, and who, if anybody, is attaching Signed-off-by lines to those
+patches. Those are the people who will be best placed to help with a new
+development project.
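+
+As a rough illustration (the directory chosen is arbitrary), recent
+authorship and sign-off activity for a subsystem can be summarized with::
+
+        # who has authored changes under drivers/char/ipmi/ in the last year
+        git shortlog -ns --since='1 year ago' -- drivers/char/ipmi/
+
+        # who has been signing off on those changes
+        git log --since='1 year ago' -- drivers/char/ipmi/ | \
+                grep 'Signed-off-by:' | sort | uniq -c | sort -rn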
+
+The task of finding the right maintainer is sometimes challenging enough
+that the kernel developers have added a script to ease the process:
+
+::
+
+ .../scripts/get_maintainer.pl
+
+This script will return the current maintainer(s) for a given file or
+directory when given the "-f" option. If passed a patch on the
+command line, it will list the maintainers who should probably receive
+copies of the patch. There are a number of options regulating how hard
+get_maintainer.pl will search for maintainers; please be careful about
+using the more aggressive options as you may end up including developers
+who have no real interest in the code you are modifying.
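+
+Typical invocations look like the following (the directory and patch names
+are only examples)::
+
+        # maintainers and lists for an existing file or directory
+        ./scripts/get_maintainer.pl -f drivers/char/ipmi/
+
+        # suggested recipients for a patch you are about to send
+        ./scripts/get_maintainer.pl 0001-fix-widget-overflow.patch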
+
+If all else fails, talking to Andrew Morton can be an effective way to
+track down a maintainer for a specific piece of code.
+
+
+When to post?
+-------------
+
+If possible, posting your plans during the early stages can only be
+helpful. Describe the problem being solved and any plans that have been
+made on how the implementation will be done. Any information you can
+provide can help the development community provide useful input on the
+project.
+
+One discouraging thing which can happen at this stage is not a hostile
+reaction, but, instead, little or no reaction at all. The sad truth of the
+matter is (1) kernel developers tend to be busy, (2) there is no shortage
+of people with grand plans and little code (or even prospect of code) to
+back them up, and (3) nobody is obligated to review or comment on ideas
+posted by others. Beyond that, high-level designs often hide problems
+which are only reviewed when somebody actually tries to implement those
+designs; for that reason, kernel developers would rather see the code.
+
+If a request-for-comments posting yields little in the way of comments, do
+not assume that it means there is no interest in the project.
+Unfortunately, you also cannot assume that there are no problems with your
+idea. The best thing to do in this situation is to proceed, keeping the
+community informed as you go.
+
+
+Getting official buy-in
+-----------------------
+
+If your work is being done in a corporate environment - as most Linux
+kernel work is - you must, obviously, have permission from suitably
+empowered managers before you can post your company's plans or code to a
+public mailing list. The posting of code which has not been cleared for
+release under a GPL-compatible license can be especially problematic; the
+sooner that a company's management and legal staff can agree on the posting
+of a kernel development project, the better off everybody involved will be.
+
+Some readers may be thinking at this point that their kernel work is
+intended to support a product which does not yet have an officially
+acknowledged existence. Revealing their employer's plans on a public
+mailing list may not be a viable option. In cases like this, it is worth
+considering whether the secrecy is really necessary; there is often no real
+need to keep development plans behind closed doors.
+
+That said, there are also cases where a company legitimately cannot
+disclose its plans early in the development process. Companies with
+experienced kernel developers may choose to proceed in an open-loop manner
+on the assumption that they will be able to avoid serious integration
+problems later. For companies without that sort of in-house expertise, the
+best option is often to hire an outside developer to review the plans under
+a non-disclosure agreement. The Linux Foundation operates an NDA program
+designed to help with this sort of situation; more information can be found
+at:
+
+ http://www.linuxfoundation.org/en/NDA_program
+
+This kind of review is often enough to avoid serious problems later on
+without requiring public disclosure of the project.
diff --git a/Documentation/development-process/4.Coding b/Documentation/development-process/4.Coding
deleted file mode 100644
index 9a3ee77..0000000
--- a/Documentation/development-process/4.Coding
+++ /dev/null
@@ -1,399 +0,0 @@
-4: GETTING THE CODE RIGHT
-
-While there is much to be said for a solid and community-oriented design
-process, the proof of any kernel development project is in the resulting
-code. It is the code which will be examined by other developers and merged
-(or not) into the mainline tree. So it is the quality of this code which
-will determine the ultimate success of the project.
-
-This section will examine the coding process. We'll start with a look at a
-number of ways in which kernel developers can go wrong. Then the focus
-will shift toward doing things right and the tools which can help in that
-quest.
-
-
-4.1: PITFALLS
-
-* Coding style
-
-The kernel has long had a standard coding style, described in
-Documentation/CodingStyle. For much of that time, the policies described
-in that file were taken as being, at most, advisory. As a result, there is
-a substantial amount of code in the kernel which does not meet the coding
-style guidelines. The presence of that code leads to two independent
-hazards for kernel developers.
-
-The first of these is to believe that the kernel coding standards do not
-matter and are not enforced. The truth of the matter is that adding new
-code to the kernel is very difficult if that code is not coded according to
-the standard; many developers will request that the code be reformatted
-before they will even review it. A code base as large as the kernel
-requires some uniformity of code to make it possible for developers to
-quickly understand any part of it. So there is no longer room for
-strangely-formatted code.
-
-Occasionally, the kernel's coding style will run into conflict with an
-employer's mandated style. In such cases, the kernel's style will have to
-win before the code can be merged. Putting code into the kernel means
-giving up a degree of control in a number of ways - including control over
-how the code is formatted.
-
-The other trap is to assume that code which is already in the kernel is
-urgently in need of coding style fixes. Developers may start to generate
-reformatting patches as a way of gaining familiarity with the process, or
-as a way of getting their name into the kernel changelogs - or both. But
-pure coding style fixes are seen as noise by the development community;
-they tend to get a chilly reception. So this type of patch is best
-avoided. It is natural to fix the style of a piece of code while working
-on it for other reasons, but coding style changes should not be made for
-their own sake.
-
-The coding style document also should not be read as an absolute law which
-can never be transgressed. If there is a good reason to go against the
-style (a line which becomes far less readable if split to fit within the
-80-column limit, for example), just do it.
-
-
-* Abstraction layers
-
-Computer Science professors teach students to make extensive use of
-abstraction layers in the name of flexibility and information hiding.
-Certainly the kernel makes extensive use of abstraction; no project
-involving several million lines of code could do otherwise and survive.
-But experience has shown that excessive or premature abstraction can be
-just as harmful as premature optimization. Abstraction should be used to
-the level required and no further.
-
-At a simple level, consider a function which has an argument which is
-always passed as zero by all callers. One could retain that argument just
-in case somebody eventually needs to use the extra flexibility that it
-provides. By that time, though, chances are good that the code which
-implements this extra argument has been broken in some subtle way which was
-never noticed - because it has never been used. Or, when the need for
-extra flexibility arises, it does not do so in a way which matches the
-programmer's early expectation. Kernel developers will routinely submit
-patches to remove unused arguments; they should, in general, not be added
-in the first place.
-
-Abstraction layers which hide access to hardware - often to allow the bulk
-of a driver to be used with multiple operating systems - are especially
-frowned upon. Such layers obscure the code and may impose a performance
-penalty; they do not belong in the Linux kernel.
-
-On the other hand, if you find yourself copying significant amounts of code
-from another kernel subsystem, it is time to ask whether it would, in fact,
-make sense to pull out some of that code into a separate library or to
-implement that functionality at a higher level. There is no value in
-replicating the same code throughout the kernel.
-
-
-* #ifdef and preprocessor use in general
-
-The C preprocessor seems to present a powerful temptation to some C
-programmers, who see it as a way to efficiently encode a great deal of
-flexibility into a source file. But the preprocessor is not C, and heavy
-use of it results in code which is much harder for others to read and
-harder for the compiler to check for correctness. Heavy preprocessor use
-is almost always a sign of code which needs some cleanup work.
-
-Conditional compilation with #ifdef is, indeed, a powerful feature, and it
-is used within the kernel. But there is little desire to see code which is
-sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use
-should be confined to header files whenever possible.
-Conditionally-compiled code can be confined to functions which, if the code
-is not to be present, simply become empty. The compiler will then quietly
-optimize out the call to the empty function. The result is far cleaner
-code which is easier to follow.
-
-C preprocessor macros present a number of hazards, including possible
-multiple evaluation of expressions with side effects and no type safety.
-If you are tempted to define a macro, consider creating an inline function
-instead. The code which results will be the same, but inline functions are
-easier to read, do not evaluate their arguments multiple times, and allow
-the compiler to perform type checking on the arguments and return value.
-
-
-* Inline functions
-
-Inline functions present a hazard of their own, though. Programmers can
-become enamored of the perceived efficiency inherent in avoiding a function
-call and fill a source file with inline functions. Those functions,
-however, can actually reduce performance. Since their code is replicated
-at each call site, they end up bloating the size of the compiled kernel.
-That, in turn, creates pressure on the processor's memory caches, which can
-slow execution dramatically. Inline functions, as a rule, should be quite
-small and relatively rare. The cost of a function call, after all, is not
-that high; the creation of large numbers of inline functions is a classic
-example of premature optimization.
-
-In general, kernel programmers ignore cache effects at their peril. The
-classic time/space tradeoff taught in beginning data structures classes
-often does not apply to contemporary hardware. Space *is* time, in that a
-larger program will run slower than one which is more compact.
-
-More recent compilers take an increasingly active role in deciding whether
-a given function should actually be inlined or not. So the liberal
-placement of "inline" keywords may not just be excessive; it could also be
-irrelevant.
-
-
-* Locking
-
-In May, 2006, the "Devicescape" networking stack was, with great
-fanfare, released under the GPL and made available for inclusion in the
-mainline kernel. This donation was welcome news; support for wireless
-networking in Linux was considered substandard at best, and the Devicescape
-stack offered the promise of fixing that situation. Yet, this code did not
-actually make it into the mainline until June, 2007 (2.6.22). What
-happened?
-
-This code showed a number of signs of having been developed behind
-corporate doors. But one large problem in particular was that it was not
-designed to work on multiprocessor systems. Before this networking stack
-(now called mac80211) could be merged, a locking scheme needed to be
-retrofitted onto it.
-
-Once upon a time, Linux kernel code could be developed without thinking
-about the concurrency issues presented by multiprocessor systems. Now,
-however, this document is being written on a dual-core laptop. Even on
-single-processor systems, work being done to improve responsiveness will
-raise the level of concurrency within the kernel. The days when kernel
-code could be written without thinking about locking are long past.
-
-Any resource (data structures, hardware registers, etc.) which could be
-accessed concurrently by more than one thread must be protected by a lock.
-New code should be written with this requirement in mind; retrofitting
-locking after the fact is a rather more difficult task. Kernel developers
-should take the time to understand the available locking primitives well
-enough to pick the right tool for the job. Code which shows a lack of
-attention to concurrency will have a difficult path into the mainline.
-
-
-* Regressions
-
-One final hazard worth mentioning is this: it can be tempting to make a
-change (which may bring big improvements) which causes something to break
-for existing users. This kind of change is called a "regression," and
-regressions have become most unwelcome in the mainline kernel. With few
-exceptions, changes which cause regressions will be backed out if the
-regression cannot be fixed in a timely manner. Far better to avoid the
-regression in the first place.
-
-It is often argued that a regression can be justified if it causes things
-to work for more people than it creates problems for. Why not make a
-change if it brings new functionality to ten systems for each one it
-breaks? The best answer to this question was expressed by Linus in July,
-2007:
-
- So we don't fix bugs by introducing new problems. That way lies
- madness, and nobody ever knows if you actually make any real
- progress at all. Is it two steps forwards, one step back, or one
- step forward and two steps back?
-
-(http://lwn.net/Articles/243460/).
-
-An especially unwelcome type of regression is any sort of change to the
-user-space ABI. Once an interface has been exported to user space, it must
-be supported indefinitely. This fact makes the creation of user-space
-interfaces particularly challenging: since they cannot be changed in
-incompatible ways, they must be done right the first time. For this
-reason, a great deal of thought, clear documentation, and wide review for
-user-space interfaces is always required.
-
-
-
-4.2: CODE CHECKING TOOLS
-
-For now, at least, the writing of error-free code remains an ideal that few
-of us can reach. What we can hope to do, though, is to catch and fix as
-many of those errors as possible before our code goes into the mainline
-kernel. To that end, the kernel developers have put together an impressive
-array of tools which can catch a wide variety of obscure problems in an
-automated way. Any problem caught by the computer is a problem which will
-not afflict a user later on, so it stands to reason that the automated
-tools should be used whenever possible.
-
-The first step is simply to heed the warnings produced by the compiler.
-Contemporary versions of gcc can detect (and warn about) a large number of
-potential errors. Quite often, these warnings point to real problems.
-Code submitted for review should, as a rule, not produce any compiler
-warnings. When silencing warnings, take care to understand the real cause
-and try to avoid "fixes" which make the warning go away without addressing
-its cause.
-
-Note that not all compiler warnings are enabled by default. Build the
-kernel with "make EXTRA_CFLAGS=-W" to get the full set.
-
-The kernel provides several configuration options which turn on debugging
-features; most of these are found in the "kernel hacking" submenu. Several
-of these options should be turned on for any kernel used for development or
-testing purposes. In particular, you should turn on:
-
- - ENABLE_WARN_DEPRECATED, ENABLE_MUST_CHECK, and FRAME_WARN to get an
- extra set of warnings for problems like the use of deprecated interfaces
- or ignoring an important return value from a function. The output
- generated by these warnings can be verbose, but one need not worry about
- warnings from other parts of the kernel.
-
- - DEBUG_OBJECTS will add code to track the lifetime of various objects
- created by the kernel and warn when things are done out of order. If
- you are adding a subsystem which creates (and exports) complex objects
- of its own, consider adding support for the object debugging
- infrastructure.
-
- - DEBUG_SLAB can find a variety of memory allocation and use errors; it
- should be used on most development kernels.
-
- - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
- number of common locking errors.
-
-There are quite a few other debugging options, some of which will be
-discussed below. Some of them have a significant performance impact and
-should not be used all of the time. But some time spent learning the
-available options will likely be paid back many times over in short order.
-
-One of the heavier debugging tools is the locking checker, or "lockdep."
-This tool will track the acquisition and release of every lock (spinlock or
-mutex) in the system, the order in which locks are acquired relative to
-each other, the current interrupt environment, and more. It can then
-ensure that locks are always acquired in the same order, that the same
-interrupt assumptions apply in all situations, and so on. In other words,
-lockdep can find a number of scenarios in which the system could, on rare
-occasion, deadlock. This kind of problem can be painful (for both
-developers and users) in a deployed system; lockdep allows them to be found
-in an automated manner ahead of time. Code with any sort of non-trivial
-locking should be run with lockdep enabled before being submitted for
-inclusion.
-
-As a diligent kernel programmer, you will, beyond doubt, check the return
-status of any operation (such as a memory allocation) which can fail. The
-fact of the matter, though, is that the resulting failure recovery paths
-are, probably, completely untested. Untested code tends to be broken code;
-you could be much more confident of your code if all those error-handling
-paths had been exercised a few times.
-
-The kernel provides a fault injection framework which can do exactly that,
-especially where memory allocations are involved. With fault injection
-enabled, a configurable percentage of memory allocations will be made to
-fail; these failures can be restricted to a specific range of code.
-Running with fault injection enabled allows the programmer to see how the
-code responds when things go badly. See
-Documentation/fault-injection/fault-injection.txt for more information on
-how to use this facility.
-
-Other kinds of errors can be found with the "sparse" static analysis tool.
-With sparse, the programmer can be warned about confusion between
-user-space and kernel-space addresses, mixture of big-endian and
-small-endian quantities, the passing of integer values where a set of bit
-flags is expected, and so on. Sparse must be installed separately (it can
-be found at https://sparse.wiki.kernel.org/index.php/Main_Page if your
-distributor does not package it); it can then be run on the code by adding
-"C=1" to your make command.
-
-The "Coccinelle" tool (http://coccinelle.lip6.fr/) is able to find a wide
-variety of potential coding problems; it can also propose fixes for those
-problems. Quite a few "semantic patches" for the kernel have been packaged
-under the scripts/coccinelle directory; running "make coccicheck" will run
-through those semantic patches and report on any problems found. See
-Documentation/coccinelle.txt for more information.
-
-Other kinds of portability errors are best found by compiling your code for
-other architectures. If you do not happen to have an S/390 system or a
-Blackfin development board handy, you can still perform the compilation
-step. A large set of cross compilers for x86 systems can be found at
-
- http://www.kernel.org/pub/tools/crosstool/
-
-Some time spent installing and using these compilers will help avoid
-embarrassment later.
-
-
-4.3: DOCUMENTATION
-
-Documentation has often been more the exception than the rule with kernel
-development. Even so, adequate documentation will help to ease the merging
-of new code into the kernel, make life easier for other developers, and
-will be helpful for your users. In many cases, the addition of
-documentation has become essentially mandatory.
-
-The first piece of documentation for any patch is its associated
-changelog. Log entries should describe the problem being solved, the form
-of the solution, the people who worked on the patch, any relevant
-effects on performance, and anything else that might be needed to
-understand the patch. Be sure that the changelog says *why* the patch is
-worth applying; a surprising number of developers fail to provide that
-information.
-
-Any code which adds a new user-space interface - including new sysfs or
-/proc files - should include documentation of that interface which enables
-user-space developers to know what they are working with. See
-Documentation/ABI/README for a description of how this documentation should
-be formatted and what information needs to be provided.
-
-The file Documentation/kernel-parameters.txt describes all of the kernel's
-boot-time parameters. Any patch which adds new parameters should add the
-appropriate entries to this file.
-
-Any new configuration options must be accompanied by help text which
-clearly explains the options and when the user might want to select them.
-
-Internal API information for many subsystems is documented by way of
-specially-formatted comments; these comments can be extracted and formatted
-in a number of ways by the "kernel-doc" script. If you are working within
-a subsystem which has kerneldoc comments, you should maintain them and add
-them, as appropriate, for externally-available functions. Even in areas
-which have not been so documented, there is no harm in adding kerneldoc
-comments for the future; indeed, this can be a useful activity for
-beginning kernel developers. The format of these comments, along with some
-information on how to create kerneldoc templates can be found in the file
-Documentation/kernel-documentation.rst.
-
-Anybody who reads through a significant amount of existing kernel code will
-note that, often, comments are most notable by their absence. Once again,
-the expectations for new code are higher than they were in the past;
-merging uncommented code will be harder. That said, there is little desire
-for verbosely-commented code. The code should, itself, be readable, with
-comments explaining the more subtle aspects.
-
-Certain things should always be commented. Uses of memory barriers should
-be accompanied by a line explaining why the barrier is necessary. The
-locking rules for data structures generally need to be explained somewhere.
-Major data structures need comprehensive documentation in general.
-Non-obvious dependencies between separate bits of code should be pointed
-out. Anything which might tempt a code janitor to make an incorrect
-"cleanup" needs a comment saying why it is done the way it is. And so on.
-
-
-4.4: INTERNAL API CHANGES
-
-The binary interface provided by the kernel to user space cannot be broken
-except under the most severe circumstances. The kernel's internal
-programming interfaces, instead, are highly fluid and can be changed when
-the need arises. If you find yourself having to work around a kernel API,
-or simply not using a specific functionality because it does not meet your
-needs, that may be a sign that the API needs to change. As a kernel
-developer, you are empowered to make such changes.
-
-There are, of course, some catches. API changes can be made, but they need
-to be well justified. So any patch making an internal API change should be
-accompanied by a description of what the change is and why it is
-necessary. This kind of change should also be broken out into a separate
-patch, rather than buried within a larger patch.
-
-The other catch is that a developer who changes an internal API is
-generally charged with the task of fixing any code within the kernel tree
-which is broken by the change. For a widely-used function, this duty can
-lead to literally hundreds or thousands of changes - many of which are
-likely to conflict with work being done by other developers. Needless to
-say, this can be a large job, so it is best to be sure that the
-justification is solid. Note that the Coccinelle tool can help with
-wide-ranging API changes.
-
-When making an incompatible API change, one should, whenever possible,
-ensure that code which has not been updated is caught by the compiler.
-This will help you to be sure that you have found all in-tree uses of that
-interface. It will also alert developers of out-of-tree code that there is
-a change that they need to respond to. Supporting out-of-tree code is not
-something that kernel developers need to be worried about, but we also do
-not have to make life harder for out-of-tree developers than it needs to
-be.
diff --git a/Documentation/development-process/4.Coding.rst b/Documentation/development-process/4.Coding.rst
new file mode 100644
index 0000000..9d5cef9
--- /dev/null
+++ b/Documentation/development-process/4.Coding.rst
@@ -0,0 +1,413 @@
+.. _development_coding:
+
+Getting the code right
+======================
+
+While there is much to be said for a solid and community-oriented design
+process, the proof of any kernel development project is in the resulting
+code. It is the code which will be examined by other developers and merged
+(or not) into the mainline tree. So it is the quality of this code which
+will determine the ultimate success of the project.
+
+This section will examine the coding process. We'll start with a look at a
+number of ways in which kernel developers can go wrong. Then the focus
+will shift toward doing things right and the tools which can help in that
+quest.
+
+
+Pitfalls
+--------
+
+Coding style
+************
+
+The kernel has long had a standard coding style, described in
+Documentation/CodingStyle. For much of that time, the policies described
+in that file were taken as being, at most, advisory. As a result, there is
+a substantial amount of code in the kernel which does not meet the coding
+style guidelines. The presence of that code leads to two independent
+hazards for kernel developers.
+
+The first of these is to believe that the kernel coding standards do not
+matter and are not enforced. The truth of the matter is that adding new
+code to the kernel is very difficult if that code is not coded according to
+the standard; many developers will request that the code be reformatted
+before they will even review it. A code base as large as the kernel
+requires some uniformity of code to make it possible for developers to
+quickly understand any part of it. So there is no longer room for
+strangely-formatted code.
+
+Occasionally, the kernel's coding style will run into conflict with an
+employer's mandated style. In such cases, the kernel's style will have to
+win before the code can be merged. Putting code into the kernel means
+giving up a degree of control in a number of ways - including control over
+how the code is formatted.
+
+The other trap is to assume that code which is already in the kernel is
+urgently in need of coding style fixes. Developers may start to generate
+reformatting patches as a way of gaining familiarity with the process, or
+as a way of getting their name into the kernel changelogs - or both. But
+pure coding style fixes are seen as noise by the development community;
+they tend to get a chilly reception. So this type of patch is best
+avoided. It is natural to fix the style of a piece of code while working
+on it for other reasons, but coding style changes should not be made for
+their own sake.
+
+The coding style document also should not be read as an absolute law which
+can never be transgressed. If there is a good reason to go against the
+style (a line which becomes far less readable if split to fit within the
+80-column limit, for example), just do it.
+
+
+Abstraction layers
+******************
+
+Computer Science professors teach students to make extensive use of
+abstraction layers in the name of flexibility and information hiding.
+Certainly the kernel makes extensive use of abstraction; no project
+involving several million lines of code could do otherwise and survive.
+But experience has shown that excessive or premature abstraction can be
+just as harmful as premature optimization. Abstraction should be used to
+the level required and no further.
+
+At a simple level, consider a function which has an argument which is
+always passed as zero by all callers. One could retain that argument just
+in case somebody eventually needs to use the extra flexibility that it
+provides. By that time, though, chances are good that the code which
+implements this extra argument has been broken in some subtle way which was
+never noticed - because it has never been used. Or, when the need for
+extra flexibility arises, it does not do so in a way which matches the
+programmer's early expectation. Kernel developers will routinely submit
+patches to remove unused arguments; they should, in general, not be added
+in the first place.
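+
+As a purely hypothetical illustration (the function and argument names below
+are invented), the pattern usually looks like this::
+
+    /* "flags" is passed as zero by every caller and should not exist. */
+    int frob_start(struct frob_device *dev, unsigned int flags);
+
+    /* Better: drop the argument until somebody actually needs it. */
+    int frob_start(struct frob_device *dev);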
+
+Abstraction layers which hide access to hardware - often to allow the bulk
+of a driver to be used with multiple operating systems - are especially
+frowned upon. Such layers obscure the code and may impose a performance
+penalty; they do not belong in the Linux kernel.
+
+On the other hand, if you find yourself copying significant amounts of code
+from another kernel subsystem, it is time to ask whether it would, in fact,
+make sense to pull out some of that code into a separate library or to
+implement that functionality at a higher level. There is no value in
+replicating the same code throughout the kernel.
+
+
+#ifdef and preprocessor use in general
+**************************************
+
+The C preprocessor seems to present a powerful temptation to some C
+programmers, who see it as a way to efficiently encode a great deal of
+flexibility into a source file. But the preprocessor is not C, and heavy
+use of it results in code which is much harder for others to read and
+harder for the compiler to check for correctness. Heavy preprocessor use
+is almost always a sign of code which needs some cleanup work.
+
+Conditional compilation with #ifdef is, indeed, a powerful feature, and it
+is used within the kernel. But there is little desire to see code which is
+sprinkled liberally with #ifdef blocks. As a general rule, #ifdef use
+should be confined to header files whenever possible.
+Conditionally-compiled code can be confined to functions which, if the code
+is not to be present, simply become empty. The compiler will then quietly
+optimize out the call to the empty function. The result is far cleaner
+code which is easier to follow.
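+
+A minimal sketch of that pattern (the configuration option and the names here
+are invented for illustration)::
+
+    /* In a header file: */
+    #ifdef CONFIG_FROBNICATE
+    void frobnicate_device(struct frob_device *dev);
+    #else
+    static inline void frobnicate_device(struct frob_device *dev) { }
+    #endif
+
+    /*
+     * Callers simply call frobnicate_device() unconditionally; when
+     * CONFIG_FROBNICATE is not set, the empty stub is optimized away.
+     */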
+
+C preprocessor macros present a number of hazards, including possible
+multiple evaluation of expressions with side effects and no type safety.
+If you are tempted to define a macro, consider creating an inline function
+instead. The code which results will be the same, but inline functions are
+easier to read, do not evaluate their arguments multiple times, and allow
+the compiler to perform type checking on the arguments and return value.
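+
+For example (illustrative only), the classic multiple-evaluation hazard::
+
+    #define SQUARE(x)  ((x) * (x))     /* evaluates its argument twice */
+
+    static inline int square(int x)    /* evaluates once, type-checked */
+    {
+            return x * x;
+    }
+
+    /* SQUARE(i++) increments i twice; square(i++) does not. */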
+
+
+Inline functions
+****************
+
+Inline functions present a hazard of their own, though. Programmers can
+become enamored of the perceived efficiency inherent in avoiding a function
+call and fill a source file with inline functions. Those functions,
+however, can actually reduce performance. Since their code is replicated
+at each call site, they end up bloating the size of the compiled kernel.
+That, in turn, creates pressure on the processor's memory caches, which can
+slow execution dramatically. Inline functions, as a rule, should be quite
+small and relatively rare. The cost of a function call, after all, is not
+that high; the creation of large numbers of inline functions is a classic
+example of premature optimization.
+
+In general, kernel programmers ignore cache effects at their peril. The
+classic time/space tradeoff taught in beginning data structures classes
+often does not apply to contemporary hardware. Space *is* time, in that a
+larger program will run slower than one which is more compact.
+
+More recent compilers take an increasingly active role in deciding whether
+a given function should actually be inlined or not. So the liberal
+placement of "inline" keywords may not just be excessive; it could also be
+irrelevant.
+
+
+Locking
+*******
+
+In May, 2006, the "Devicescape" networking stack was, with great
+fanfare, released under the GPL and made available for inclusion in the
+mainline kernel. This donation was welcome news; support for wireless
+networking in Linux was considered substandard at best, and the Devicescape
+stack offered the promise of fixing that situation. Yet, this code did not
+actually make it into the mainline until June, 2007 (2.6.22). What
+happened?
+
+This code showed a number of signs of having been developed behind
+corporate doors. But one large problem in particular was that it was not
+designed to work on multiprocessor systems. Before this networking stack
+(now called mac80211) could be merged, a locking scheme needed to be
+retrofitted onto it.
+
+Once upon a time, Linux kernel code could be developed without thinking
+about the concurrency issues presented by multiprocessor systems. Now,
+however, this document is being written on a dual-core laptop. Even on
+single-processor systems, work being done to improve responsiveness will
+raise the level of concurrency within the kernel. The days when kernel
+code could be written without thinking about locking are long past.
+
+Any resource (data structures, hardware registers, etc.) which could be
+accessed concurrently by more than one thread must be protected by a lock.
+New code should be written with this requirement in mind; retrofitting
+locking after the fact is a rather more difficult task. Kernel developers
+should take the time to understand the available locking primitives well
+enough to pick the right tool for the job. Code which shows a lack of
+attention to concurrency will have a difficult path into the mainline.
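+
+A minimal sketch of the usual pattern, protecting a counter which may be
+updated from more than one context (the names are invented)::
+
+    static DEFINE_SPINLOCK(frob_lock);
+    static unsigned long frob_count;   /* protected by frob_lock */
+
+    void frob_account(unsigned long n)
+    {
+            unsigned long flags;
+
+            spin_lock_irqsave(&frob_lock, flags);
+            frob_count += n;
+            spin_unlock_irqrestore(&frob_lock, flags);
+    }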
+
+
+Regressions
+***********
+
+One final hazard worth mentioning is this: it can be tempting to make a
+change (which may bring big improvements) which causes something to break
+for existing users. This kind of change is called a "regression," and
+regressions have become most unwelcome in the mainline kernel. With few
+exceptions, changes which cause regressions will be backed out if the
+regression cannot be fixed in a timely manner. Far better to avoid the
+regression in the first place.
+
+It is often argued that a regression can be justified if it causes things
+to work for more people than it creates problems for. Why not make a
+change if it brings new functionality to ten systems for each one it
+breaks? The best answer to this question was expressed by Linus in July,
+2007:
+
+::
+
+ So we don't fix bugs by introducing new problems. That way lies
+ madness, and nobody ever knows if you actually make any real
+ progress at all. Is it two steps forwards, one step back, or one
+ step forward and two steps back?
+
+(http://lwn.net/Articles/243460/).
+
+An especially unwelcome type of regression is any sort of change to the
+user-space ABI. Once an interface has been exported to user space, it must
+be supported indefinitely. This fact makes the creation of user-space
+interfaces particularly challenging: since they cannot be changed in
+incompatible ways, they must be done right the first time. For this
+reason, a great deal of thought, clear documentation, and wide review for
+user-space interfaces is always required.
+
+
+Code checking tools
+-------------------
+
+For now, at least, the writing of error-free code remains an ideal that few
+of us can reach. What we can hope to do, though, is to catch and fix as
+many of those errors as possible before our code goes into the mainline
+kernel. To that end, the kernel developers have put together an impressive
+array of tools which can catch a wide variety of obscure problems in an
+automated way. Any problem caught by the computer is a problem which will
+not afflict a user later on, so it stands to reason that the automated
+tools should be used whenever possible.
+
+The first step is simply to heed the warnings produced by the compiler.
+Contemporary versions of gcc can detect (and warn about) a large number of
+potential errors. Quite often, these warnings point to real problems.
+Code submitted for review should, as a rule, not produce any compiler
+warnings. When silencing warnings, take care to understand the real cause
+and try to avoid "fixes" which make the warning go away without addressing
+its cause.
+
+Note that not all compiler warnings are enabled by default. Build the
+kernel with "make EXTRA_CFLAGS=-W" to get the full set.
+
+The kernel provides several configuration options which turn on debugging
+features; most of these are found in the "kernel hacking" submenu. Several
+of these options should be turned on for any kernel used for development or
+testing purposes. In particular, you should turn on:
+
+ - ENABLE_WARN_DEPRECATED, ENABLE_MUST_CHECK, and FRAME_WARN to get an
+ extra set of warnings for problems like the use of deprecated interfaces
+ or ignoring an important return value from a function. The output
+ generated by these warnings can be verbose, but one need not worry about
+ warnings from other parts of the kernel.
+
+ - DEBUG_OBJECTS will add code to track the lifetime of various objects
+ created by the kernel and warn when things are done out of order. If
+ you are adding a subsystem which creates (and exports) complex objects
+ of its own, consider adding support for the object debugging
+ infrastructure.
+
+ - DEBUG_SLAB can find a variety of memory allocation and use errors; it
+ should be used on most development kernels.
+
+ - DEBUG_SPINLOCK, DEBUG_ATOMIC_SLEEP, and DEBUG_MUTEXES will find a
+ number of common locking errors.
+
+There are quite a few other debugging options, some of which will be
+discussed below. Some of them have a significant performance impact and
+should not be used all of the time. But some time spent learning the
+available options will likely be paid back many times over in short order.
+
+One of the heavier debugging tools is the locking checker, or "lockdep."
+This tool will track the acquisition and release of every lock (spinlock or
+mutex) in the system, the order in which locks are acquired relative to
+each other, the current interrupt environment, and more. It can then
+ensure that locks are always acquired in the same order, that the same
+interrupt assumptions apply in all situations, and so on. In other words,
+lockdep can find a number of scenarios in which the system could, on rare
+occasion, deadlock. This kind of problem can be painful (for both
+developers and users) in a deployed system; lockdep allows them to be found
+in an automated manner ahead of time. Code with any sort of non-trivial
+locking should be run with lockdep enabled before being submitted for
+inclusion.
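+
+As a hypothetical example, lockdep will flag the classic "ABBA" pattern even
+if the two paths have never actually raced at run time::
+
+    /* One code path: */
+    spin_lock(&lock_a);
+    spin_lock(&lock_b);
+
+    /* Another code path, elsewhere: */
+    spin_lock(&lock_b);
+    spin_lock(&lock_a);        /* lockdep reports the inverted ordering */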
+
+As a diligent kernel programmer, you will, beyond doubt, check the return
+status of any operation (such as a memory allocation) which can fail. The
+fact of the matter, though, is that the resulting failure recovery paths
+are, probably, completely untested. Untested code tends to be broken code;
+you could be much more confident of your code if all those error-handling
+paths had been exercised a few times.
+
+The kernel provides a fault injection framework which can do exactly that,
+especially where memory allocations are involved. With fault injection
+enabled, a configurable percentage of memory allocations will be made to
+fail; these failures can be restricted to a specific range of code.
+Running with fault injection enabled allows the programmer to see how the
+code responds when things go badly. See
+Documentation/fault-injection/fault-injection.txt for more information on
+how to use this facility.
+
+Other kinds of errors can be found with the "sparse" static analysis tool.
+With sparse, the programmer can be warned about confusion between
+user-space and kernel-space addresses, mixture of big-endian and
+little-endian quantities, the passing of integer values where a set of bit
+flags is expected, and so on. Sparse must be installed separately (it can
+be found at https://sparse.wiki.kernel.org/index.php/Main_Page if your
+distributor does not package it); it can then be run on the code by adding
+"C=1" to your make command.
+
+The "Coccinelle" tool (http://coccinelle.lip6.fr/) is able to find a wide
+variety of potential coding problems; it can also propose fixes for those
+problems. Quite a few "semantic patches" for the kernel have been packaged
+under the scripts/coccinelle directory; running "make coccicheck" will run
+through those semantic patches and report on any problems found. See
+Documentation/coccinelle.txt for more information.
+
+Other kinds of portability errors are best found by compiling your code for
+other architectures. If you do not happen to have an S/390 system or a
+Blackfin development board handy, you can still perform the compilation
+step. A large set of cross compilers for x86 systems can be found at
+
+ http://www.kernel.org/pub/tools/crosstool/
+
+Some time spent installing and using these compilers will help avoid
+embarrassment later.
+
+
+Documentation
+-------------
+
+Documentation has often been more the exception than the rule with kernel
+development. Even so, adequate documentation will help to ease the merging
+of new code into the kernel, make life easier for other developers, and
+will be helpful for your users. In many cases, the addition of
+documentation has become essentially mandatory.
+
+The first piece of documentation for any patch is its associated
+changelog. Log entries should describe the problem being solved, the form
+of the solution, the people who worked on the patch, any relevant
+effects on performance, and anything else that might be needed to
+understand the patch. Be sure that the changelog says *why* the patch is
+worth applying; a surprising number of developers fail to provide that
+information.
+
+Any code which adds a new user-space interface - including new sysfs or
+/proc files - should include documentation of that interface which enables
+user-space developers to know what they are working with. See
+Documentation/ABI/README for a description of how this documentation should
+be formatted and what information needs to be provided.
+
+The file Documentation/kernel-parameters.txt describes all of the kernel's
+boot-time parameters. Any patch which adds new parameters should add the
+appropriate entries to this file.
+
+Any new configuration options must be accompanied by help text which
+clearly explains the options and when the user might want to select them.
+
+Internal API information for many subsystems is documented by way of
+specially-formatted comments; these comments can be extracted and formatted
+in a number of ways by the "kernel-doc" script. If you are working within
+a subsystem which has kerneldoc comments, you should maintain them and add
+them, as appropriate, for externally-available functions. Even in areas
+which have not been so documented, there is no harm in adding kerneldoc
+comments for the future; indeed, this can be a useful activity for
+beginning kernel developers. The format of these comments, along with some
+information on how to create kerneldoc templates, can be found in the file
+Documentation/kernel-documentation.rst.
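+
+A typical kerneldoc comment looks like the following (the function itself is
+hypothetical)::
+
+    /**
+     * frob_queue_work() - queue a frobnication request for a device
+     * @dev: the device to be frobnicated
+     * @flags: FROB_* flags controlling the operation
+     *
+     * Queue the request on @dev and wake the associated worker thread.
+     *
+     * Return: zero on success or a negative error code on failure.
+     */
+    int frob_queue_work(struct frob_device *dev, unsigned int flags);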
+
+Anybody who reads through a significant amount of existing kernel code will
+note that, often, comments are most notable by their absence. Once again,
+the expectations for new code are higher than they were in the past;
+merging uncommented code will be harder. That said, there is little desire
+for verbosely-commented code. The code should, itself, be readable, with
+comments explaining the more subtle aspects.
+
+Certain things should always be commented. Uses of memory barriers should
+be accompanied by a line explaining why the barrier is necessary. The
+locking rules for data structures generally need to be explained somewhere.
+Major data structures need comprehensive documentation in general.
+Non-obvious dependencies between separate bits of code should be pointed
+out. Anything which might tempt a code janitor to make an incorrect
+"cleanup" needs a comment saying why it is done the way it is. And so on.
+
+
+Internal API changes
+--------------------
+
+The binary interface provided by the kernel to user space cannot be broken
+except under the most severe circumstances. The kernel's internal
+programming interfaces, instead, are highly fluid and can be changed when
+the need arises. If you find yourself having to work around a kernel API,
+or simply not using a specific functionality because it does not meet your
+needs, that may be a sign that the API needs to change. As a kernel
+developer, you are empowered to make such changes.
+
+There are, of course, some catches. API changes can be made, but they need
+to be well justified. So any patch making an internal API change should be
+accompanied by a description of what the change is and why it is
+necessary. This kind of change should also be broken out into a separate
+patch, rather than buried within a larger patch.
+
+The other catch is that a developer who changes an internal API is
+generally charged with the task of fixing any code within the kernel tree
+which is broken by the change. For a widely-used function, this duty can
+lead to literally hundreds or thousands of changes - many of which are
+likely to conflict with work being done by other developers. Needless to
+say, this can be a large job, so it is best to be sure that the
+justification is solid. Note that the Coccinelle tool can help with
+wide-ranging API changes.
+
+When making an incompatible API change, one should, whenever possible,
+ensure that code which has not been updated is caught by the compiler.
+This will help you to be sure that you have found all in-tree uses of that
+interface. It will also alert developers of out-of-tree code that there is
+a change that they need to respond to. Supporting out-of-tree code is not
+something that kernel developers need to be worried about, but we also do
+not have to make life harder for out-of-tree developers than it needs to
+be.
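+
+One common way to do that is to change the function's name or signature so
+that unconverted callers fail to build; a hypothetical sketch::
+
+    /* Before the change: */
+    int frob_register(struct frob_device *dev);
+
+    /*
+     * After: the new argument means that any caller which has not been
+     * updated breaks at compile time rather than misbehaving at run time.
+     */
+    int frob_register(struct frob_device *dev, const struct frob_ops *ops);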
diff --git a/Documentation/development-process/5.Posting b/Documentation/development-process/5.Posting
deleted file mode 100644
index 8a48c9b..0000000
--- a/Documentation/development-process/5.Posting
+++ /dev/null
@@ -1,307 +0,0 @@
-5: POSTING PATCHES
-
-Sooner or later, the time comes when your work is ready to be presented to
-the community for review and, eventually, inclusion into the mainline
-kernel. Unsurprisingly, the kernel development community has evolved a set
-of conventions and procedures which are used in the posting of patches;
-following them will make life much easier for everybody involved. This
-document will attempt to cover these expectations in reasonable detail;
-more information can also be found in the files SubmittingPatches,
-SubmittingDrivers, and SubmitChecklist in the kernel documentation
-directory.
-
-
-5.1: WHEN TO POST
-
-There is a constant temptation to avoid posting patches before they are
-completely "ready." For simple patches, that is not a problem. If the
-work being done is complex, though, there is a lot to be gained by getting
-feedback from the community before the work is complete. So you should
-consider posting in-progress work, or even making a git tree available so
-that interested developers can catch up with your work at any time.
-
-When posting code which is not yet considered ready for inclusion, it is a
-good idea to say so in the posting itself. Also mention any major work
-which remains to be done and any known problems. Fewer people will look at
-patches which are known to be half-baked, but those who do will come in
-with the idea that they can help you drive the work in the right direction.
-
-
-5.2: BEFORE CREATING PATCHES
-
-There are a number of things which should be done before you consider
-sending patches to the development community. These include:
-
- - Test the code to the extent that you can. Make use of the kernel's
- debugging tools, ensure that the kernel will build with all reasonable
- combinations of configuration options, use cross-compilers to build for
- different architectures, etc.
-
- - Make sure your code is compliant with the kernel coding style
- guidelines.
-
- - Does your change have performance implications? If so, you should run
- benchmarks showing what the impact (or benefit) of your change is; a
- summary of the results should be included with the patch.
-
- - Be sure that you have the right to post the code. If this work was done
- for an employer, the employer likely has a right to the work and must be
- agreeable with its release under the GPL.
-
-As a general rule, putting in some extra thought before posting code almost
-always pays back the effort in short order.
-
-
-5.3: PATCH PREPARATION
-
-The preparation of patches for posting can be a surprising amount of work,
-but, once again, attempting to save time here is not generally advisable
-even in the short term.
-
-Patches must be prepared against a specific version of the kernel. As a
-general rule, a patch should be based on the current mainline as found in
-Linus's git tree. When basing on mainline, start with a well-known release
-point - a stable or -rc release - rather than branching off the mainline at
-an arbitrary spot.
-
-It may become necessary to make versions against -mm, linux-next, or a
-subsystem tree, though, to facilitate wider testing and review. Depending
-on the area of your patch and what is going on elsewhere, basing a patch
-against these other trees can require a significant amount of work
-resolving conflicts and dealing with API changes.
-
-Only the most simple changes should be formatted as a single patch;
-everything else should be made as a logical series of changes. Splitting
-up patches is a bit of an art; some developers spend a long time figuring
-out how to do it in the way that the community expects. There are a few
-rules of thumb, however, which can help considerably:
-
- - The patch series you post will almost certainly not be the series of
- changes found in your working revision control system. Instead, the
- changes you have made need to be considered in their final form, then
- split apart in ways which make sense. The developers are interested in
- discrete, self-contained changes, not the path you took to get to those
- changes.
-
- - Each logically independent change should be formatted as a separate
- patch. These changes can be small ("add a field to this structure") or
- large (adding a significant new driver, for example), but they should be
- conceptually small and amenable to a one-line description. Each patch
- should make a specific change which can be reviewed on its own and
- verified to do what it says it does.
-
- - As a way of restating the guideline above: do not mix different types of
- changes in the same patch. If a single patch fixes a critical security
- bug, rearranges a few structures, and reformats the code, there is a
- good chance that it will be passed over and the important fix will be
- lost.
-
- - Each patch should yield a kernel which builds and runs properly; if your
- patch series is interrupted in the middle, the result should still be a
- working kernel. Partial application of a patch series is a common
- scenario when the "git bisect" tool is used to find regressions; if the
- result is a broken kernel, you will make life harder for developers and
- users who are engaging in the noble work of tracking down problems.
-
- - Do not overdo it, though. One developer once posted a set of edits
- to a single file as 500 separate patches - an act which did not make him
- the most popular person on the kernel mailing list. A single patch can
- be reasonably large as long as it still contains a single *logical*
- change.
-
- - It can be tempting to add a whole new infrastructure with a series of
- patches, but to leave that infrastructure unused until the final patch
- in the series enables the whole thing. This temptation should be
- avoided if possible; if that series adds regressions, bisection will
- finger the last patch as the one which caused the problem, even though
- the real bug is elsewhere. Whenever possible, a patch which adds new
- code should make that code active immediately.
-
-Working to create the perfect patch series can be a frustrating process
-which takes quite a bit of time and thought after the "real work" has been
-done. When done properly, though, it is time well spent.
-
-
-5.4: PATCH FORMATTING AND CHANGELOGS
-
-So now you have a perfect series of patches for posting, but the work is
-not done quite yet. Each patch needs to be formatted into a message which
-quickly and clearly communicates its purpose to the rest of the world. To
-that end, each patch will be composed of the following:
-
- - An optional "From" line naming the author of the patch. This line is
- only necessary if you are passing on somebody else's patch via email,
- but it never hurts to add it when in doubt.
-
- - A one-line description of what the patch does. This message should be
- enough for a reader who sees it with no other context to figure out the
- scope of the patch; it is the line that will show up in the "short form"
- changelogs. This message is usually formatted with the relevant
- subsystem name first, followed by the purpose of the patch. For
- example:
-
- gpio: fix build on CONFIG_GPIO_SYSFS=n
-
- - A blank line followed by a detailed description of the contents of the
- patch. This description can be as long as is required; it should say
- what the patch does and why it should be applied to the kernel.
-
- - One or more tag lines, with, at a minimum, one Signed-off-by: line from
- the author of the patch. Tags will be described in more detail below.
-
-The items above, together, form the changelog for the patch. Writing good
-changelogs is a crucial but often-neglected art; it's worth spending
-another moment discussing this issue. When writing a changelog, you should
-bear in mind that a number of different people will be reading your words.
-These include subsystem maintainers and reviewers who need to decide
-whether the patch should be included, distributors and other maintainers
-trying to decide whether a patch should be backported to other kernels, bug
-hunters wondering whether the patch is responsible for a problem they are
-chasing, users who want to know how the kernel has changed, and more. A
-good changelog conveys the needed information to all of these people in the
-most direct and concise way possible.
-
-To that end, the summary line should describe the effects of and motivation
-for the change as well as possible given the one-line constraint. The
-detailed description can then amplify on those topics and provide any
-needed additional information. If the patch fixes a bug, cite the commit
-which introduced the bug if possible (and please provide both the commit ID
-and the title when citing commits). If a problem is associated with
-specific log or compiler output, include that output to help others
-searching for a solution to the same problem. If the change is meant to
-support other changes coming in a later patch, say so. If internal APIs are
-changed, detail those changes and how other developers should respond. In
-general, the more you can put yourself into the shoes of everybody who will
-be reading your changelog, the better that changelog (and the kernel as a
-whole) will be.
-
-Needless to say, the changelog should be the text used when committing the
-change to a revision control system. It will be followed by:
-
- - The patch itself, in the unified ("-u") patch format. Using the "-p"
- option to diff will associate function names with changes, making the
- resulting patch easier for others to read.
-
-You should avoid including changes to irrelevant files (those generated by
-the build process, for example, or editor backup files) in the patch. The
-file "dontdiff" in the Documentation directory can help in this regard;
-pass it to diff with the "-X" option.
-
-The tags mentioned above are used to describe how various developers have
-been associated with the development of this patch. They are described in
-detail in the SubmittingPatches document; what follows here is a brief
-summary. Each of these lines has the format:
-
- tag: Full Name <email address> optional-other-stuff
-
-The tags in common use are:
-
- - Signed-off-by: this is a developer's certification that he or she has
- the right to submit the patch for inclusion into the kernel. It is an
- agreement to the Developer's Certificate of Origin, the full text of
- which can be found in Documentation/SubmittingPatches. Code without a
- proper signoff cannot be merged into the mainline.
-
- - Acked-by: indicates an agreement by another developer (often a
- maintainer of the relevant code) that the patch is appropriate for
- inclusion into the kernel.
-
- - Tested-by: states that the named person has tested the patch and found
- it to work.
-
- - Reviewed-by: the named developer has reviewed the patch for correctness;
- see the reviewer's statement in Documentation/SubmittingPatches for more
- detail.
-
- - Reported-by: names a user who reported a problem which is fixed by this
- patch; this tag is used to give credit to the (often underappreciated)
- people who test our code and let us know when things do not work
- correctly.
-
- - Cc: the named person received a copy of the patch and had the
- opportunity to comment on it.
-
-Be careful in the addition of tags to your patches: only Cc: is appropriate
-for addition without the explicit permission of the person named.
-
-
-5.5: SENDING THE PATCH
-
-Before you mail your patches, there are a couple of other things you should
-take care of:
-
- - Are you sure that your mailer will not corrupt the patches? Patches
- which have had gratuitous white-space changes or line wrapping performed
- by the mail client will not apply at the other end, and often will not
- be examined in any detail. If there is any doubt at all, mail the patch
- to yourself and convince yourself that it shows up intact.
-
- Documentation/email-clients.txt has some helpful hints on making
- specific mail clients work for sending patches.
-
- - Are you sure your patch is free of silly mistakes? You should always
- run patches through scripts/checkpatch.pl and address the complaints it
- comes up with. Please bear in mind that checkpatch.pl, while being the
- embodiment of a fair amount of thought about what kernel patches should
- look like, is not smarter than you. If fixing a checkpatch.pl complaint
- would make the code worse, don't do it.
-
-Patches should always be sent as plain text. Please do not send them as
-attachments; that makes it much harder for reviewers to quote sections of
-the patch in their replies. Instead, just put the patch directly into your
-message.
-
-When mailing patches, it is important to send copies to anybody who might
-be interested in it. Unlike some other projects, the kernel encourages
-peop