import gettext
_ = gettext.gettext
# gettext_noop
def N_(s):
return s
KTHREAD_HELP = {
'kthreadd':N_('Used to create kernel threads via kthread_create(). It is the parent of all the other kernel threads.'),
'posix_cpu_timer':N_('Per-cpu thread that handles POSIX timer callbacks. Timer callbacks are bound to a cpu and are handled by these threads as per-cpu data.'),
'kondemand/':N_('This cpufreq workqueue runs periodically to sample the idleness of the system, increasing or reducing the CPU frequency to save power. \n[One per CPU]'),
'sirq-rcu/':N_('Pushes the RCU grace period along (if possible) and will handle dereferenced RCU callbacks, such as freeing structures after a grace period. \n[One per CPU]'),
'khelper':N_('Used to call user mode helpers from the kernel, such as /sbin/bridge-stp, ocfs2_hb_ctl, pnpbios, poweroff, request-key, etc.'),
'group_balance':N_('Scheduler load balance monitoring.'),
'kjournald':N_('Main thread function used to manage a filesystem logging device journal. This kernel thread is responsible for two things: <b>COMMIT</b>: Every so often we need to commit the current state of the filesystem to disk. The journal thread is responsible for writing all of the metadata buffers to disk. <b>CHECKPOINT</b>: We cannot reuse a used section of the log file until all of the data in that part of the log has been rewritten elsewhere on the disk. Flushing these old buffers to reclaim space in the log is known as checkpointing, and this thread is responsible for that job.'),
'lockd':N_('Locking arbiter for NFS on the system'),
'sirq-sched/':N_('Triggered when a rebalance of tasks across CPU domains is needed. This handles balancing of SCHED_OTHER tasks across CPUs. RT task balancing is done directly in the schedule and wakeup paths. Runs at prio 1 because it needs to schedule above all SCHED_OTHER tasks. If the user has the same issue but doesn\'t mind having latencies against other kernel threads that run here, then it\'s fine. But it should definitely be documented that PRIO 1 has other threads on it at boot up. \n[One per CPU]'),
'sirq-high/':N_('This is from a poor attempt to prioritize tasklets. Some tasklets wanted to run before anything else. Thus there were two tasklet softirqs made. tasklet_vec and tasklet_hi_vec. A driver writer could put their "critical" tasklets into the tasklet_hi_vec and it would run before other softirqs. This never really worked as intended. \n[One per CPU]'),
'kblockd/':N_('Workqueue used to process IO requests. Used by IO schedulers and block device drivers. \n[One per CPU]'),
'sirq-net-rx/':N_('When receiving a packet the device will place the packet on a queue with its hard interrupt (threaded in RT). The sirq-net-rx is responsible for finding out what to do with the packet. It may forward it to another box if the current box is used as a router, or it will find the task the packet is for. If that task is currently waiting for the packet, the softirq might hand it off to that task and the task will handle the rest of the processing of the packet. \n[One per CPU]'),
'krcupreemptd':N_('This should run at the lowest RT priority. With preemptible RCU, a loaded system may have tasks that hold RCU locks but have a high nice value. These tasks may be pushed off for seconds, and if the system is tight on memory, the RCU deferred freeing may not occur. The result can be drastic. The krcupreemptd is a daemon that runs just above SCHED_OTHER and wakes up once a second and performs a synchronize RCU. With RCU boosting, all those that hold RCU locks will inherit the priority of the krcupreemptd and wake up and release the RCU locks. This is only a concern for loaded systems and SCHED_OTHER tasks. If there is an issue of RT tasks starving out SCHED_OTHER tasks and causing problems with freeing memory, then the RT tasks are designed badly.'),
'ksoftirqd/':N_('Activated when under heavy networking activity. Used to avoid monopolizing the CPUs doing just software interrupt processing. \n[One per CPU]'),
'sirq-timer/':N_('Basically the timer wheel. Timeouts that are added to the timer wheel will be handled by this softirq. Parts of the kernel that need timeouts will use this softirq (i.e. network timeouts). The resolution of these timeouts is defined by the HZ value. \n[One per CPU]'),
'events/':N_('Global workqueue, used to schedule work to be done in process context. \n[One per CPU]'),
'watchdog/':N_('Run briefly once per second to reset the softlockup timestamp. If this gets delayed for more than 60 seconds then a message will be printed. Use /proc/sys/kernel/hung_task_timeout_secs and /proc/sys/kernel/hung_task_check_count to control this behaviour. Setting /proc/sys/kernel/hung_task_timeout_secs to zero will disable this check. \n[One per CPU]'),
'sirq-net-tx/':N_('This is the network transmit queue. Most of the time the network packets will be handled by the task that is sending the packets out, and doing so at the priority of that task. But if the protocol window or the network device queue is full, then the packets will be pushed off to later. The sirq-net-tx softirq is responsible for sending out these packets. \n[One per CPU]'),
'sirq-block/':N_('Called after a completion to a block device is made. Looking further into this call, I only see a couple of users. The SCSI driver uses this as well as cciss. \n[One per CPU]'),
'sirq-tasklet/':N_('Catch-all for those devices that couldn\'t use softirqs directly, mostly made before work queues were around. The difference between a tasklet and a softirq is that the same tasklet can not run on two different CPUs at the same time. In this regard it acts like a "task" (hence the name "tasklet"). Various devices use tasklets. \n[One per CPU]'),
'usb-storage':N_('Per USB storage device virtual SCSI controller. Persistent across device insertion/removal, as is the SCSI node. This is done so that a device which is removed can be re-attached and be granted the same /dev node as before, creating persistence between connections of the target unit. Gets commands from the SCSI mid-layer and, after sanity checking several things, sends the command to the "protocol" handler. This handler is responsible for re-writing the command (if necessary) into a form which the device will accept. For example, ATAPI devices do not support 6-byte commands. Thus, they must be re-written into 10-byte variants.'),
'migration/':N_('High priority system thread that performs thread migration by bumping thread off CPU then pushing onto another runqueue. \n[One per CPU]'),
'rpciod/':N_('Handles Sun RPC network messages (mainly for NFS) \n[One per CPU]')
}
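# The keys above are thread-name prefixes (per-CPU threads such as
# 'kondemand/0' end in '/'), so a caller has to match by prefix rather than by
# exact name. A minimal lookup sketch; the function name and the fallback
# string are illustrative assumptions, not part of the original module:
def kthread_help(comm):
    """Return the help text whose key is a prefix of the thread name comm."""
    for prefix, text in KTHREAD_HELP.items():
        if comm.startswith(prefix):
            return _(text)
    return _('No help available for this kernel thread.')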
PROC_SYS_HELP = {
'net.core.bpf_jit_enable':N_('This enables the Berkeley Packet Filter Just in Time compiler. Currently supported on the x86_64 architecture, bpf_jit provides a framework to speed up packet filtering, the one used by tcpdump/libpcap for example.\nValues:\n - 0 - disable the JIT (default value)\n - 1 - enable the JIT\n - 2 - enable the JIT and ask the compiler to emit traces on kernel log.'),
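# For illustration only (an assumption, not part of this table): a sysctl key
# such as 'net.core.bpf_jit_enable' maps to a file under /proc/sys, so it can
# be inspected or set from Python roughly like this (writing needs root):
#     path = '/proc/sys/' + 'net.core.bpf_jit_enable'.replace('.', '/')
#     with open(path) as f:
#         print(f.read().strip())      # current value
#     with open(path, 'w') as f:
#         f.write('1')                 # enable the JIT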
'net.core.dev_weight':N_('The maximum number of packets that the kernel can handle on a NAPI interrupt; it is a per-CPU variable.\nDefault: 64'),
'net.core.message_burst':N_('This parameter is used to limit the warning messages written to the kernel log from the networking code. It enforces a rate limit to make a denial-of-service attack impossible. message_burst controls when messages will be dropped. The default settings limit warning messages to one every five seconds.'),
'net.core.message_cost':N_('This parameter is used to limit the warning messages written to the kernel log from the networking code. It enforces a rate limit to make a denial-of-service attack impossible. A higher message_cost factor results in fewer messages being written. The default settings limit warning messages to one every five seconds.'),
'net.core.netdev_budget':N_('Maximum number of packets taken from all interfaces in one polling cycle (NAPI poll). In one polling cycle interfaces which are registered to polling are probed in a round-robin manner. The limit of packets in one such probe can be set per-device via sysfs class/net/<device>/weight.'),
'net.core.netdev_max_backlog':N_('Maximum number of packets, queued on the INPUT side, when the interface receives packets faster than kernel can process them.'),
'net.core.netdev_tstamp_prequeue':N_('If set to 0, RX packet timestamps can be sampled after RPS processing, when the target CPU processes packets. It might add some delay to the timestamps, but permits distributing the load over several CPUs.\nIf set to 1 (default), timestamps are sampled as soon as possible, before queueing.'),
'net.core.optmem_max':N_('Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence of struct cmsghdr structures with appended data.'),
'net.core.rmem_default':N_('The default setting of the socket receive buffer in bytes.'),
'net.core.rmem_max':N_('The maximum receive socket buffer size in bytes.'),
'net.core.rps_sock_flow_entries':N_('This controls the maximum number of sockets/flows that the kernel can steer towards any specified CPU. This is a system-wide, shared limit.'),
'net.core.somaxconn':N_('Limit of socket listen() backlog, known in userspace as SOMAXCONN. See also tcp_max_syn_backlog for additional tuning for TCP sockets.\nDefault: 128.'),
'net.core.warnings':N_('This controls console messages from the networking stack that can occur because of problems on the network like duplicate address or bad checksums. Normally, this should be enabled, but if the problem persists the messages can be disabled.'),
'net.core.wmem_default':N_('The default setting (in bytes) of the socket send buffer.'),
'net.core.wmem_max':N_('The maximum send socket buffer size in bytes.'),
'net.core.xfrm_acq_expires':N_('Hard timeout in seconds for acquire requests'),
#'net.core.xfrm_aevent_etime':N_(''),
#'net.core.xfrm_aevent_rseqth':N_(''),
#'net.core.xfrm_larval_drop':N_(''),
'net.ipv4.cipso_cache_bucket_size':N_('The CIPSO label cache consists of a fixed size hash table with each hash bucket containing a number of cache entries. This variable limits the number of entries in each hash bucket; the larger the value the more CIPSO label mappings that can be cached. When the number of entries in a given hash bucket reaches this limit adding new entries causes the oldest entry in the bucket to be removed to make room.\nDefault: 10'),
'net.ipv4.cipso_cache_enable':N_('If set, enable additions to and lookups from the CIPSO label mapping cache. If unset, additions are ignored and lookups always result in a miss. However, regardless of the setting, the cache is still invalidated when required, which means you can safely toggle this on and off and the cache will always be "safe".\nDefault: 1'),
'net.ipv4.cipso_rbm_optfmt':N_('Enable the "Optimized Tag 1 Format". This means that, when set, the CIPSO tag will be padded with empty categories in order to make the packet data 32-bit aligned.\nDefault: 0'),
'net.ipv4.cipso_rbm_strictvalid':N_('If set, do a very strict check of the CIPSO option when ip_options_compile() is called. If unset, relax the checks done during ip_options_compile(). Either way is "safe" as errors are caught elsewhere in the CIPSO processing code, but setting this to 0 (False) should result in less work (i.e. it should be faster), although it could cause problems with other implementations that require strict checking.\nDefault: 0'),
'net.ipv4.icmp_echo_ignore_all':N_('If set non-zero, then the kernel will ignore all ICMP ECHO requests sent to it.\nDefault: 0'),
'net.ipv4.icmp_echo_ignore_broadcasts':N_('If set non-zero, then the kernel will ignore all ICMP ECHO and TIMESTAMP requests sent to it via broadcast/multicast.\nDefault: 1'),
'net.ipv4.icmp_errors_use_inbound_ifaddr':N_('If zero, icmp error messages are sent with the primary address of the exiting interface.\nIf non-zero, the message will be sent with the primary address of the interface that received the packet that caused the icmp error. This is the behaviour many network administrators will expect from a router, and it can make debugging complicated network layouts much easier. Note that if no primary address exists for the interface selected, then the primary address of the first non-loopback interface that has one will be used regardless of this setting.\nDefault: 0'),
'net.ipv4.icmp_ignore_bogus_error_responses':N_('Some routers violate RFC1122 by sending bogus responses to broadcast frames. Such violations are normally logged via a kernel warning. If this is set to TRUE, the kernel will not give such warnings, which will avoid log file clutter.\nDefault: FALSE'),
'net.ipv4.icmp_ratelimit':N_('Limit the maximal rates for sending ICMP packets whose type matches icmp_ratemask (see below) to specific targets. 0 to disable any limiting, otherwise the minimal space between responses in milliseconds.\nDefault: 1000'),
'net.ipv4.icmp_ratemask':N_('Mask made of ICMP types for which rates are being limited.\n - Significant bits: IHGFEDCBA9876543210\n - Default mask: 0000001100000011000 (6168)\n - - 0 Echo Reply\n - - 3 Destination Unreachable *\n - - 4 Source Quench *\n - - 5 Redirect\n - - 8 Echo Request\n - - B Time Exceeded *\n - - C Parameter Problem *\n - - D Timestamp Request\n - - E Timestamp Reply\n - - F Info Request\n - - G Info Reply\n - - H Address Mask Request\n - - I Address Mask Reply\nDefault: the types marked with * above are rate limited'),
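# Sketch of how the default mask above is composed (plain bit arithmetic over
# the ICMP type numbers listed; the rate-limited types are marked with *):
#     default_mask = (1 << 3) | (1 << 4) | (1 << 11) | (1 << 12)
#     # Destination Unreachable, Source Quench, Time Exceeded, Parameter Problem
#     assert default_mask == 6168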
'net.ipv4.igmp_max_memberships':N_('Change the maximum number of multicast groups we can subscribe to.\nDefault: 20'),
#'net.ipv4.igmp_max_msf':N_(''),
'net.ipv4.inet_peer_maxttl':N_('Maximum time-to-live of entries. Unused entries will expire after this period of time if there is no memory pressure on the pool (i.e. when the number of entries in the pool is very small).\nUnit: second'),
'net.ipv4.inet_peer_minttl':N_('Minimum time-to-live of entries. Should be enough to cover fragment time-to-live on the reassembling side. This minimum time-to-live is guaranteed if the pool size is less than inet_peer_threshold.\nUnit: second'),
'net.ipv4.inet_peer_threshold':N_('The approximate size of the storage. Starting from this threshold entries will be thrown aggressively. This threshold also determines entries time-to-live and time intervals between garbage collection passes. More entries, less time-to-live, less GC interval.'),
'net.ipv4.ip_default_ttl':N_('Default value of TTL field (Time To Live) for outgoing (but not forwarded) IP packets. Should be between 1 and 255 inclusive.\nDefault: 64 (as recommended by RFC1700)'),
'net.ipv4.ip_dynaddr':N_('If set non-zero, enables support for dynamic addresses. If set to a non-zero value larger than 1, a kernel log message will be printed when dynamic address rewriting occurs.\nDefault: 0'),
#'net.ipv4.ip_early_demux':N_(''),
'net.ipv4.ip_forward':N_('Forward packets between interfaces.\n0 - disabled (default)\nnot 0 - enabled'),
'net.ipv4.ipfrag_high_thresh':N_('Maximum memory used to reassemble IP fragments. When ipfrag_high_thresh bytes of memory is allocated for this purpose, the fragment handler will toss packets until ipfrag_low_thresh is reached.'),
'net.ipv4.ipfrag_low_thresh':N_('Minimum memory used to reassemble IP fragments. When ipfrag_high_thresh bytes of memory is allocated for this purpose, the fragment handler will toss packets until ipfrag_low_thresh is reached.'),
'net.ipv4.ipfrag_max_dist':N_('Ipfrag_max_dist is a non-negative integer value which defines the maximum "disorder" which is allowed among fragments which share a common IP source address. Note that reordering of packets is not unusual, but if a large number of fragments arrive from a source IP address while a particular fragment queue remains incomplete, it probably indicates that one or more fragments belonging to that queue have been lost. When ipfrag_max_dist is positive, an additional check is done on fragments before they are added to a reassembly queue - if ipfrag_max_dist (or more) fragments have arrived from a particular IP address between additions to any IP fragment queue using that source address, it\'s presumed that one or more fragments in the queue are lost. The existing fragment queue will be dropped, and a new one started. An ipfrag_max_dist value of zero disables this check.\n\nUsing a very small value, e.g. 1 or 2, for ipfrag_max_dist can result in unnecessarily dropping fragment queues when normal reordering of packets occurs, which could lead to poor application performance. Using a very large value, e.g. 50000, increases the likelihood of incorrectly reassembling IP fragments that originate from different IP datagrams, which could result in data corruption.\nDefault: 64'),
'net.ipv4.ipfrag_secret_interval':N_('Regeneration interval of the hash secret (or lifetime for the hash secret) for IP fragments.\nDefault: 600\nUnit: second'),
'net.ipv4.ipfrag_time':N_('Time to keep an IP fragment in memory.\nUnit: second'),
'net.ipv4.ip_local_port_range':N_('Defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first, the second the last local port number.\nDefault value depends on amount of memory available on the system:\n - > 128Mb 32768-61000\n - < 128Mb 1024-4999 or even less.\nThis number defines number of active connections, which this system can issue simultaneously to systems not supporting TCP extensions (timestamps). With tcp_tw_recycle enabled (i.e. by default) range 1024-4999 is enough to issue up to 2000 connections per second to systems supporting timestamps.'),
'net.ipv4.ip_local_reserved_ports':N_('Specify the ports which are reserved for known third-party applications. These ports will not be used by automatic port assignments (e.g. when calling connect() or bind() with port number 0). Explicit port allocation behavior is unchanged.\nThe format used for both input and output is a comma separated list of ranges (e.g. "1,2-4,10-10" for ports 1, 2, 3, 4 and 10). Writing to the file will clear all previously reserved ports and update the current list with the one given in the input.\nDefault: Empty'),
'net.ipv4.ip_nonlocal_bind':N_('If set, allows processes to bind() to non-local IP addresses, which can be quite useful - but may break some applications.\nDefault: 0'),
'net.ipv4.ip_no_pmtu_disc':N_('Disable Path MTU Discovery.\nDefault FALSE'),
#'net.ipv4.ping_group_range':N_(''),
'net.ipv4.tcp_abc':N_('Controls Appropriate Byte Count (ABC) defined in RFC3465. ABC is a way of increasing congestion window (cwnd) more slowly in response to partial acknowledgments.\nPossible values are:\n - 0 increase cwnd once per acknowledgment (no ABC)\n - 1 increase cwnd once per acknowledgment of full sized segment\n - 2 allow increase cwnd by two if acknowledgment is of two segments to compensate for delayed acknowledgments.\nDefault: 0 (off)'),
'net.ipv4.tcp_abort_on_overflow':N_('If listening service is too slow to accept new connections, reset them. Default state is FALSE. It means that if overflow occurred due to a burst, connection will recover. Enable this option _only_ if you are really sure that listening daemon cannot be tuned to accept connections faster. Enabling this option can harm clients of your server.\nDefault: FALSE.'),
'net.ipv4.tcp_adv_win_scale':N_('Count buffering overhead as bytes/2^tcp_adv_win_scale (if tcp_adv_win_scale > 0) or bytes-bytes/2^(-tcp_adv_win_scale), if it is <= 0. Possible values are [-31, 31], inclusive.\nDefault: 2'),
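# Worked example of the formula above (illustrative numbers only): with the
# default tcp_adv_win_scale = 2, the overhead counted out of a 65536-byte
# buffer would be
#     buf, scale = 65536, 2
#     overhead = buf / 2**scale               # scale > 0  -> 16384 bytes
#     # for scale <= 0: overhead = buf - buf / 2**(-scale)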
'net.ipv4.tcp_allowed_congestion_control':N_('Show/set the congestion control choices available to non-privileged processes. The list is a subset of those listed in tcp_available_congestion_control. Default is "reno" and the default setting (tcp_congestion_control).'),
'net.ipv4.tcp_app_win':N_('Reserve max(window/2^tcp_app_win, mss) of window for application buffer. Value 0 is special, it means that nothing is reserved.\nDefault: 31'),
'net.ipv4.tcp_available_congestion_control':N_('Shows the available congestion control choices that are registered. More congestion control algorithms may be available as modules, but not loaded.'),
'net.ipv4.tcp_base_mss':N_('The initial value of search_low to be used by the packetization layer Path MTU discovery (MTU probing). If MTU probing is enabled, this is the initial MSS used by the connection.'),
'net.ipv4.tcp_congestion_control':N_('Set the congestion control algorithm to be used for new connections. The algorithm "reno" is always available, but additional choices may be available based on kernel configuration. Default is set as part of kernel configuration.'),
'net.ipv4.tcp_cookie_size':N_('Default size of TCP Cookie Transactions (TCPCT) option, that may be overridden on a per socket basis by the TCPCT socket option. Values greater than the maximum (16) are interpreted as the maximum. Values greater than zero and less than the minimum (8) are interpreted as the minimum. Odd values are interpreted as the next even value.\nDefault: 0 (off).'),
'net.ipv4.tcp_dma_copybreak':N_('Lower limit, in bytes, of the size of socket reads that will be offloaded to a DMA copy engine, if one is present in the system and CONFIG_NET_DMA is enabled.\nDefault: 4096'),
'net.ipv4.tcp_dsack':N_('Allows TCP to send "duplicate" SACKs.'),
#'net.ipv4.tcp_early_retrans':N_(''),
'net.ipv4.tcp_ecn':N_('Enable Explicit Congestion Notification (ECN) in TCP. ECN is only used when both ends of the TCP flow support it. It is useful to avoid losses due to congestion (when the bottleneck router supports ECN).\nPossible values are:\n - 0 disable ECN\n - 1 ECN enabled\n - 2 Only server-side ECN enabled. If the other end does not support ECN, behavior is like with ECN disabled.\nDefault: 2'),
'net.ipv4.tcp_fack':N_('Enable FACK congestion avoidance and fast retransmission. The value is not used, if tcp_sack is not enabled.'),
#'net.ipv4.tcp_fastopen':N_(''),
#'net.ipv4.tcp_fastopen_key':N_(''),
'net.ipv4.tcp_fin_timeout':N_('Time to hold socket in state FIN-WAIT-2, if it was closed by our side. The peer can be broken and never close its side, or even die unexpectedly. The usual value used in 2.2 was 180 seconds; you may restore it, but remember that if your machine is even an underloaded WEB server, you risk overflowing memory with kilotons of dead sockets. FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1, because they eat at most 1.5K of memory, but they tend to live longer.\nDefault: 60\nUnit: second'),
'net.ipv4.tcp_frto':N_('Enables Forward RTO-Recovery (F-RTO) defined in RFC4138. F-RTO is an enhanced recovery algorithm for TCP retransmission timeouts. It is particularly beneficial in wireless environments where packet loss is typically due to random radio interference rather than intermediate router congestion. F-RTO is a sender-side only modification, therefore it does not require any support from the peer.\nIf set to 1, the basic version is enabled. 2 enables SACK-enhanced F-RTO if the flow uses SACK. The basic version can also be used when SACK is in use, though scenarios exist where F-RTO interacts badly with the packet counting of the SACK-enabled TCP flow.'),
'net.ipv4.tcp_frto_response':N_('When F-RTO has detected that a TCP retransmission timeout was spurious (i.e, the timeout would have been avoided had TCP set a longer retransmission timeout), TCP has several options what to do next.\nPossible values are:\n - 0 Rate halving based; a smooth and conservative response, results in halved cwnd and ssthresh after one RTT\n - 1 Very conservative response; not recommended because even though being valid, it interacts poorly with the rest of Linux TCP, halves cwnd and ssthresh immediately\n - 2 Aggressive response; undoes congestion control measures that are now known to be unnecessary (ignoring the possibility of a lost retransmission that would require TCP to be more cautious), cwnd and ssthresh are restored to the values prior timeout\nDefault: 0 (rate halving based)'),
#'net.ipv4.tcp_challenge_ack_limit':N_(''),
'net.ipv4.tcp_keepalive_intvl':N_('How frequently the probes are sent out. Multiplied by tcp_keepalive_probes, it gives the time to kill a non-responding connection after probes have started.\nDefault: 75\nUnit: second'),
'net.ipv4.tcp_keepalive_probes':N_('How many keepalive probes TCP sends out, until it decides that the connection is broken.\nDefault: 9'),
'net.ipv4.tcp_keepalive_time':N_('How often TCP sends out keepalive messages when keepalive is enabled.\nDefault: 7200.\nUnit: second'),
#'net.ipv4.tcp_limit_output_bytes':N_(''),
'net.ipv4.tcp_low_latency':N_('If set, the TCP stack makes decisions that prefer lower latency as opposed to higher throughput. By default, this option is not set meaning that higher throughput is preferred. An example of an application where this default should be changed would be a Beowulf compute cluster.\nDefault: 0'),
'net.ipv4.tcp_max_orphans':N_('Maximal number of TCP sockets not attached to any user file handle, held by the system. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; you _must_ not rely on this or lower the limit artificially, but rather increase it (probably, after increasing installed memory) if network conditions require more than the default value, and tune network services to linger and kill such states more aggressively. Remember: each orphan eats up to ~64K of unswappable memory.'),
'net.ipv4.tcp_max_ssthresh':N_('Limited Slow-Start for TCP with large congestion windows (cwnd) defined in RFC3742. Limited slow-start is a mechanism to limit growth of the cwnd on the region where cwnd is larger than tcp_max_ssthresh. TCP increases cwnd by at most tcp_max_ssthresh segments, and by at least tcp_max_ssthresh/2 segments per RTT when the cwnd is above tcp_max_ssthresh. If TCP connection increased cwnd to thousands (or tens of thousands) segments, and thousands of packets were being dropped during slow-start, you can set tcp_max_ssthresh to improve performance for new TCP connection.\nDefault: 0 (off)'),
'net.ipv4.tcp_max_syn_backlog':N_('Maximal number of remembered connection requests which have still not received an acknowledgment from the connecting client. Default value is 1024 for systems with more than 128Mb of memory, and 128 for low memory machines. If the server suffers from overload, try increasing this number.'),
'net.ipv4.tcp_max_tw_buckets':N_('Maximal number of timewait sockets held by the system simultaneously. If this number is exceeded, the time-wait socket is immediately destroyed and a warning is printed. This limit exists only to prevent simple DoS attacks; you _must_ not lower the limit artificially, but rather increase it (probably, after increasing installed memory) if network conditions require more than the default value.'),
'net.ipv4.tcp_mem':N_('Vector of 3 values: min, pressure, max\n - min: below this number of pages TCP is not bothered about its memory appetite.\n - pressure: when amount of memory allocated by TCP exceeds this number of pages, TCP moderates its memory consumption and enters memory pressure mode, which is exited when memory consumption falls under "min".\n - max: number of pages allowed for queueing by all TCP sockets.\nDefaults are calculated at boot time from amount of available memory.'),
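# Illustrative sketch (not part of the original file): the three tcp_mem
# values could be read and split from /proc/sys like this:
#     with open('/proc/sys/net/ipv4/tcp_mem') as f:
#         tcp_min, tcp_pressure, tcp_max = map(int, f.read().split())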
'net.ipv4.tcp_moderate_rcvbuf':N_('If set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer (no greater than tcp_rmem[2]) to match the size required by the path for full throughput.\nDefault: 1'),
'net.ipv4.tcp_mtu_probing':N_('Controls TCP Packetization-Layer Path MTU Discovery. \nTakes three values:\n0 - Disabled\n1 - Disabled by default, enabled when an ICMP black hole detected\n2 - Always enabled, use initial MSS of tcp_base_mss.'),
'net.ipv4.tcp_no_metrics_save':N_('By default, TCP saves various connection metrics in the route cache when the connection closes, so that connections established in the near future can use these to set initial conditions. Usually, this increases overall performance, but may sometimes cause performance degradation. If set, TCP will not cache metrics on closing connections.'),
'net.ipv4.tcp_orphan_retries':N_('This value influences the timeout of a locally closed TCP connection, when RTO retransmissions remain unacknowledged.\nIf your machine is a loaded WEB server, you should think about lowering this value, such sockets may consume significant resources.\nDefault: 8'),
'net.ipv4.tcp_reordering':N_('Maximal reordering of packets in a TCP stream.\nDefault: 3'),
'net.ipv4.tcp_retrans_collapse':N_('Bug-to-bug compatibility with some broken printers. On retransmit try to send bigger packets to work around bugs in certain TCP stacks.'),
'net.ipv4.tcp_retries1':N_('This value influences the time, after which TCP decides, that something is wrong due to unacknowledged RTO retransmissions, and reports this suspicion to the network layer.\nRFC 1122 recommends at least 3 retransmissions, which is the default.\nDefault: 3'),
'net.ipv4.tcp_retries2':N_('This value influences the timeout of an alive TCP connection, when RTO retransmissions remain unacknowledged. Given a value of N, a hypothetical TCP connection following exponential backoff with an initial RTO of TCP_RTO_MIN would retransmit N times before killing the connection at the (N+1)th RTO. The default value of 15 yields a hypothetical timeout of 924.6 seconds and is a lower bound for the effective timeout. TCP will effectively time out at the first RTO which exceeds the hypothetical timeout. RFC 1122 recommends at least 100 seconds for the timeout, which corresponds to a value of at least 8.\nDefault: 15'),
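# Rough arithmetic behind the 924.6s figure quoted above, assuming
# TCP_RTO_MIN = 200ms and an RTO cap of 120s (values not taken from this file):
#     rto, total = 0.2, 0.0
#     for _i in range(16):            # 15 retransmissions + the final RTO
#         total += rto
#         rto = min(rto * 2, 120.0)
#     # total == 924.6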
'net.ipv4.tcp_rfc1337':N_('If set, the TCP stack behaves conforming to RFC1337. If unset, we are not conforming to RFC, but prevent TCP TIME_WAIT assassination.\nDefault: 0'),
'net.ipv4.tcp_rmem':N_('Vector of 3 values: min, default, max\n - min: Minimal size of receive buffer used by TCP sockets. It is guaranteed to each TCP socket, even under moderate memory pressure.\nDefault: 8K\n - default: initial size of receive buffer used by TCP sockets. This value overrides net.core.rmem_default used by other protocols. Default: 87380 bytes. This value results in window of 65535 with default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit less for default tcp_app_win. See below about these variables.\n - max: maximal size of receive buffer allowed for automatically selected receiver buffers for TCP socket. This value does not override net.core.rmem_max. Calling setsockopt() with SO_RCVBUF disables automatic tuning of that socket\'s receive buffer size, in which case this value is ignored.\nDefault: between 87380B and 4MB, depending on RAM size.'),
'net.ipv4.tcp_sack':N_('Enable selective acknowledgments (SACKs).'),
'net.ipv4.tcp_slow_start_after_idle':N_('If set, provide RFC2861 behavior and time out the congestion window after an idle period. An idle period is defined as the current RTO. If unset, the congestion window will not be timed out after an idle period.\nDefault: 1'),
'net.ipv4.tcp_stdurg':N_('Use the Host requirements interpretation of the TCP urgent pointer field. Most hosts use the older BSD interpretation, so if you turn this on Linux might not communicate correctly with them.\nDefault: FALSE'),
'net.ipv4.tcp_synack_retries':N_('Number of times SYNACKs for a passive TCP connection attempt will be retransmitted. Should not be higher than 255.\nDefault: 5, which corresponds to ~180 seconds.'),
'net.ipv4.tcp_syncookies':N_('Only valid when the kernel was compiled with CONFIG_SYNCOOKIES\nSend out syncookies when the syn backlog queue of a socket overflows. This is to prevent against the common "SYN flood attack"\nDefault: FALSE'),
'net.ipv4.tcp_syn_retries':N_('Number of times initial SYNs for an active TCP connection attempt will be retransmitted. Should not be higher than 255.\nDefault: 5, which corresponds to ~180 seconds.'),
'net.ipv4.tcp_thin_dupack':N_('Enable dynamic triggering of retransmissions after one dupACK for thin streams. If set, a check is performed upon reception of a dupACK to determine if the stream is thin (less than 4 packets in flight). As long as the stream is found to be thin, data is retransmitted on the first received dupACK. This improves retransmission latency for non-aggressive thin streams, often found to be time-dependent.\nDefault: 0'),
'net.ipv4.tcp_thin_linear_timeouts':N_('Enable dynamic triggering of linear timeouts for thin streams. If set, a check is performed upon retransmission by timeout to determine if the stream is thin (less than 4 packets in flight). As long as the stream is found to be thin, up to 6 linear timeouts may be performed before exponential backoff mode is initiated. This improves retransmission latency for non-aggressive thin streams, often found to be time-dependent.\nDefault: 0'),
'net.ipv4.tcp_timestamps':N_('Enable timestamps as defined in RFC1323.'),
'net.ipv4.tcp_tso_win_divisor':N_('This allows control over what percentage of the congestion window can be consumed by a single TSO frame. The setting of this parameter is a choice between burstiness and building larger TSO frames.\nDefault: 3'),
'net.ipv4.tcp_tw_recycle':N_('Enable fast recycling TIME-WAIT sockets. It should not be changed without advice/request of technical experts.\nDefault: 0'),
'net.ipv4.tcp_tw_reuse':N_('Allow to reuse TIME-WAIT sockets for new connections when it is safe from protocol viewpoint. It should not be changed without advice/request of technical experts.\nDefault: 0'),
'net.ipv4.tcp_window_scaling':N_('Enable window scaling as defined in RFC1323.'),
'net.ipv4.tcp_wmem':N_('Vector of 3 values: min, default, max\n - min: Amount of memory reserved for send buffers for TCP sockets. Each TCP socket has rights to use it due to fact of its birth.\nDefault: 4K\n - default: initial size of send buffer used by TCP sockets. This value overrides net.core.wmem_default used by other protocols. It is usually lower than net.core.wmem_default.\nDefault: 16K\n - max: Maximal amount of memory allowed for automatically tuned send buffers for TCP sockets. This value does not override net.core.wmem_max. Calling setsockopt() with SO_SNDBUF disables automatic tuning of that socket\'s send buffer size, in which case this value is ignored.\nDefault: between 64K and 4MB, depending on RAM size.'),
'net.ipv4.tcp_workaround_signed_windows':N_('If set, assume no receipt of a window scaling option means the remote TCP is broken and treats the window as a signed quantity. If unset, assume the remote TCP is not broken even if we do not receive a window scaling option from them.\nDefault: 0'),
'net.ipv4.udp_mem':N_('Vector of 3 values: min, pressure, max\nNumber of pages allowed for queueing by all UDP sockets.\n - min: Below this number of pages UDP is not bothered about its memory appetite. When amount of memory allocated by UDP exceeds this number, UDP starts to moderate memory usage.\n - pressure: This value was introduced to follow format of tcp_mem.\n - max: Number of pages allowed for queueing by all UDP sockets.\nDefault: calculated at boot time from amount of available memory.'),
'net.ipv4.udp_rmem_min':N_('Minimal size of receive buffer used by UDP sockets in moderation. Each UDP socket is able to use the size for receiving data, even if total pages of UDP sockets exceed udp_mem pressure.\nDefault: 4096\nUnit: byte'),
'net.ipv4.udp_wmem_min':N_('Minimal size of send buffer used by UDP sockets in moderation. Each UDP socket is able to use the size for sending data, even if total pages of UDP sockets exceed udp_mem pressure.\nDefault: 4096\nUnit: byte'),
#'net.ipv4.xfrm4_gc_thresh':N_(''),
#'net.ipv4.route.error_burst':N_(''),
#'net.ipv4.route.error_cost':N_(''),
#'net.ipv4.route.flush':N_(''),
#'net.ipv4.route.gc_elasticity':N_(''),
#'net.ipv4.route.gc_interval':N_(''),
#'net.ipv4.route.gc_min_interval':N_(''),
#'net.ipv4.route.gc_min_interval_ms':N_(''),
#'net.ipv4.route.gc_thresh':N_(''),
#'net.ipv4.route.gc_timeout':N_(''),
'net.ipv4.route.max_size':N_('Maximum number of routes allowed in the kernel. Increase this when using large numbers of interfaces and/or routes.'),
#'net.ipv4.route.min_adv_mss':N_(''),
#'net.ipv4.route.min_pmtu':N_(''),
#'net.ipv4.route.mtu_expires':N_(''),
#'net.ipv4.route.redirect_load':N_(''),
#'net.ipv4.route.redirect_number':N_(''),
#'net.ipv4.route.redirect_silence':N_(''),
#'net.ipv4.*.anycast_delay':N_(''),
'net.ipv4.*.app_solicit':N_('The maximum number of probes to send to the user space ARP daemon via netlink before dropping back to multicast probes.\nDefault: 0'),
#'net.ipv4.*.base_reachable_time':N_(''),
#'net.ipv4.*.base_reachable_time_ms':N_(''),
#'net.ipv4.*.delay_first_probe_time':N_(''),
#'net.ipv4.*.gc_interval':N_(''),
#'net.ipv4.*.gc_stale_time':N_(''),
#'net.ipv4.*.gc_thresh1':N_(''),
#'net.ipv4.*.gc_thresh2':N_(''),
#'net.ipv4.*.gc_thresh3':N_(''),
#'net.ipv4.*.locktime':N_(''),
#'net.ipv4.*.mcast_solicit':N_(''),
#'net.ipv4.*.proxy_delay':N_(''),
#'net.ipv4.*.proxy_qlen':N_(''),
#'net.ipv4.*.retrans_time':N_(''),
#'net.ipv4.*.retrans_time_ms':N_(''),
#'net.ipv4.*.ucast_solicit':N_(''),
#'net.ipv4.*.unres_qlen':N_(''),
#'net.ipv4.*.unres_qlen_bytes':N_(''),
#'net.ipv6.bindv6only':N_(''),
#'net.ipv6.*.accept_dad':N_(''),
#'net.ipv6.*.accept_ra':N_(''),
#'net.ipv6.*.accept_ra_defrtr':N_(''),
#'net.ipv6.*.accept_ra_pinfo':N_(''),
#'net.ipv6.*.accept_ra_rt_info_max_plen':N_(''),
#'net.ipv6.*.accept_ra_rtr_pref':N_(''),
#'net.ipv6.*.accept_redirects':N_(''),
#'net.ipv6.*.accept_source_route':N_(''),
#'net.ipv6.*.autoconf':N_(''),
#'net.ipv6.*.dad_transmits':N_(''),
#'net.ipv6.*.disable_ipv6':N_(''),
#'net.ipv6.*.force_mld_version':N_(''),
#'net.ipv6.*.force_tllao':N_(''),
#'net.ipv6.*.forwarding':N_(''),
#'net.ipv6.*.hop_limit':N_(''),
#'net.ipv6.*.max_addresses':N_(''),
#'net.ipv6.*.max_desync_factor':N_(''),
#'net.ipv6.*.mc_forwarding':N_(''),
#'net.ipv6.*.mtu':N_(''),
#'net.ipv6.*.optimistic_dad':N_(''),
#'net.ipv6.*.proxy_ndp':N_(''),
#'net.ipv6.*.regen_max_retry':N_(''),
#'net.ipv6.*.router_probe_interval':N_(''),
#'net.ipv6.*.router_solicitation_delay':N_(''),
#'net.ipv6.*.router_solicitation_interval':N_(''),
#'net.ipv6.*.router_solicitations':N_(''),
#'net.ipv6.*.temp_prefered_lft':N_(''),
#'net.ipv6.*.temp_valid_lft':N_(''),
#'net.ipv6.*.use_tempaddr':N_(''),
#'net.ipv6.icmp.ratelimit':N_(''),
#'net.ipv6.ip6frag_high_thresh':N_(''),
#'net.ipv6.ip6frag_low_thresh':N_(''),
#'net.ipv6.ip6frag_secret_interval':N_(''),
#'net.ipv6.ip6frag_time':N_(''),
#'net.ipv6.mld_max_msf':N_(''),
#'net.ipv6.route.flush':N_(''),
#'net.ipv6.route.gc_elasticity':N_(''),
#'net.ipv6.route.gc_interval':N_(''),
#'net.ipv6.route.gc_min_interval':N_(''),
#'net.ipv6.route.gc_min_interval_ms':N_(''),
#'net.ipv6.route.gc_thresh':N_(''),
#'net.ipv6.route.gc_timeout':N_(''),
#'net.ipv6.route.max_size':N_(''),
#'net.ipv6.route.min_adv_mss':N_(''),
#'net.ipv6.route.mtu_expires':N_(''),
#'net.ipv6.xfrm6_gc_thresh':N_(''),
'kernel.acct':N_('Highwater lowwater frequency\n\nIf BSD-style process accounting is enabled these values control its behaviour. If free space on the filesystem where the log lives goes below <lowwater>% accounting suspends. If free space gets above <highwater>% accounting resumes. <Frequency> determines how often we check the amount of free space (value is in seconds). \nDefault: 4 2 30\nThat is, suspend accounting if <= 2% of space is left free; resume it when >= 4% is free; consider information about the amount of free space valid for 30 seconds.'),
'kernel.acpi_video_flags':N_('This allows mode of video boot to be set during run time.'),
'kernel.auto_msgmni':N_('Enables/Disables automatic recomputing of msgmni upon memory add/remove or upon ipc namespace creation/removal (see the msgmni description above). Set this to "1" to enable automatic recomputing of msgmni or "0" to disable it. Default value is 1.'),
#'kernel.blk_iopoll':N_(''),
'kernel.bootloader_type':N_('x86 bootloader identification\n\nThis gives the bootloader type number as indicated by the bootloader, shifted left by 4 and ORed with the low four bits of the bootloader version. The reason for this encoding is that this used to match the type_of_loader field in the kernel header; the encoding is kept for backwards compatibility. That is, if the full bootloader type number is 0x15 and the full version number is 0x234, this file will contain the value 340 = 0x154.'),
'kernel.bootloader_version':N_('x86 bootloader version\n\nThe complete bootloader version number. In the example above, this file will contain the value 564 = 0x234.'),
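# Sketch of the encoding described in the two entries above, using the example
# values 0x15 / 0x234 from the text (plain bit arithmetic, shown for clarity):
#     loader_type, loader_version = 0x15, 0x234
#     bootloader_type = (loader_type << 4) | (loader_version & 0xF)   # 0x154 == 340
#     bootloader_version = loader_version                             # 0x234 == 564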
'kernel.callhome':N_('Controls the kernel\'s callhome behavior in case of a kernel panic.\n\nThe s390 hardware allows an operating system to send a notification to a service organization (callhome) in case of an operating system panic.\n\nWhen the value in this file is 0 (which is the default behavior) nothing happens in case of a kernel panic. If this value is set to "1" the complete kernel oops message is sent to the IBM customer service organization in case the mainframe the Linux operating system is running on has a service contract with IBM.'),
#'kernel.compat-log':N_(''),
'kernel.core_pattern':N_('core_pattern is used to specify a core dumpfile pattern name.\nMax length 128 characters; default value is "core".\ncore_pattern is used as a pattern template for the output filename; certain string patterns (beginning with \'%\') are substituted with their actual values.\nBackward compatibility with core_uses_pid: if core_pattern does not include "%p" (the default does not) and core_uses_pid is set, then .PID will be appended to the filename.\nCorename format specifiers:\n - %<NUL> - \'%\' is dropped\n - %% - output one \'%\'\n - %p - pid\n - %u - uid\n - %g - gid\n - %s - signal number\n - %t - UNIX time of dump\n - %h - hostname\n - %e - executable filename (may be shortened)\n - %E - executable path\n - %<OTHER> - both are dropped\nIf the first character of the pattern is a \'|\', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.'),
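# Hypothetical example of the specifiers above (names and values are made up):
# a pattern of 'core.%e.%p.%t' would yield something like
# 'core.myprog.1234.1577836800' for PID 1234 of an executable named 'myprog'.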
'kernel.core_pipe_limit':N_('This is only applicable when core_pattern is configured to pipe core files to a user space helper (when the first character of core_pattern is a \'|\'). When collecting cores via a pipe to an application, it is occasionally useful for the collecting application to gather data about the crashing process from its /proc/pid directory. In order to do this safely, the kernel must wait for the collecting process to exit, so as not to remove the crashing process\'s proc files prematurely. This in turn creates the possibility that a misbehaving userspace collecting process can block the reaping of a crashed process simply by never exiting. This sysctl defends against that. It defines how many concurrent crashing processes may be piped to user space applications in parallel. If this value is exceeded, then those crashing processes above that value are noted via the kernel log and their cores are skipped. \n0 is a special value, indicating that unlimited processes may be captured in parallel, but that no waiting will take place (i.e. the collecting process is not guaranteed access to /proc/<crashing pid>/).\nDefault: 0'),
'kernel.core_uses_pid':N_('The default coredump filename is "core". By setting core_uses_pid to 1, the coredump filename becomes core.PID. If core_pattern does not include "%p" (default does not) and core_uses_pid is set, then .PID will be appended to the filename.'),
'kernel.ctrl-alt-del':N_('When the value in this file is 0, ctrl-alt-del is trapped and sent to the init(1) program to handle a graceful restart. When, however, the value is > 0, Linux\'s reaction to a Vulcan Nerve Pinch (tm) will be an immediate reboot, without even syncing its dirty buffers.'),
'kernel.dmesg_restrict':N_('This toggle indicates whether unprivileged users are prevented from using dmesg(8) to view messages from the kernel\'s log buffer. When dmesg_restrict is set to (0) there are no restrictions. When dmesg_restrict is set to (1), users must have CAP_SYSLOG to use dmesg(8).'),
'kernel.domainname':N_('This can be used to set the NIS/YP domainname of your box, in exactly the same way as the domainname command.'),
'kernel.ftrace_dump_on_oops':N_('Make ftrace dump the trace buffers on a kernel oops.\nSet to "1" to enable\nSet to "0" to disable'),
'kernel.ftrace_enabled':N_('Enables/disables the kernel function tracer (ftrace).\nSet to "1" to enable\nSet to "0" to disable'),
'kernel.hostname':N_('This can be used to set the hostname of your box, in exactly the same way as the hostname command.'),
'kernel.hotplug':N_('Path for the hotplug policy agent. \nDefault value is "/sbin/hotplug".'),
#'kernel.io_delay_type':N_(''),
#'kernel.java-appletviewer':N_(''),
#'kernel.java-interpreter':N_(''),
#'kernel.keys':N_(''),
#'kernel.keys.gc_delay':N_(''),
#'kernel.keys.maxbytes':N_(''),
#'kernel.keys.maxkeys':N_(''),
#'kernel.keys.root_maxbytes':N_(''),
#'kernel.keys.root_maxkeys':N_(''),
'kernel.kptr_restrict':N_('This toggle indicates whether restrictions are placed on exposing kernel addresses via /proc and other interfaces. When kptr_restrict is set to (0), there are no restrictions. When kptr_restrict is set to (1), the default, kernel pointers printed using the %pK format specifier will be replaced with 0\'s unless the user has CAP_SYSLOG. When kptr_restrict is set to (2), kernel pointers printed using %pK will be replaced with 0\'s regardless of privileges.'),
'kernel.kstack_depth_to_print':N_('Controls the number of words to print when dumping the raw kernel stack.'),
#'kernel.latencytop':N_(''),
'kernel.l2cr':N_('This flag controls the L2 cache of G3 processor boards. If 0, the cache is disabled. Enabled if nonzero.'),
#'kernel.max_lock_depth':N_(''),
#'kernel.modprobe':N_(''),
'kernel.modules_disabled':N_('A toggle value indicating if modules are allowed to be loaded in an otherwise modular kernel. This toggle defaults to off (0), but can be set true (1). Once true, modules can be neither loaded nor unloaded, and the toggle cannot be set back to false.'),
#'kernel.msgmax':N_(''),
#'kernel.msgmnb':N_(''),
#'kernel.msgmni':N_(''),
#'kernel.ngroups_max':N_(''),
'kernel.nmi_watchdog':N_('Enables/Disables the NMI watchdog on x86 systems. When the value is non-zero the NMI watchdog is enabled and will continuously test all online cpus to determine whether or not they are still functioning properly. Currently, passing "nmi_watchdog=" parameter at boot time is required for this function to work.'),
'kernel.osrelease':N_('Contains the OS release version string.'),
'kernel.ostype':N_('Contains the OS type string.'),
'kernel.overflowgid':N_('If your architecture did not always support 32-bit GIDs (i.e. arm, i386, m68k, sh, and sparc32), a fixed GID will be returned to applications that use the old 16-bit GID system calls, if the actual GID would exceed 65535. This variable allows you to change the value of the fixed GID.\nDefault: 65534'),
'kernel.overflowuid':N_('If your architecture did not always support 32-bit UIDs (i.e. arm, i386, m68k, sh, and sparc32), a fixed UID will be returned to applications that use the old 16-bit UID system calls, if the actual UID would exceed 65535. This variable allows you to change the value of the fixed UID.\nDefault: 65534'),
'kernel.panic':N_('The value represents the number of seconds the kernel waits before rebooting on a panic. When you use the software watchdog, the recommended setting is 60.'),
#'kernel.panic_on_io_nmi':N_(''),
'kernel.panic_on_oops':N_('Controls the kernel\'s behaviour when an oops or BUG is encountered.\n\n0: try to continue operation\n1: panic immediately. If the panic sysctl is also non-zero then the machine will be rebooted.'),
'kernel.panic_on_stackoverflow':N_('Controls the kernel\'s behavior when detecting the overflows of kernel, IRQ and exception stacks except a user stack. This file shows up if CONFIG_DEBUG_STACKOVERFLOW is enabled.\n0: try to continue operation.\n1: panic immediately.'),
'kernel.panic_on_unrecovered_nmi':N_('The default Linux behaviour on an NMI of either memory or unknown is to continue operation. For many environments such as scientific computing it is preferable that the box is taken out and the error dealt with rather than an uncorrected parity/ECC error being propagated.'),
#'kernel.perf_event_max_sample_rate':N_(''),
#'kernel.perf_event_mlock_kb':N_(''),
#'kernel.perf_event_paranoid':N_(''),
'kernel.pid_max':N_('PID allocation wrap value. When the kernel\'s next PID value reaches this value, it wraps back to a minimum PID value. PIDs of value pid_max or larger are not allocated.'),
#'kernel.poweroff_cmd':N_(''),
'kernel.powersave-nap':N_('If set, Linux-PPC will use the \'nap\' mode of powersaving, otherwise the \'doze\' mode will be used.'),
#'kernel.print-fatal-signals':N_(''),
'kernel.printk':N_('The four values in printk denote: console_loglevel, default_message_loglevel, minimum_console_loglevel and default_console_loglevel respectively.\n\nThese values influence printk() behavior when printing or logging error messages. See \'man 2 syslog\' for more info on the different loglevels.\n- console_loglevel: messages with a higher priority than this will be printed to the console\n- default_message_loglevel: messages without an explicit priority will be printed with this priority\n- minimum_console_loglevel: minimum (highest) value to which console_loglevel can be set\n- default_console_loglevel: default value for console_loglevel'),
'kernel.printk_delay':N_('Delay each printk message by printk_delay milliseconds. Values from 0 to 10000 are allowed.'),
'kernel.printk_ratelimit':N_('Some warning messages are rate limited. printk_ratelimit specifies the minimum length of time between these messages (in jiffies), by default we allow one every 5 seconds. A value of 0 will disable rate limiting.'),
'kernel.printk_ratelimit_burst':N_('While long term we enforce one message per printk_ratelimit seconds, we do allow a burst of messages to pass through. printk_ratelimit_burst specifies the number of messages we can send before ratelimiting kicks in.'),
#'kernel.pty':N_(''),
#'kernel.pty.max':N_(''),
#'kernel.pty.nr':N_(''),
#'kernel.pty.reserve':N_(''),
#'kernel.random':N_(''),
#'kernel.random.boot_id':N_(''),
#'kernel.random.entropy_avail':N_(''),
#'kernel.random.poolsize':N_(''),
#'kernel.random.read_wakeup_threshold':N_(''),
#'kernel.random.write_wakeup_threshold':N_(''),
#'kernel.random.uuid':N_(''),
#'kernel.randomize_va_space':N_(''),
#'kernel.real-root-dev':N_(''),
#'kernel.sem':N_(''),
#'kernel.sg-big-buff':N_(''),
'kernel.shmall':N_('This parameter sets the total amount of shared memory pages that can be used system wide. Hence, SHMALL should always be at least ceil(shmmax/PAGE_SIZE). The default size for SHMALL is 2097152. That means 2097152*4096 bytes (shmall*PAGE_SIZE), which is 8 GB of maximum shared memory.'),
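# Worked numbers for the SHMALL sizing rule above (assuming a 4096-byte page):
#     PAGE_SIZE = 4096
#     shmall = 2097152                            # default, in pages
#     assert shmall * PAGE_SIZE == 8 * 1024**3    # 8 GB of shared memory
#     # rule of thumb: shmall >= ceil(shmmax / PAGE_SIZE)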
'kernel.shmmax':N_('This value can be used to query and set the run time limit on the maximum shared memory segment size that can be created. Shared memory segments up to 1Gb are now supported in the kernel.\nDefault: SHMMAX.'),
'kernel.shmmni':N_('This parameter sets the system wide maximum number of shared memory segments.\nDefault: 4096'),
#'kernel.shm_rmid_forced':N_(''),
'kernel.sched_autogroup_enabled':N_('This value substantially changes how the process scheduler assigns shares of CPU time to each process. With this feature the system will group all processes with the same session ID as a single scheduling entity. Set to "1" to enable or "0" to disable process autogrouping.'),
'kernel.sched_cfs_bandwidth_slice_us':N_('For efficiency run-time is transferred between the global pool and CPU local "silos" in a batch fashion. This greatly reduces global accounting pressure on large systems. The amount transferred each time such an update is required is described as the "slice".\nLarger slice values will reduce transfer overheads, while smaller values allow for more fine-grained consumption.\nDefault=5ms\nUnit = us'),
#'kernel.sched_domain.*.*.busy_factor':N_(''),
#'kernel.sched_domain.*.*.busy_idx':N_(''),
#'kernel.sched_domain.*.*.cache_nice_tries':N_(''),
#'kernel.sched_domain.*.*.flags':N_(''),
#'kernel.sched_domain.*.*.forkexec_idx':N_(''),
#'kernel.sched_domain.*.*.idle_idx':N_(''),
#'kernel.sched_domain.*.*.imbalance_pct':N_(''),
#'kernel.sched_domain.*.*.max_interval':N_(''),
#'kernel.sched_domain.*.*.min_interval':N_(''),
#'kernel.sched_domain.*.*.name':N_(''),
#'kernel.sched_domain.*.*.newidle_idx':N_(''),
#'kernel.sched_domain.*.*.wake_idx':N_(''),
'kernel.sched_child_runs_first':N_('A freshly forked child runs before the parent continues execution. Setting this parameter to 1 is beneficial for an application in which the child performs an exec immediately after fork. For example, make -j<NO_CPUS> performs better when sched_child_runs_first is turned off.\nDefault: 0'),
'kernel.sched_compat_yield':N_('Enables the aggressive yield behavior of the old O(1) scheduler. Java applications that use synchronization extensively perform better with this value set to 1. Only use it when you see a drop in performance. \nExpect applications that depend on the sched_yield() syscall behavior to perform better with the value set to 1.\nDefault: 0'),
'kernel.sched_features':N_('Provides information about specific debugging features.'),
'kernel.sched_latency_ns':N_('Targeted preemption latency for CPU bound tasks. Increasing this variable increases a CPU bound task\'s timeslice. A task\'s timeslice is its weighted fair share of the scheduling period:\ntimeslice = scheduling period * (task\'s weight/total weight of tasks in the run queue)\nThe task\'s weight depends on the task\'s nice level and the scheduling policy. Minimum task weight for a SCHED_OTHER task is 15, corresponding to nice 19. The maximum task weight is 88761, corresponding to nice -20.\nTimeslices become smaller as the load increases. When the number of runnable tasks exceeds sched_latency_ns/sched_min_granularity_ns, the slice becomes number_of_running_tasks * sched_min_granularity_ns. Prior to that, the slice is equal to sched_latency_ns.\nThis value also specifies the maximum amount of time during which a sleeping task is considered to be running for entitlement calculations. Increasing this variable increases the amount of time a waking task may consume before being preempted, thus increasing scheduler latency for CPU bound tasks.\nDefault: 20000000\nUnit: ns'),
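# Illustrative use of the timeslice formula above, with the weights quoted in
# the text (nice 19 -> 15, nice -20 -> 88761); the scenario is a sketch only:
#     sched_latency_ns = 20000000
#     w_hi, w_lo = 88761, 15                  # one nice -20 task, one nice 19 task
#     slice_hi = sched_latency_ns * w_hi / (w_hi + w_lo)   # ~19.997 ms
#     slice_lo = sched_latency_ns * w_lo / (w_hi + w_lo)   # ~0.003 ms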
'kernel.sched_migration_cost_ns':N_('Amount of time after the last execution that a task is considered to be "cache hot" in migration decisions. A "hot" task is less likely to be migrated, so increasing this variable reduces task migrations.\nIf the CPU idle time is higher than expected when there are runnable processes, try reducing this value. If tasks bounce between CPUs or nodes too often, try increasing it.\nDefault: 500000\nUnit: ns'),
'kernel.sched_min_granularity_ns':N_('The minimum time after which a task becomes eligible to be preempted; the minimum possible preemption granularity.\nDefault: 4000000\nUnit: ns'),
'kernel.sched_nr_migrate':N_('Controls how many tasks can be moved across processors through migration software interrupts (softirq). If a large number of tasks is created by SCHED_OTHER policy, they will all be run on the same processor. The default value is 32. Increasing this value gives a performance boost to large SCHED_OTHER threads at the expense of increased latencies for real-time tasks.\nDefault: 32'),
'kernel.sched_rt_period_us':N_('Period over which real-time task bandwidth enforcement is measured.\nDefault: 1000000\nUnit: us'),
'kernel.sched_rt_runtime_us':N_('Quantum allocated to real-time tasks during sched_rt_period_us. Setting to -1 disables RT bandwidth enforcement. By default, RT tasks may consume 95% CPU/sec, thus leaving 5% CPU/sec or 0.05s to be used by SCHED_OTHER tasks.\nUnit = us'),
#'kernel.sched_shares_window_ns':N_(''),
'kernel.sched_stat_granularity_ns':N_('Specifies the granularity for collecting task scheduler statistics.\nUnit: ns'),
'kernel.sched_time_avg_ms':N_('Period over which we average the RT time consumption.\nDefault: 4ms\nUnit = ms'),
'kernel.sched_tunable_scaling':N_('Controls the initial value and re-scaling of the scheduler tunables.\nDefault: scaled logarithmically\nScaling now takes place on all kinds of cpu add/remove events.'),
'kernel.sched_wakeup_granularity_ns':N_('The wake-up preemption granularity. Increasing this variable reduces wake-up preemption, reducing disturbance of compute bound tasks. Lowering it improves wake-up latency and throughput for latency critical tasks, particularly when a short duty cycle load component must compete with CPU bound components.\n\nSettings larger than half of sched_latency_ns will result in zero wake-up preemption and short duty cycle tasks will be unable to compete with CPU hogs effectively.\n\nDefault: 5000000\nUnit: ns'),
#'kernel.softlockup_panic':N_(''),
#'kernel.stack_tracer_enabled':N_(''),
#'kernel.sysrq':N_(''),
'kernel.tainted':N_('Non-zero if the kernel has been tainted. Numeric values, which can be ORed together:\n\n - 1 - A module with a non-GPL license has been loaded, this includes modules with no license. Set by modutils >= 2.4.9 and module-init-tools.\n - 2 - A module was force loaded by insmod -f. Set by modutils >= 2.4.9 and module-init-tools.\n - 4 - Unsafe SMP processors: SMP with CPUs not designed for SMP.\n - 8 - A module was forcibly unloaded from the system by rmmod -f.\n - 16 - A hardware machine check error occurred on the system.\n - 32 - A bad page was discovered on the system.\n - 64 - The user has asked that the system be marked "tainted". This could be because they are running software that directly modifies the hardware, or for other reasons.\n - 128 - The system has died.\n - 256 - The ACPI DSDT has been overridden with one supplied by the user instead of using the one provided by the hardware.\n - 512 - A kernel warning has occurred.\n - 1024 - A module from drivers/staging was loaded.'),
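# A hedged sketch of decoding kernel.tainted as a bitmask of the flags listed above;
# the variable names are informal labels, not kernel identifiers:
#   tainted = 513                           # hypothetical value read from /proc/sys/kernel/tainted
#   non_gpl_module = bool(tainted & 1)      # True: a non-GPL module has been loaded
#   kernel_warning = bool(tainted & 512)    # True: a kernel warning has occurred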
#'kernel.threads-max':N_(''),
#'kernel.timer_migration':N_(''),
'kernel.unknown_nmi_panic':N_('The value in this file affects the handling of unknown NMIs. When the value is non-zero, an unknown NMI is trapped and the kernel panics; kernel debugging information is displayed on the console at that time.'),
#'kernel.usermodehelper':N_(''),
#'kernel.usermodehelper.bset':N_(''),
#'kernel.usermodehelper.inheritable':N_(''),
#'kernel.version':N_(''),
#'kernel.watchdog':N_(''),
#'kernel.watchdog_thresh':N_(''),
'vm.block_dump':N_('block_dump enables block I/O debugging when set to a nonzero value.'),
'vm.compact_memory':N_('When 1 is written to the file, all zones are compacted such that free memory is available in contiguous blocks where possible. This can be important for example in the allocation of huge pages although processes will also directly compact memory as required.'),
'vm.dirty_background_bytes':N_('Contains the amount of dirty memory at which the pdflush background writeback daemon will start writeback.'),
'vm.dirty_background_ratio':N_('Contains, as a percentage of total system memory, the number of pages at which the pdflush background writeback daemon will start writing out dirty data.'),
'vm.dirty_bytes':N_('Contains the amount of dirty memory at which a process generating disk writes will itself start writeback.'),
'vm.dirty_expire_centisecs':N_('This tunable is used to define when dirty data is old enough to be eligible for writeout by the pdflush daemons. It is expressed in hundredths of a second. Data which has been dirty in-memory for longer than this interval will be written out next time a pdflush daemon wakes up.'),
'vm.dirty_ratio':N_('Contains, as a percentage of total system memory, the number of pages at which a process which is generating disk writes will itself start writing out dirty data.'),
'vm.dirty_writeback_centisecs':N_('The pdflush writeback daemons will periodically wake up and write old data out to disk. This tunable expresses the interval between those wakeups, in hundredths of a second. Setting this to zero disables periodic writeback altogether.'),
'vm.drop_caches':N_('Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. To free pagecache, set to 1; to free dentries and inodes, set to 2; to free pagecache, dentries and inodes, set to 3. As this is a non-destructive operation and dirty objects are not freeable, the user should run sync first.'),
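# A hedged sketch of using vm.drop_caches as described above (requires root; sync
# first so dirty objects are written back and become freeable):
#   import os
#   os.system('sync')
#   with open('/proc/sys/vm/drop_caches', 'w') as f:
#       f.write('3')               # free pagecache, dentries and inodes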
'vm.extfrag_threshold':N_('This parameter affects whether the kernel will compact memory or direct reclaim to satisfy a high-order allocation. /proc/extfrag_index shows what the fragmentation index for each order is in each zone in the system. Values tending towards 0 imply allocations would fail due to lack of memory, values towards 1000 imply failures are due to fragmentation and -1 implies that the allocation will succeed as long as watermarks are met. The kernel will not compact memory in a zone if the fragmentation index is <= extfrag_threshold.\nDefault: 500'),
'vm.hugepages_treat_as_movable':N_('This parameter is only useful when kernelcore= is specified at boot time to create ZONE_MOVABLE for pages that may be reclaimed or migrated. Huge pages are not movable so are not normally allocated from ZONE_MOVABLE. A non-zero value written to hugepages_treat_as_movable allows huge pages to be allocated from ZONE_MOVABLE. Once enabled, the ZONE_MOVABLE is treated as an area of memory the huge pages pool can easily grow or shrink within. Assuming that applications are not running that mlock() a lot of memory, it is likely the huge pages pool can grow to the size of ZONE_MOVABLE by repeatedly entering the desired value into nr_hugepages and triggering page reclaim.'),
'vm.hugetlb_shm_group':N_('hugetlb_shm_group contains group id that is allowed to create SysV shared memory segment using hugetlb page.'),
'vm.laptop_mode':N_('laptop_mode is a knob that controls "laptop mode": when enabled, disk writes are delayed and batched so the disk can stay spun down for longer periods, saving power.'),
'vm.legacy_va_layout':N_('If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel will use the legacy (2.4) layout for all processes.'),
'vm.lowmem_reserve_ratio':N_('For some specialised workloads on highmem machines it is dangerous for the kernel to allow process memory to be allocated from the "lowmem" zone. This is because that memory could then be pinned via the mlock() system call, or by unavailability of swapspace. Please do not change this value unless you really know what you are doing.'),
'vm.max_map_count':N_('This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries. While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.\nDefault: 65536'),
'vm.memory_failure_early_kill':N_('Control how to kill processes when an uncorrected memory error (typically a 2-bit error in a memory module) is detected in the background by hardware and cannot be handled by the kernel. In some cases (like the page still having a valid copy on disk) the kernel will handle the failure transparently without affecting any applications. But if there is no other up-to-date copy of the data, it will kill processes to prevent any data corruption from propagating. \n\n1: Kill all processes that have the corrupted and not reloadable page mapped as soon as the corruption is detected. Note this is not supported for a few types of pages, like kernel internally allocated data or the swap cache, but works for the majority of user pages. \n\n0: Only unmap the corrupted page from all processes and only kill a process that tries to access it.'),
'vm.memory_failure_recovery':N_('Enable memory failure recovery (when supported by the platform) \n\n1: Attempt recovery.\n\n0: Always panic on a memory failure.'),
'vm.min_free_kbytes':N_('This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size. \n\nSome minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads.\n\nSetting this too high will OOM your machine instantly.'),
'vm.min_slab_ratio':N_('A percentage of the total pages in each zone. On zone reclaim (fallback from the local zone occurs), slabs will be reclaimed if more than this percentage of pages in a zone are reclaimable slab pages. This ensures that slab growth stays under control even in NUMA systems that rarely perform global reclaim. \nDefault: 5\nUnit: percent'),
'vm.min_unmapped_ratio':N_('This is a percentage of the total pages in each zone. Zone reclaim will only occur if more than this percentage of pages are in a state that zone_reclaim_mode allows to be reclaimed. \n\n If zone_reclaim_mode has the value 4 OR\'d, then the percentage is compared against all file-backed unmapped pages including swapcache pages and tmpfs files. Otherwise, only unmapped pages backed by normal files but not tmpfs files and similar are considered.\nDefault: 1\nUnit: percent'),
'vm.mmap_min_addr':N_('This file indicates the amount of address space which a user process will be restricted from mmapping. Since kernel null dereference bugs could accidentally operate based on the information in the first couple of pages of memory userspace processes should not be allowed to write to them. By default this value is set to 0 and no protections will be enforced by the security module. Setting this value to something like 64k will allow the vast majority of applications to work correctly and provide defense in depth against future potential kernel bugs.'),
'vm.nr_hugepages':N_('Change the minimum size of the hugepage pool.'),
#'vm.nr_hugepages_mempolicy':N_(''),
'vm.nr_overcommit_hugepages':N_('Change the maximum size of the hugepage pool. The maximum is nr_hugepages + nr_overcommit_hugepages.'),
'vm.nr_pdflush_threads':N_('The current number of pdflush threads. This value is read-only. The value changes according to the number of dirty pages in the system.\n\nWhen necessary, additional pdflush threads are created, one per second, up to nr_pdflush_threads_max.'),
'vm.nr_trim_pages':N_('This value adjusts the excess page trimming behaviour of power-of-2 aligned NOMMU mmap allocations. \n\nA value of 0 disables trimming of allocations entirely, while a value of 1 trims excess pages aggressively. Any value >= 1 acts as the watermark where trimming of allocations is initiated.\nDefault: 1'),
#'vm.numa_zonelist_order':N_(''),
'vm.oom_dump_tasks':N_('Enables a system-wide task dump (excluding kernel threads) to be produced when the kernel performs an OOM-killing and includes such information as pid, uid, tgid, vm size, rss, cpu, oom_adj score, and name. This is helpful to determine why the OOM killer was invoked and to identify the rogue task that caused it.\n\nIf this is set to zero, this information is suppressed. On very large systems with thousands of tasks it may not be feasible to dump the memory state information for each one. Such systems should not be forced to incur a performance penalty in OOM conditions when the information may not be desired.\n\nIf this is set to non-zero, this information is shown whenever the OOM killer actually kills a memory-hogging task.\nDefault: 1 (enabled)'),
'vm.oom_kill_allocating_task':N_('This enables or disables killing the OOM-triggering task in out-of-memory situations.\n\nIf this is set to zero, the OOM killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.\n\nIf this is set to non-zero, the OOM killer simply kills the task that triggered the out-of-memory condition. This avoids the expensive tasklist scan.\n\nIf panic_on_oom is selected, it takes precedence over whatever value is used in oom_kill_allocating_task.\nDefault 0'),
'vm.overcommit_memory':N_('This value contains a flag that enables memory overcommitment.\n\nWhen this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory. When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.\n\nWhen this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.\n\nThis feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don\'t use much of it.\nDefault: 0'),
'vm.overcommit_ratio':N_('When overcommit_memory is set to 2, the committed address space is not permitted to exceed swap plus this percentage of physical RAM.'),
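# A worked example (hypothetical sizes) of the overcommit_memory=2 limit described
# above: commit limit = swap + overcommit_ratio percent of physical RAM; 50 is a
# typical default for overcommit_ratio:
#   ram_kb, swap_kb = 8 * 1024 * 1024, 2 * 1024 * 1024             # 8 GiB RAM, 2 GiB swap
#   overcommit_ratio = 50
#   commit_limit_kb = swap_kb + ram_kb * overcommit_ratio // 100   # 6 GiB of committable address space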
'vm.page-cluster':N_('page-cluster controls the number of pages written to swap in a single attempt (the swap I/O size). It is a logarithmic value: setting it to zero means "1 page", setting it to 1 means "2 pages", setting it to 2 means "4 pages", etc.\nThe default value is three (eight pages at a time). There may be some small benefits in tuning this to a different value if your workload is swap-intensive.'),
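# The logarithmic relationship described for vm.page-cluster above:
#   page_cluster = 3                          # default from the text
#   pages_per_swap_io = 2 ** page_cluster     # -> 8 pages written to swap per attempt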
'vm.panic_on_oom':N_('This enables or disables the panic-on-out-of-memory feature.\n\nIf this is set to 0, the kernel will kill some rogue process via the OOM killer; usually the OOM killer can kill a rogue process and the system will survive.\n\nIf this is set to 1, the kernel panics when out-of-memory happens. However, if a process limits its allocations to certain nodes via mempolicy/cpusets, and those nodes run out of memory, one process may be killed by the OOM killer and no panic occurs, because memory on other nodes may be free and the system as a whole may not yet be in a fatal state.\n\nIf this is set to 2, the kernel panics unconditionally even in the above-mentioned case; even if the OOM happens inside a memory cgroup, the whole system panics.\n\nThe default value is 0. Values 1 and 2 are intended for failover in clustering setups; select one according to your failover policy. panic_on_oom=2 combined with kdump gives a very strong tool for investigating why the OOM happened, since you can get a memory snapshot.'),
'vm.percpu_pagelist_fraction':N_('This is the fraction of pages (at most, the high mark pcp->high) in each zone that are allocated for each per-cpu page list. The minimum value for this is 8, meaning that no more than 1/8th of the pages in each zone can be allocated to any single per-cpu pagelist. This entry only changes the value of hot per-cpu pagelists. The user can specify a number like 100 to allocate 1/100th of each zone to each per-cpu page list.\n\nThe batch value of each per-cpu pagelist is also updated as a result; it is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).\n\nThe initial value is zero. The kernel does not use this value at boot time to set the high water marks for each per-cpu page list.'),
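# A hedged sketch (hypothetical zone size) of the percpu_pagelist_fraction arithmetic
# described above:
#   zone_pages = 262144                                   # pages in the zone (hypothetical)
#   percpu_pagelist_fraction = 100
#   pcp_high = zone_pages // percpu_pagelist_fraction     # 2621: 1/100th of the zone per per-cpu list
#   pcp_batch = pcp_high // 4                             # 655: batch value, as described above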
#'vm.scan_unevictable_pages':N_(''),
'vm.stat_interval':N_('The time interval at which VM statistics are updated.\nDefault: 1\nUnit: seconds'),
'vm.swappiness':N_('This control is used to define how aggressively the kernel will swap memory pages. Higher values increase aggressiveness; lower values decrease the amount of swapping.\nDefault: 60\nUnit: percent'),
'vm.vfs_cache_pressure':N_('Controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.\n\nAt the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure and this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.'),
'vm.zone_reclaim_mode':N_('zone_reclaim_mode allows a more or less aggressive approach to reclaiming memory when a zone runs out of memory. If it is set to zero, no zone reclaim occurs; allocations will be satisfied from other zones/nodes in the system.\nThe value is a bitmask ORed together from:\n\n1 = Zone reclaim on\n2 = Zone reclaim writes dirty pages out\n4 = Zone reclaim swaps pages')
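# Combining the vm.zone_reclaim_mode bits listed above with a simple OR; the names
# are informal labels for the listed flags, not kernel identifiers:
#   RECLAIM_ON, RECLAIM_WRITE, RECLAIM_SWAP = 1, 2, 4
#   mode = RECLAIM_ON | RECLAIM_WRITE        # -> 3: zone reclaim on, writing dirty pages out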
}