getconf
In the first synopsis form, the getconf utility shall write to the standard output the value of the variable specified by the system_var operand. In the second synopsis form, the getconf utility shall write to the standard output the value of the variable specified by the path_var operand for the path specified by the pathname operand. The value of each configuration variable shall be determined as if it were obtained by calling the function from which it is defined to be available by this volume of POSIX.1-2017 or by the System Interfaces volume of POSIX.1-2017 (see the OPERANDS section). The value shall reflect conditions in the current operating environment. The getconf utility shall conform to the Base Definitions volume of POSIX.1-2017, Section 12.2, Utility Syntax Guidelines.

The following option shall be supported:

-v specification
    Indicate a specific specification and version for which configuration variables shall be determined. If this option is not specified, the values returned correspond to an implementation's default conforming compilation environment.

If the command:

    getconf _POSIX_V7_ILP32_OFF32

does not write "-1\n" or "undefined\n" to standard output, then commands of the form:

    getconf -v POSIX_V7_ILP32_OFF32 ...

determine values for configuration variables corresponding to the POSIX_V7_ILP32_OFF32 compilation environment specified in c99(1p), the EXTENDED DESCRIPTION.

If the command:

    getconf _POSIX_V7_ILP32_OFFBIG

does not write "-1\n" or "undefined\n" to standard output, then commands of the form:

    getconf -v POSIX_V7_ILP32_OFFBIG ...

determine values for configuration variables corresponding to the POSIX_V7_ILP32_OFFBIG compilation environment specified in c99(1p), the EXTENDED DESCRIPTION.

If the command:

    getconf _POSIX_V7_LP64_OFF64

does not write "-1\n" or "undefined\n" to standard output, then commands of the form:

    getconf -v POSIX_V7_LP64_OFF64 ...

determine values for configuration variables corresponding to the POSIX_V7_LP64_OFF64 compilation environment specified in c99(1p), the EXTENDED DESCRIPTION.

If the command:

    getconf _POSIX_V7_LPBIG_OFFBIG

does not write "-1\n" or "undefined\n" to standard output, then commands of the form:

    getconf -v POSIX_V7_LPBIG_OFFBIG ...

determine values for configuration variables corresponding to the POSIX_V7_LPBIG_OFFBIG compilation environment specified in c99(1p), the EXTENDED DESCRIPTION.
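To make the two synopsis forms and the -v option concrete, here is a minimal sketch (the variable names are standard, but the values printed vary by system, and a given compilation environment may be undefined on yours):

    getconf LONG_BIT                           # first form: a system_var
    getconf NAME_MAX /tmp                      # second form: a path_var for the pathname /tmp
    getconf -v POSIX_V7_LP64_OFF64 LONG_MAX    # a variable within a specific compilation environment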
# getconf

> Get configuration values from your Linux system.
> More information: <https://manned.org/getconf.1>.

* List [a]ll configuration values available:

`getconf -a`

* List the configuration values for a specific directory:

`getconf -a {{path/to/directory}}`

* Check whether your Linux system is 32-bit or 64-bit:

`getconf LONG_BIT`

* Check how many processes the current user can run at once:

`getconf CHILD_MAX`

* List every configuration value, then find patterns with the grep command (e.g. every value containing MAX):

`getconf -a | grep MAX`
wget
GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data. Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as "recursive downloading." While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing. Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.

Option Syntax

Since Wget uses GNU getopt to process command-line arguments, every option has a long form along with the short one. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write:

    wget -r --tries=10 http://fly.srk.fer.hr/ -o log

The space between the option accepting an argument and the argument may be omitted. Instead of -o log you can write -olog. You may put several options that do not require arguments together, like:

    wget -drc <URL>

This is completely equivalent to:

    wget -d -r -c <URL>

Since the options can be specified after the arguments, you may terminate them with --. So the following will try to download URL -x, reporting failure to log:

    wget -o log -- -x

The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value. This can be useful to clear the .wgetrc settings. For instance, if your .wgetrc sets "exclude_directories" to /cgi-bin, the following example will first reset it, and then set it to exclude /~nobody and /~somebody. You can also clear the lists in .wgetrc.

    wget -X "" -X /~nobody,/~somebody

Most options that do not accept arguments are boolean options, so named because their state can be captured with a yes-or-no ("boolean") variable. For example, --follow-ftp tells Wget to follow FTP links from HTML files and, on the other hand, --no-glob tells it not to perform file globbing on FTP URLs. A boolean option is either affirmative or negative (beginning with --no). All such options share several properties. Unless stated otherwise, it is assumed that the default behavior is the opposite of what the option accomplishes. For example, the documented existence of --follow-ftp assumes that the default is to not follow FTP links from HTML pages. Affirmative options can be negated by prepending --no- to the option name; negative options can be negated by omitting the --no- prefix. This might seem superfluous---if the default for an affirmative option is to not do something, then why provide a way to explicitly turn it off? But the startup file may in fact change the default.
For instance, using "follow_ftp = on" in .wgetrc makes Wget follow FTP links by default, and using --no-follow-ftp is the only way to restore the factory default from the command line.

Basic Startup Options

-V --version Display the version of Wget. -h --help Print a help message describing all of Wget's command-line options. -b --background Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log. -e command --execute command Execute command as if it were a part of .wgetrc. A command thus invoked will be executed after the commands in .wgetrc, thus taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of -e.

Logging and Input File Options

-o logfile --output-file=logfile Log all messages to logfile. The messages are normally reported to standard error. -a logfile --append-output=logfile Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created. -d --debug Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case -d will not work. Please note that compiling with debug support is always safe---Wget compiled with the debug support will not print any debug info unless requested with -d. -q --quiet Turn off Wget's output. -v --verbose Turn on verbose output, with all the available data. The default output is verbose. -nv --no-verbose Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed. --report-speed=type Output bandwidth as type. The only accepted value is bits. -i file --input-file=file Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.) If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line. However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding "<base href="url">" to the documents or by specifying --base=url on the command line. If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified. --input-metalink=file Downloads files covered in a local Metalink file. Metalink versions 3 and 4 are supported. --keep-badhash Keeps downloaded Metalink files with a bad hash. It appends .badhash to the name of Metalink files which have a checksum mismatch, without overwriting existing files. --metalink-over-http Issues HTTP HEAD request instead of GET and extracts Metalink metadata from response headers. Then it switches to Metalink download. If no valid Metalink metadata is found, it falls back to ordinary HTTP download. Enables Content-Type: application/metalink4+xml files download/processing. --metalink-index=number Set the Metalink application/metalink4+xml metaurl ordinal NUMBER.
From 1 to the total number of "application/metalink4+xml" metaurls available. Specify 0 or inf to choose the first good one. Metaurls, such as those from --metalink-over-http, may have been sorted by the priority key's value; keep this in mind to choose the right NUMBER. --preferred-location Set preferred location for Metalink resources. This has effect if multiple resources with the same priority are available. --xattr Enable use of the file system's extended attributes to save the original URL and the Referer HTTP header value if used. Be aware that the URL might contain private information like access tokens or credentials. -F --force-html When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding "<base href="url">" to HTML, or using the --base command-line option. -B URL --base=URL Resolves relative links using URL as the point of reference, when reading links from an HTML file specified via the -i/--input-file option (together with --force-html, or when the input file was fetched remotely from a server describing it as HTML). This is equivalent to the presence of a "BASE" tag in the HTML input file, with URL as the value for the "href" attribute. For instance, if you specify http://foo/bar/a.html for URL, and Wget reads ../baz/b.html from the input file, it would be resolved to http://foo/baz/b.html. --config=FILE Specify the location of a startup file you wish to use instead of the default one(s). Use --no-config to disable reading of config files. If both --config and --no-config are given, --no-config is ignored. --rejected-log=logfile Logs all URL rejections to logfile as comma-separated values. The values include the reason for rejection, the URL and the parent URL it was found in.

Download Options

--bind-address=ADDRESS When making client TCP/IP connections, bind to ADDRESS on the local machine. ADDRESS may be specified as a hostname or IP address. This option can be useful if your machine is bound to multiple IPs. --bind-dns-address=ADDRESS [libcares only] This address overrides the route for DNS requests. If you ever need to circumvent the standard settings from /etc/resolv.conf, this option together with --dns-servers is your friend. ADDRESS must be specified either as IPv4 or IPv6 address. Wget needs to be built with libcares for this option to be available. --dns-servers=ADDRESSES [libcares only] The given address(es) override the standard nameserver addresses, e.g. as configured in /etc/resolv.conf. ADDRESSES may be specified either as IPv4 or IPv6 addresses, comma-separated. Wget needs to be built with libcares for this option to be available. -t number --tries=number Set number of tries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried. -O file --output-document=file The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If - is used as file, documents will be printed to standard output, disabling link conversion. (Use ./- to print to a file literally named -.) Use of -O is not intended to mean simply "use the name file instead of the one in the URL;" rather, it is analogous to shell redirection: wget -O file http://foo is intended to work like wget -O - http://foo > file; file will be truncated immediately, and all downloaded content will be written there.
For this reason, -N (for timestamp-checking) is not supported in combination with -O: since file is always newly created, it will always have a very new timestamp. A warning will be issued if this combination is used. Similarly, using -r or -p with -O may not work as you expect: Wget won't just download the first file to file and then download the rest to their normal names: all downloaded content will be placed in file. This was disabled in version 1.11, but has been reinstated (with a warning) in 1.11.2, as there are some cases where this behavior can actually have some use. A combination with -nc is only accepted if the given output file does not exist. Note that a combination with -k is only permitted when downloading a single document, as in that case it will just convert all relative URIs to external ones; -k makes no sense for multiple URIs when they're all being downloaded to a single file; -k can be used only when the output is a regular file. -nc --no-clobber If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved. When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented. When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored. When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N. A combination with -O/--output-document is only accepted if the given output file does not exist. Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web. --backups=backups Before (over)writing a file, back up an existing file by adding a .1 suffix (_1 on VMS) to the file name. Such backup files are rotated to .2, .3, and so on, up to backups (and lost beyond that). --no-netrc Do not try to obtain credentials from the .netrc file. By default, the .netrc file is searched for credentials in case none have been passed on the command line and authentication is required. -c --continue Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:

    wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around. Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone. If you use -c on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and overwrite the existing file entirely. Beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because "continuing" is not meaningful, no download occurs. On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file. However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an "incomplete download" candidate. Another instance where you'll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a "transfer interrupted" string into the local file. In the future a "rollback" option may be added to deal with this case. Note that -c only works with FTP servers and with HTTP servers that support the "Range" header. --start-pos=OFFSET Start downloading at zero-based position OFFSET. Offset may be expressed in bytes, kilobytes with the `k' suffix, or megabytes with the `m' suffix, etc. --start-pos takes precedence over --continue. When --start-pos and --continue are both specified, wget will emit a warning, then proceed as if --continue was absent. Server support for continued download is required, otherwise --start-pos cannot help. See -c for details. --progress=type Select the type of the progress indicator you wish to use. Legal indicators are "dot" and "bar". The "bar" indicator is used by default. It draws an ASCII progress bar graphic (a.k.a. "thermometer" display) indicating the status of retrieval. If the output is not a TTY, the "dot" indicator will be used by default. Use --progress=dot to switch to the "dot" display. It traces the retrieval by printing dots on the screen, each dot representing a fixed amount of downloaded data. The progress type can also take one or more parameters. The parameters vary based on the type selected. Parameters to type are passed by appending them to the type separated by a colon (:) like this: --progress=type:parameter1:parameter2. When using the dotted retrieval, you may set the style by specifying the type as dot:style. Different styles assign different meaning to one dot.
With the "default" style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The "binary" style has a more "computer"-like orientation---8K dots, 16-dots clusters and 48 dots per line (which makes for 384K lines). The "mega" style is suitable for downloading large files---each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M). If "mega" is not enough then you can use the "giga" style---each dot represents 1M retrieved, there are eight dots in a cluster, and 32 dots on each line (so each line contains 32M). With --progress=bar, there are currently two possible parameters, force and noscroll. When the output is not a TTY, the progress bar always falls back to "dot", even if --progress=bar was passed to Wget during invocation. This behaviour can be overridden and the "bar" output forced by using the "force" parameter as --progress=bar:force. By default, the bar style progress bar scroll the name of the file from left to right for the file being downloaded if the filename exceeds the maximum length allotted for its display. In certain cases, such as with --progress=bar:force, one may not want the scrolling filename in the progress bar. By passing the "noscroll" parameter, Wget can be forced to display as much of the filename as possible without scrolling through it. Note that you can set the default style using the "progress" command in .wgetrc. That setting may be overridden from the command line. For example, to force the bar output without scrolling, use --progress=bar:force:noscroll. --show-progress Force wget to display the progress bar in any verbosity. By default, wget only displays the progress bar in verbose mode. One may however, want wget to display the progress bar on screen in conjunction with any other verbosity modes like --no-verbose or --quiet. This is often a desired a property when invoking wget to download several small/large files. In such a case, wget could simply be invoked with this parameter to get a much cleaner output on the screen. This option will also force the progress bar to be printed to stderr when used alongside the --output-file option. -N --timestamping Turn on time-stamping. --no-if-modified-since Do not send If-Modified-Since header in -N mode. Send preliminary HEAD request instead. This has only effect in -N mode. --no-use-server-timestamps Don't set the local file's timestamp by the one on the server. By default, when a file is downloaded, its timestamps are set to match those from the remote file. This allows the use of --timestamping on subsequent invocations of wget. However, it is sometimes useful to base the local file's timestamp on when it was actually downloaded; for that purpose, the --no-use-server-timestamps option has been provided. -S --server-response Print the headers sent by HTTP servers and responses sent by FTP servers. --spider When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there. For example, you can use Wget to check your bookmarks: wget --spider --force-html -i bookmarks.html This feature needs much more work for Wget to get close to the functionality of real web spiders. -T seconds --timeout=seconds Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time. 
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings. All timeout-related options accept decimal values, as well as subsecond values. For example, 0.1 seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency. --dns-timeout=seconds Set the DNS lookup timeout to seconds seconds. DNS lookups that don't complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries. --connect-timeout=seconds Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries. --read-timeout=seconds Set the read (and write) timeout to seconds seconds. The "time" of this timeout refers to idle time: if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted. This option does not directly affect the duration of the entire download. Of course, the remote server may choose to terminate the connection sooner than this option requires. The default read timeout is 900 seconds. --limit-rate=amount Limit the download speed to amount bytes per second. Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix. For example, --limit-rate=20k will limit the retrieval rate to 20KB/s. This is useful when, for whatever reason, you don't want Wget to consume the entire available bandwidth. This option allows the use of decimal numbers, usually in conjunction with power suffixes; for example, --limit-rate=2.5k is a legal value. Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files. -w seconds --wait=seconds Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using the "h" suffix, or in days using the "d" suffix. Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry. The waiting interval specified by this option is influenced by --random-wait (see below). --waitretry=seconds If you don't want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. Wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify. By default, Wget will assume a value of 10 seconds.
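As an illustrative sketch of how these pacing options combine (the URL is a placeholder, not from the manual), a polite recursive fetch might throttle bandwidth, pause between requests, and back off on failed retries:

    wget --limit-rate=100k --wait=2 --waitretry=30 --tries=5 -r http://example.com/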
--random-wait Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified using the --wait option, in order to mask Wget's presence from such analysis. A 2001 article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses. The --random-wait option was inspired by this ill-advised recommendation to block many unrelated users from a web site due to the actions of one. --no-proxy Don't use proxies, even if the appropriate *_proxy environment variable is defined. -Q quota --quota=quota Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with k suffix), or megabytes (with m suffix). Note that quota will never affect downloading a single file. So if you specify wget -Q10k https://example.com/ls-lR.gz, all of the ls-lR.gz will be downloaded. The same goes even when several URLs are specified on the command-line. The quota is checked only at the end of each downloaded file, so it will never result in a partially downloaded file. Thus you may safely type wget -Q2m -i sites---download will be aborted after the file that exhausts the quota is completely downloaded. Setting quota to 0 or to inf unlimits the download quota. --no-dns-cache Turn off caching of DNS lookups. Normally, Wget remembers the IP addresses it looked up from DNS so it doesn't have to repeatedly contact the DNS server for the same (typically small) set of hosts it retrieves from. This cache exists in memory only; a new Wget run will contact DNS again. However, it has been reported that in some situations it is not desirable to cache host names, even for the duration of a short-running application like Wget. With this option Wget issues a new DNS lookup (more precisely, a new call to "gethostbyname" or "getaddrinfo") each time it makes a new connection. Please note that this option will not affect caching that might be performed by the resolving library or by an external caching layer, such as NSCD. If you don't understand exactly what this option does, you probably won't need it. --restrict-file-names=modes Change which characters found in remote URLs must be escaped during generation of local filenames. Characters that are restricted by this option are escaped, i.e. replaced with %HH, where HH is the hexadecimal number that corresponds to the restricted character. This option may also be used to force all alphabetical cases to be either lower- or uppercase. By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, perhaps because you are downloading to a non-native partition, or because you want to disable escaping of the control characters, or you want to further restrict characters to only those in the ASCII range of values. The modes are a comma-separated set of text values. The acceptable values are unix, windows, nocontrol, ascii, lowercase, and uppercase. The values unix and windows are mutually exclusive (one will override the other), as are lowercase and uppercase. 
Those last are special cases, as they do not change the set of characters that would be escaped, but rather force local file paths to be converted either to lower- or uppercase. When "unix" is specified, Wget escapes the character / and the control characters in the ranges 0--31 and 128--159. This is the default on Unix-like operating systems. When "windows" is given, Wget escapes the characters \, |, /, :, ?, ", *, <, >, and the control characters in the ranges 0--31 and 128--159. In addition to this, Wget in Windows mode uses + instead of : to separate host and port in local file names, and uses @ instead of ? to separate the query portion of the file name from the rest. Therefore, a URL that would be saved as www.xemacs.org:4300/search.pl?input=blah in Unix mode would be saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode. This mode is the default on Windows. If you specify nocontrol, then the escaping of the control characters is also switched off. This option may make sense when you are downloading URLs whose names contain UTF-8 characters, on a system which can save and display filenames in UTF-8 (some possible byte values used in UTF-8 byte sequences fall in the range of values designated by Wget as "controls"). The ascii mode is used to specify that any bytes whose values are outside the range of ASCII characters (that is, greater than 127) shall be escaped. This can be useful when saving filenames whose encoding does not match the one used locally. -4 --inet4-only -6 --inet6-only Force connecting to IPv4 or IPv6 addresses. With --inet4-only or -4, Wget will only connect to IPv4 hosts, ignoring AAAA records in DNS, and refusing to connect to IPv6 addresses specified in URLs. Conversely, with --inet6-only or -6, Wget will only connect to IPv6 hosts and ignore A records and IPv4 addresses. Neither option should normally be needed. By default, an IPv6-aware Wget will use the address family specified by the host's DNS record. If the DNS responds with both IPv4 and IPv6 addresses, Wget will try them in sequence until it finds one it can connect to. (Also see "--prefer-family" option described below.) These options can be used to deliberately force the use of IPv4 or IPv6 address families on dual family systems, usually to aid debugging or to deal with broken network configuration. Only one of --inet6-only and --inet4-only may be specified at the same time. Neither option is available in Wget compiled without IPv6 support. --prefer-family=none/IPv4/IPv6 When given a choice of several addresses, connect to the addresses with the specified address family first. The address order returned by DNS is used without change by default. This avoids spurious errors and connect attempts when accessing hosts that resolve to both IPv6 and IPv4 addresses from IPv4 networks. For example, www.kame.net resolves to 2001:200:0:8002:203:47ff:fea5:3085 and to 203.178.141.194. When the preferred family is "IPv4", the IPv4 address is used first; when the preferred family is "IPv6", the IPv6 address is used first; if the specified value is "none", the address order returned by DNS is used without change. Unlike -4 and -6, this option doesn't inhibit access to any address family, it only changes the order in which the addresses are accessed. Also note that the reordering performed by this option is stable---it doesn't affect the order of addresses of the same family. That is, the relative order of all IPv4 addresses and of all IPv6 addresses remains intact in all cases.
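As a brief, hypothetical illustration of combining the filename-escaping and address-family options above (the host and URL are placeholders):

    wget -4 --restrict-file-names=windows,lowercase "http://example.com/Search.pl?input=blah"
    # Per the windows-mode and lowercase rules described above, this would be
    # saved locally as something like example.com/search.pl@input=blah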
--retry-connrefused Consider "connection refused" a transient error and try again. Normally Wget gives up on a URL when it is unable to connect to the site because failure to connect is taken as a sign that the server is not running at all and that retries would not help. This option is for mirroring unreliable sites whose servers tend to disappear for short periods of time. --user=user --password=password Specify the username user and password password for both FTP and HTTP file retrieval. These parameters can be overridden using the --ftp-user and --ftp-password options for FTP connections and the --http-user and --http-password options for HTTP connections. --ask-password Prompt for a password for each connection established. Cannot be specified when --password is being used, because they are mutually exclusive. --use-askpass=command Prompt for a user and password using the specified command. If no command is specified then the command in the environment variable WGET_ASKPASS is used. If WGET_ASKPASS is not set then the command in the environment variable SSH_ASKPASS is used. You can set the default command for use-askpass in the .wgetrc. That setting may be overridden from the command line. --no-iri Turn off internationalized URI (IRI) support. Use --iri to turn it on. IRI support is activated by default. You can set the default state of IRI support using the "iri" command in .wgetrc. That setting may be overridden from the command line. --local-encoding=encoding Force Wget to use encoding as the default system encoding. That affects how Wget converts URLs specified as arguments from locale to UTF-8 for IRI support. Wget uses the function nl_langinfo() and then the "CHARSET" environment variable to get the locale. If it fails, ASCII is used. You can set the default local encoding using the "local_encoding" command in .wgetrc. That setting may be overridden from the command line. --remote-encoding=encoding Force Wget to use encoding as the default remote server encoding. That affects how Wget converts URIs found in files from remote encoding to UTF-8 during a recursive fetch. This option is only useful for IRI support, for the interpretation of non-ASCII characters. For HTTP, remote encoding can be found in the HTTP "Content-Type" header and in the HTML "Content-Type http-equiv" meta tag. You can set the default encoding using the "remoteencoding" command in .wgetrc. That setting may be overridden from the command line. --unlink Force Wget to unlink a file instead of clobbering an existing file. This option is useful for downloading to a directory with hardlinks.

Directory Options

-nd --no-directories Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n). -x --force-directories The opposite of -nd---create a hierarchy of directories, even if one would not have been created otherwise. E.g. wget -x http://fly.srk.fer.hr/robots.txt will save the downloaded file to fly.srk.fer.hr/robots.txt. -nH --no-host-directories Disable generation of host-prefixed directories. By default, invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior. --protocol-directories Use the protocol name as a directory component of local file names. For example, with this option, wget -r http://host will save to http/host/...
rather than just to host/.... --cut-dirs=number Ignore number directory components. This is useful for getting fine-grained control over the directory where recursive retrieval will be saved. Take, for example, the directory at ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where --cut-dirs comes in handy; it makes Wget not "see" number remote directory components. Here are several examples of how the --cut-dirs option works:

    No options        -> ftp.xemacs.org/pub/xemacs/
    -nH               -> pub/xemacs/
    -nH --cut-dirs=1  -> xemacs/
    -nH --cut-dirs=2  -> .
    --cut-dirs=1      -> ftp.xemacs.org/xemacs/
    ...

If you just want to get rid of the directory structure, this option is similar to a combination of -nd and -P. However, unlike -nd, --cut-dirs does not lose subdirectories---for instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed in xemacs/beta, as one would expect. -P prefix --directory-prefix=prefix Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).

HTTP Options

--default-page=name Use name as the default file name when it isn't known (i.e., for URLs that end in a slash), instead of index.html. -E --adjust-extension If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://site.com/article.cgi?25 will be saved as article.cgi?25.html. Note that filenames changed in this way will be re-downloaded every time you re-mirror a site, because Wget can't tell that the local X.html file corresponds to remote URL X (since it doesn't yet know that the URL produces output of type text/html or application/xhtml+xml). As of version 1.12, Wget will also ensure that any downloaded files of type text/css end in the suffix .css, and the option was renamed from --html-extension, to better reflect its new behavior. The old option name is still acceptable, but should now be considered deprecated. As of version 1.19.2, Wget will also ensure that any downloaded files with a "Content-Encoding" of br, compress, deflate or gzip end in the suffix .br, .Z, .zlib and .gz respectively. At some point in the future, this option may well be expanded to include suffixes for other types of content, including content types that are not parsed by Wget. --http-user=user --http-password=password Specify the username user and password password on an HTTP server. According to the type of the challenge, Wget will encode them using either the "basic" (insecure), the "digest", or the Windows "NTLM" authentication scheme. Another way to specify username and password is in the URL itself. Either method reveals your password to anyone who bothers to run "ps". To prevent the passwords from being seen, use the --use-askpass option or store them in .wgetrc or .netrc, and make sure to protect those files from other users with "chmod".
If the passwords are really important, do not leave them lying in those files either---edit the files and delete them after Wget has started the download. --no-http-keep-alive Turn off the "keep-alive" feature for HTTP downloads. Normally, Wget asks the server to keep the connection open so that, when you download more than one document from the same server, they get transferred over the same TCP connection. This saves time and at the same time reduces the load on the server. This option is useful when, for some reason, persistent (keep-alive) connections don't work for you, for example due to a server bug or due to the inability of server-side scripts to cope with the connections. --no-cache Disable server-side cache. In this case, Wget will send the remote server appropriate directives (Cache-Control: no-cache and Pragma: no-cache) to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and flushing out-of-date documents on proxy servers. Caching is allowed by default. --no-cookies Disable the use of cookies. Cookies are a mechanism for maintaining server-side state. The server sends the client a cookie using the "Set-Cookie" header, and the client responds with the same cookie upon further requests. Since cookies allow the server owners to keep track of visitors and for sites to exchange this information, some consider them a breach of privacy. The default is to use cookies; however, storing cookies is not on by default. --load-cookies file Load cookies from file before the first HTTP retrieval. file is a textual file in the format originally used by Netscape's cookies.txt file. You will typically use this option when mirroring sites that require that you be logged in to access some or all of their content. The login process typically works by the web server issuing an HTTP cookie upon receiving and verifying your credentials. The cookie is then resent by the browser when accessing that part of the site, and so proves your identity. Mirroring such a site requires Wget to send the same cookies your browser sends when communicating with the site. This is achieved by --load-cookies---simply point Wget to the location of the cookies.txt file, and it will send the same cookies your browser would send in the same situation. Different browsers keep textual cookie files in different locations: "Netscape 4.x." The cookies are in ~/.netscape/cookies.txt. "Mozilla and Netscape 6.x." Mozilla's cookie file is also named cookies.txt, located somewhere under ~/.mozilla, in the directory of your profile. The full path usually ends up looking somewhat like ~/.mozilla/default/some-weird-string/cookies.txt. "Internet Explorer." You can produce a cookie file Wget can use by using the File menu, Import and Export, Export Cookies. This has been tested with Internet Explorer 5; it is not guaranteed to work with earlier versions. "Other browsers." If you are using a different browser to create your cookies, --load-cookies will only work if you can locate or produce a cookie file in the Netscape format that Wget expects. If you cannot use --load-cookies, there might still be an alternative. If your browser supports a "cookie manager", you can use it to view the cookies used when accessing the site you're mirroring. 
Write down the name and value of the cookie, and manually instruct Wget to send those cookies, bypassing the "official" cookie support:

    wget --no-cookies --header "Cookie: <name>=<value>"

--save-cookies file Save cookies to file before exiting. This will not save cookies that have expired or that have no expiry time (so-called "session cookies"), but also see --keep-session-cookies. --keep-session-cookies When specified, causes --save-cookies to also save session cookies. Session cookies are normally not saved because they are meant to be kept in memory and forgotten when you exit the browser. Saving them is useful on sites that require you to log in or to visit the home page before you can access some pages. With this option, multiple Wget runs are considered a single browser session as far as the site is concerned. Since the cookie file format does not normally carry session cookies, Wget marks them with an expiry timestamp of 0. Wget's --load-cookies recognizes those as session cookies, but it might confuse other browsers. Also note that cookies so loaded will be treated as other session cookies, which means that if you want --save-cookies to preserve them again, you must use --keep-session-cookies again. --ignore-length Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus "Content-Length" headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte. With this option, Wget will ignore the "Content-Length" header---as if it never existed. --header=header-line Send header-line along with the rest of the headers in each HTTP request. The supplied header is sent as-is, which means it must contain name and value separated by a colon, and must not contain newlines. You may define more than one additional header by specifying --header more than once.

    wget --header='Accept-Charset: iso-8859-2' \
         --header='Accept-Language: hr' \
         http://fly.srk.fer.hr/

Specification of an empty string as the header value will clear all previous user-defined headers. As of Wget 1.10, this option can be used to override headers otherwise generated automatically. This example instructs Wget to connect to localhost, but to specify foo.bar in the "Host" header:

    wget --header="Host: foo.bar" http://localhost/

In versions of Wget prior to 1.10 such use of --header caused sending of duplicate headers. --compression=type Choose the type of compression to be used. Legal values are auto, gzip and none. If auto or gzip are specified, Wget asks the server to compress the file using the gzip compression format. If the server compresses the file and responds with the "Content-Encoding" header field set appropriately, the file will be decompressed automatically. If none is specified, wget will not ask the server to compress the file and will not decompress any server responses. This is the default. Compression support is currently experimental. In case it is turned on, please report any bugs to [email protected]. --max-redirect=number Specifies the maximum number of redirections to follow for a resource. The default is 20, which is usually far more than necessary. However, on those occasions where you want to allow more (or fewer), this is the option to use. --proxy-user=user --proxy-password=password Specify the username user and password password for authentication on a proxy server.
Wget will encode them using the "basic" authentication scheme. Security considerations similar to those with --http-password pertain here as well. --referer=url Include `Referer: url' header in HTTP request. Useful for retrieving documents with server-side processing that assume they are always being retrieved by interactive web browsers and only come out properly when Referer is set to one of the pages that point to them. --save-headers Save the headers sent by the HTTP server to the file, preceding the actual contents, with an empty line as the separator. -U agent-string --user-agent=agent-string Identify as agent-string to the HTTP server. The HTTP protocol allows the clients to identify themselves using a "User-Agent" header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as Wget/version, version being the current version number of Wget. However, some sites have been known to impose the policy of tailoring the output according to the "User-Agent"-supplied information. While this is not such a bad idea in theory, it has been abused by servers denying information to clients other than (historically) Netscape or, more frequently, Microsoft Internet Explorer. This option allows you to change the "User-Agent" line issued by Wget. Use of this option is discouraged, unless you really know what you are doing. Specifying an empty user agent with --user-agent="" instructs Wget not to send the "User-Agent" header in HTTP requests. --post-data=string --post-file=file Use POST as the method for all HTTP requests and send the specified data in the request body. --post-data sends string as data, whereas --post-file sends the contents of file. Other than that, they work in exactly the same way. In particular, they both expect content of the form "key1=value1&key2=value2", with percent-encoding for special characters; the only difference is that one expects its content as a command-line parameter and the other accepts its content from a file. In particular, --post-file is not for transmitting files as form attachments: those must appear as "key=value" data (with appropriate percent-encoding) just like everything else. Wget does not currently support "multipart/form-data" for transmitting POST data; only "application/x-www-form-urlencoded". Only one of --post-data and --post-file should be specified. Please note that wget does not require the content to be of the form "key1=value1&key2=value2", and neither does it test for it. Wget will simply transmit whatever data is provided to it. Most servers however expect the POST data to be in the above format when processing HTML Forms. When sending a POST request using the --post-file option, Wget treats the file as a binary file and will send every character in the POST request without stripping trailing newline or formfeed characters. Any other control characters in the text will also be sent as-is in the POST request. Please be aware that Wget needs to know the size of the POST data in advance. Therefore the argument to "--post-file" must be a regular file; specifying a FIFO or something like /dev/stdin won't work. It's not quite clear how to work around this limitation inherent in HTTP/1.0. Although HTTP/1.1 introduces chunked transfer that doesn't require knowing the request length in advance, a client can't use chunked unless it knows it's talking to an HTTP/1.1 server.
And it can't know that until it receives a response, which in turn requires the request to have been completed -- a chicken-and-egg problem. Note: As of version 1.15 if Wget is redirected after the POST request is completed, its behaviour will depend on the response code returned by the server. In case of a 301 Moved Permanently, 302 Moved Temporarily or 307 Temporary Redirect, Wget will, in accordance with RFC2616, continue to send a POST request. In case a server wants the client to change the Request method upon redirection, it should send a 303 See Other response code. This example shows how to log in to a server using POST and then proceed to download the desired pages, presumably only accessible to authorized users:

    # Log in to the server.  This can be done only once.
    wget --save-cookies cookies.txt \
         --post-data 'user=foo&password=bar' \
         http://example.com/auth.php

    # Now grab the page or pages we care about.
    wget --load-cookies cookies.txt \
         -p http://example.com/interesting/article.php

If the server is using session cookies to track user authentication, the above will not work because --save-cookies will not save them (and neither will browsers) and the cookies.txt file will be empty. In that case use --keep-session-cookies along with --save-cookies to force saving of session cookies. --method=HTTP-Method For the purpose of RESTful scripting, Wget allows sending of other HTTP Methods without the need to explicitly set them using --header=Header-Line. Wget will use whatever string is passed to it after --method as the HTTP Method to the server (a brief sketch of such a request appears below). --body-data=Data-String --body-file=Data-File Must be set when additional data needs to be sent to the server along with the Method specified using --method. --body-data sends string as data, whereas --body-file sends the contents of file. Other than that, they work in exactly the same way. Currently, --body-file is not for transmitting files as a whole. Wget does not currently support "multipart/form-data" for transmitting data; only "application/x-www-form-urlencoded". In the future, this may be changed so that wget sends the --body-file as a complete file instead of sending its contents to the server. Please be aware that Wget needs to know the contents of BODY Data in advance, and hence the argument to --body-file should be a regular file. See --post-file for a more detailed explanation. Only one of --body-data and --body-file should be specified. If Wget is redirected after the request is completed, Wget will suspend the current method and send a GET request until the redirection is completed. This is true for all redirection response codes except 307 Temporary Redirect, which is used to explicitly specify that the request method should not change. Another exception is when the method is set to "POST", in which case the redirection rules specified under --post-data are followed. --content-disposition If this is set to on, experimental (not fully-functional) support for "Content-Disposition" headers is enabled. This can currently result in extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by default. This option is useful for some file-downloading CGI programs that use "Content-Disposition" headers to describe what the name of a downloaded file should be. When combined with --metalink-over-http and --trust-server-names, a Content-Type: application/metalink4+xml file is named using the "Content-Disposition" filename field, if available.
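As referenced above under --method, a hedged sketch of a RESTful request with a body (the endpoint, payload, and header value are placeholders, not part of the manual):

    wget --method=PUT \
         --body-data='{"name": "example"}' \
         --header='Content-Type: application/json' \
         http://example.com/api/items/1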
--content-on-error If this is set to on, wget will not skip the content when the server responds with an HTTP status code that indicates error. --trust-server-names If this is set, on a redirect, the local file name will be based on the redirection URL. By default the local file name is based on the original URL. When doing recursive retrieving this can be helpful because in many web sites redirected URLs correspond to an underlying file structure, while link URLs do not. --auth-no-challenge If this option is given, Wget will send Basic HTTP authentication information (plaintext username and password) for all requests, just like Wget 1.10.2 and prior did by default. Use of this option is not recommended, and is intended only to support a few obscure servers, which never send HTTP authentication challenges, but accept unsolicited auth info, say, in addition to form-based authentication. --retry-on-host-error Consider host errors, such as "Temporary failure in name resolution", as non-fatal, transient errors. --retry-on-http-error=code[,code,...] Consider given HTTP response codes as non-fatal, transient errors. Supply a comma-separated list of 3-digit HTTP response codes as argument. Useful to work around special circumstances where retries are required, but the server responds with an error code normally not retried by Wget. Such errors might be 503 (Service Unavailable) and 429 (Too Many Requests). Retries enabled by this option are performed subject to the normal retry timing and retry count limitations of Wget. Using this option is intended to support special use cases only and is generally not recommended, as it can force retries even in cases where the server is actually trying to decrease its load. Please use wisely and only if you know what you are doing.

HTTPS (SSL/TLS) Options

To support encrypted HTTP (HTTPS) downloads, Wget must be compiled with an external SSL library. The current default is GnuTLS. In addition, Wget also supports HSTS (HTTP Strict Transport Security). If Wget is compiled without SSL support, none of these options are available. --secure-protocol=protocol Choose the secure protocol to be used. Legal values are auto, SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2, TLSv1_3 and PFS. If auto is used, the SSL library is given the liberty of choosing the appropriate protocol automatically, which is achieved by sending a TLSv1 greeting. This is the default. Specifying SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2 or TLSv1_3 forces the use of the corresponding protocol. This is useful when talking to old and buggy SSL server implementations that make it hard for the underlying SSL library to choose the correct protocol version. Fortunately, such servers are quite rare. Specifying PFS enforces the use of the so-called Perfect Forward Secrecy cipher suites. In short, PFS adds security by creating a one-time key for each SSL connection. It has a bit more CPU impact on client and server. We use ciphers known to be secure (e.g. no MD4) and the TLS protocol. This mode also explicitly excludes non-PFS key exchange methods, such as RSA. --https-only When in recursive mode, only HTTPS links are followed. --ciphers Set the cipher list string. Typically this string sets the cipher suites and other SSL/TLS options that the user wishes to be used, in a set order of preference (GnuTLS calls it 'priority string'). This string will be fed verbatim to the SSL/TLS engine (OpenSSL or GnuTLS) and hence its format and syntax are dependent on that. Wget will not process or manipulate it in any way.
Refer to the OpenSSL or GnuTLS documentation for more information.

--no-check-certificate
Don't check the server certificate against the available certificate authorities. Also don't require the URL host name to match the common name presented by the certificate. As of Wget 1.10, the default is to verify the server's certificate against the recognized certificate authorities, breaking the SSL handshake and aborting the download if the verification fails. Although this provides more secure downloads, it does break interoperability with some sites that worked with previous Wget versions, particularly those using self-signed, expired, or otherwise invalid certificates. This option forces an "insecure" mode of operation that turns the certificate verification errors into warnings and allows you to proceed. If you encounter "certificate verification" errors or ones saying that "common name doesn't match requested host name", you can use this option to bypass the verification and proceed with the download. Only use this option if you are otherwise convinced of the site's authenticity, or if you really don't care about the validity of its certificate. It is almost always a bad idea not to check the certificates when transmitting confidential or important data. For self-signed/internal certificates, you should download the certificate and verify against that instead of forcing this insecure mode. If you are really sure you do not want any certificate verification, you can specify --check-certificate=quiet to tell wget not to print any warning about invalid certificates, albeit in most cases this is the wrong thing to do.

--certificate=file
Use the client certificate stored in file. This is needed for servers that are configured to require certificates from the clients that connect to them. Normally a certificate is not required and this switch is optional.

--certificate-type=type
Specify the type of the client certificate. Legal values are PEM (assumed by default) and DER, also known as ASN1.

--private-key=file
Read the private key from file. This allows you to provide the private key in a file separate from the certificate.

--private-key-type=type
Specify the type of the private key. Accepted values are PEM (the default) and DER.

--ca-certificate=file
Use file as the file with the bundle of certificate authorities ("CA") to verify the peers. The certificates must be in PEM format. Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.

--ca-directory=directory
Specifies directory containing CA certificates in PEM format. Each file contains one CA certificate, and the file name is based on a hash value derived from the certificate. This is achieved by processing a certificate directory with the "c_rehash" utility supplied with OpenSSL. Using --ca-directory is more efficient than --ca-certificate when many certificates are installed because it allows Wget to fetch certificates on demand. Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.

--crl-file=file
Specifies a CRL file in file. This is needed for certificates that have been revoked by the CAs.

--pinnedpubkey=file/hashes
Tells wget to use the specified public key file (or hashes) to verify the peer.
This can be a path to a file which contains a single public key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by "sha256//" and separated by ";"

When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this certificate and if it does not exactly match the public key(s) provided to this option, wget will abort the connection before sending or receiving any data.

--random-file=file
[OpenSSL and LibreSSL only] Use file as the source of random data for seeding the pseudo-random number generator on systems without /dev/urandom. On such systems the SSL library needs an external source of randomness to initialize. Randomness may be provided by EGD (see --egd-file below) or read from an external source specified by the user. If this option is not specified, Wget looks for random data in $RANDFILE or, if that is unset, in $HOME/.rnd. If you're getting the "Could not seed OpenSSL PRNG; disabling SSL." error, you should provide random data using some of the methods described above.

--egd-file=file
[OpenSSL only] Use file as the EGD socket. EGD stands for Entropy Gathering Daemon, a user-space program that collects data from various unpredictable system sources and makes it available to other programs that might need it. Encryption software, such as the SSL library, needs sources of non-repeating randomness to seed the random number generator used to produce cryptographically strong keys. OpenSSL allows the user to specify their own source of entropy using the "RAND_FILE" environment variable. If this variable is unset, or if the specified file does not produce enough randomness, OpenSSL will read random data from the EGD socket specified using this option. If this option is not specified (and the equivalent startup command is not used), EGD is never contacted. EGD is not needed on modern Unix systems that support /dev/urandom.

--no-hsts
Wget supports HSTS (HTTP Strict Transport Security, RFC 6797) by default. Use --no-hsts to make Wget act as a non-HSTS-compliant UA. As a consequence, Wget would ignore all the "Strict-Transport-Security" headers, and would not enforce any existing HSTS policy.

--hsts-file=file
By default, Wget stores its HSTS database in ~/.wget-hsts. You can use --hsts-file to override this. Wget will use the supplied file as the HSTS database. Such a file must conform to the correct HSTS database format used by Wget. If Wget cannot parse the provided file, the behaviour is unspecified. Wget's HSTS database is a plain text file. Each line contains an HSTS entry (i.e. a site that has issued a "Strict-Transport-Security" header and that therefore has specified a concrete HSTS policy to be applied). Lines starting with a hash ("#") are ignored by Wget. Please note that in spite of this convenient human-readable format, hand-hacking the HSTS database is generally not a good idea.

An HSTS entry line consists of several fields separated by one or more whitespace characters: "<hostname> SP [<port>] SP <include subdomains> SP <created> SP <max-age>"

The hostname and port fields indicate the hostname and port to which the given HSTS policy applies. The port field may be zero, and in most cases it will be. That means that the port number will not be taken into account when deciding whether such HSTS policy should be applied on a given request (only the hostname will be evaluated).
When port is different from zero, both the target hostname and the port will be evaluated and the HSTS policy will only be applied if both of them match. This feature has been included for testing/development purposes only. The Wget testsuite (in testenv/) creates HSTS databases with explicit ports with the purpose of ensuring Wget's correct behaviour. Applying HSTS policies to ports other than the default ones is discouraged by RFC 6797 (see Appendix B "Differences between HSTS Policy and Same-Origin Policy"). Thus, this functionality should not be used in production environments and port will typically be zero. The last three fields do what they are expected to. The field include_subdomains can either be 1 or 0 and it signals whether the subdomains of the target domain should be part of the given HSTS policy as well. The created and max-age fields hold the timestamp values of when such entry was created (first seen by Wget) and the HSTS-defined value 'max-age', which states how long that HSTS policy should remain active, measured in seconds elapsed since the timestamp stored in created. Once that time has passed, that HSTS policy will no longer be valid and will eventually be removed from the database.

If you supply your own HSTS database via --hsts-file, be aware that Wget may modify the provided file if any change occurs between the HSTS policies requested by the remote servers and those in the file. When Wget exits, it effectively updates the HSTS database by rewriting the database file with the new entries. If the supplied file does not exist, Wget will create one. This file will contain the new HSTS entries. If no HSTS entries were generated (no "Strict-Transport-Security" headers were sent by any of the servers) then no file will be created, not even an empty one. This behaviour applies to the default database file (~/.wget-hsts) as well: it will not be created until some server enforces an HSTS policy. Care is taken not to override possible changes made by other Wget processes at the same time over the HSTS database. Before dumping the updated HSTS entries on the file, Wget will re-read it and merge the changes. Using a custom HSTS database and/or modifying an existing one is discouraged. For more information about the potential security threats arising from such practice, see section 14 "Security Considerations" of RFC 6797, especially section 14.9 "Creative Manipulation of HSTS Policy Store".

--warc-file=file
Use file as the destination WARC file.

--warc-header=string
Insert string into the warcinfo record.

--warc-max-size=size
Set the maximum size of the WARC files to size.

--warc-cdx
Write CDX index files.

--warc-dedup=file
Do not store records listed in this CDX file.

--no-warc-compression
Do not compress WARC files with GZIP.

--no-warc-digests
Do not calculate SHA1 digests.

--no-warc-keep-log
Do not store the log file in a WARC record.

--warc-tempdir=dir
Specify the location for temporary files created by the WARC writer.

FTP Options

--ftp-user=user
--ftp-password=password
Specify the username user and password password on an FTP server. Without this, or the corresponding startup option, the password defaults to -wget@, normally used for anonymous FTP. Another way to specify username and password is in the URL itself. Either method reveals your password to anyone who bothers to run "ps". To prevent the passwords from being seen, store them in .wgetrc or .netrc, and make sure to protect those files from other users with "chmod".
If the passwords are really important, do not leave them lying in those files either---edit the files and delete them after Wget has started the download.

--no-remove-listing
Don't remove the temporary .listing files generated by FTP retrievals. Normally, these files contain the raw directory listings received from FTP servers. Not removing them can be useful for debugging purposes, or when you want to be able to easily check on the contents of remote server directories (e.g. to verify that a mirror you're running is complete). Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making .listing a symbolic link to /etc/passwd or something and asking "root" to run Wget in his or her directory. Depending on the options used, either Wget will refuse to write to .listing, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and replaced with the actual .listing file, or the listing will be written to a .listing.number file. Even so, "root" should never run Wget in a non-trusted user's directory. A user could do something as simple as linking index.html to /etc/passwd and asking "root" to run Wget with -N or -r so the file will be overwritten.

--no-glob
Turn off FTP globbing. Globbing refers to the use of shell-like special characters (wildcards), like *, ?, [ and ] to retrieve more than one file from the same directory at once, like:

    wget ftp://gnjilux.srk.fer.hr/*.msg

By default, globbing will be turned on if the URL contains a globbing character. This option may be used to turn globbing on or off permanently. You may have to quote the URL to protect it from being expanded by your shell. Globbing makes Wget look for a directory listing, which is system-specific. This is why it currently works only with Unix FTP servers (and the ones emulating Unix "ls" output).

--no-passive-ftp
Disable the use of the passive FTP transfer mode. Passive FTP mandates that the client connect to the server to establish the data connection rather than the other way around. If the machine is connected to the Internet directly, both passive and active FTP should work equally well. Behind most firewall and NAT configurations passive FTP has a better chance of working. However, in some rare firewall configurations, active FTP actually works when passive FTP doesn't. If you suspect this to be the case, use this option, or set "passive_ftp=off" in your init file.

--preserve-permissions
Preserve remote file permissions instead of permissions set by umask.

--retr-symlinks
By default, when retrieving FTP directories recursively and a symbolic link is encountered, the symbolic link is traversed and the pointed-to files are retrieved. Currently, Wget does not traverse symbolic links to directories to download them recursively, though this feature may be added in the future. When --retr-symlinks=no is specified, the linked-to file is not downloaded. Instead, a matching symbolic link is created on the local file system. The pointed-to file will not be retrieved unless this recursive retrieval would have encountered it separately and downloaded it anyway. This option poses a security risk where a malicious FTP server may cause Wget to write to files outside of the intended directories through a specially crafted .LISTING file.
Note that when retrieving a file (not a directory) because it was specified on the command-line, rather than because it was recursed to, this option has no effect. Symbolic links are always traversed in this case.

FTPS Options

--ftps-implicit
This option tells Wget to use FTPS implicitly. Implicit FTPS consists of initializing SSL/TLS from the very beginning of the control connection. This option does not send an "AUTH TLS" command: it assumes the server speaks FTPS and directly starts an SSL/TLS connection. If the attempt is successful, the session continues just like regular FTPS ("PBSZ" and "PROT" are sent, etc.). Implicit FTPS is no longer a requirement for FTPS implementations, and thus many servers may not support it. If --ftps-implicit is passed and no explicit port number is specified, the default port for implicit FTPS, 990, will be used, instead of the default port for the "normal" (explicit) FTPS which is the same as that of FTP, 21.

--no-ftps-resume-ssl
Do not resume the SSL/TLS session in the data channel. When starting a data connection, Wget tries to resume the SSL/TLS session previously started in the control connection. SSL/TLS session resumption avoids performing an entirely new handshake by reusing the SSL/TLS parameters of a previous session. Typically, the FTPS servers want it that way, so Wget does this by default. Under rare circumstances however, one might want to start an entirely new SSL/TLS session in every data connection. This is what --no-ftps-resume-ssl is for.

--ftps-clear-data-connection
All the data connections will be in plain text. Only the control connection will be under SSL/TLS. Wget will send a "PROT C" command to achieve this, which must be approved by the server.

--ftps-fallback-to-ftp
Fall back to FTP if FTPS is not supported by the target server. For security reasons, this option is not asserted by default. The default behaviour is to exit with an error. If a server does not successfully reply to the initial "AUTH TLS" command, or in the case of implicit FTPS, if the initial SSL/TLS connection attempt is rejected, it is considered that such a server does not support FTPS.

Recursive Retrieval Options

-r
--recursive
Turn on recursive retrieving. The default maximum depth is 5.

-l depth
--level=depth
Set the maximum number of subdirectories that Wget will recurse into to depth. In order to prevent one from accidentally downloading very large websites when using recursion, this is limited to a depth of 5 by default, i.e., it will traverse at most 5 directories deep starting from the provided URL. Set -l 0 or -l inf for infinite recursion depth.

    wget -r -l 0 http://<site>/1.html

Ideally, one would expect this to download just 1.html, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite recursion. To download a single HTML page (or a handful of them), specify them all on the command line and leave away -r and -l. To download the essential items to view a single HTML page, see page requisites.

--delete-after
This option tells Wget to delete every single file it downloads, after having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:

    wget -r -nd --delete-after http://whatever.com/~popular/page/

The -r option is to retrieve recursively, and -nd to not create directories. Note that --delete-after deletes files on the local machine. It does not issue the DELE command to remote FTP sites, for instance.
Also note that when --delete-after is specified, --convert-links is ignored, so .orig files are simply not created in the first place. -k --convert-links After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc. Each link will be changed in one of the two ways: • The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link. Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories. • The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to. Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif . Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory. Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads. --convert-file-only This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to as the "basename", although we avoid that term here in order not to cause confusion. It works particularly well in conjunction with --adjust-extension, although this coupling is not enforced. It proves useful to populate Internet caches with files downloaded from different hosts. Example: if some link points to //foo.com/bar.cgi?xyz with --adjust-extension asserted and its local destination is intended to be ./foo.com/bar.cgi?xyz.css, then the link would be converted to //foo.com/bar.cgi?xyz.css. Note that only the filename part has been modified. The rest of the URL has been left untouched, including the net path ("//") which would otherwise be processed by Wget and converted to the effective scheme (ie. "http://"). -K --backup-converted When converting a file, back up the original version with a .orig suffix. Affects the behavior of -N. -m --mirror Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing. -p --page-requisites This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets. Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with "leaf documents" that are missing their requisites. 
For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number. If one executes the command:

    wget -r -l 2 http://<site>/1.html

then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:

    wget -r -l 2 -p http://<site>/1.html

all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,

    wget -r -l 1 -p http://<site>/1.html

will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:

    wget -r -l 0 -p http://<site>/1.html

would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite recursion. To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or their) requisites, simply leave off -r and -l:

    wget -p http://<site>/1.html

Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

    wget -E -H -k -K -p http://<site>/<document>

To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".

--strict-comments
Turn on strict parsing of HTML comments. The default is to terminate comments at the first occurrence of -->. According to specifications, HTML comments are expressed as SGML declarations. A declaration is special markup that begins with <! and ends with >, such as <!DOCTYPE ...>, that may contain comments between a pair of -- delimiters. HTML comments are "empty declarations", SGML declarations without any non-comment text. Therefore, <!--foo--> is a valid comment, and so is <!--one-- --two-->, but <!--1--2--> is not. On the other hand, most HTML writers don't perceive comments as anything other than text delimited with <!-- and -->, which is not quite the same. For example, something like <!------------> works as a valid comment as long as the number of dashes is a multiple of four (!). If not, the comment technically lasts until the next --, which may be at the other end of the document. Because of this, many popular browsers completely ignore the specification and implement what users have come to expect: comments delimited with <!-- and -->. Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but had the misfortune of containing non-compliant comments. Beginning with version 1.9, Wget has joined the ranks of clients that implement "naive" comments, terminating each comment at the first occurrence of -->. If, for whatever reason, you want strict comment parsing, use this option to turn it on.
Recursive Accept/Reject Options

-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to accept or reject. Note that if any of the wildcard characters, *, ?, [ or ], appear in an element of acclist or rejlist, it will be treated as a pattern, rather than a suffix. In this case, you have to enclose the pattern into quotes to prevent your shell from expanding it, like in -A "*.mp3" or -A '*.mp3'.

--accept-regex urlregex
--reject-regex urlregex
Specify a regular expression to accept or reject the complete URL.

--regex-type regextype
Specify the regular expression type. Possible types are posix or pcre. Note that to be able to use pcre type, wget has to be compiled with libpcre support.

-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated list of domains. Note that it does not turn on -H.

--exclude-domains domain-list
Specify the domains that are not to be followed.

--follow-ftp
Follow FTP links from HTML documents. Without this option, Wget will ignore all the FTP links.

--follow-tags=list
Wget has an internal table of HTML tag / attribute pairs that it considers when looking for linked documents during a recursive retrieval. If a user wants only a subset of those tags to be considered, however, he or she should specify such tags in a comma-separated list with this option.

--ignore-tags=list
This is the opposite of the --follow-tags option. To skip certain HTML tags when recursively looking for documents to download, specify them in a comma-separated list. In the past, this option was the best bet for downloading a single page and its requisites, using a command-line like:

    wget --ignore-tags=a,area -H -k -K -r http://<site>/<document>

However, the author of this option came across a page with tags like "<LINK REL="home" HREF="/">" and came to the realization that specifying tags to ignore was not enough. One can't just tell Wget to ignore "<LINK>", because then stylesheets will not be downloaded. Now the best bet for downloading a single page and its requisites is the dedicated --page-requisites option.

--ignore-case
Ignore case when matching files and directories. This influences the behavior of -R, -A, -I, and -X options, as well as globbing implemented when downloading from FTP sites. For example, with this option, -A "*.txt" will match file1.txt, but also file2.TXT, file3.TxT, and so on. The quotes in the example are to prevent the shell from expanding the pattern.

-H
--span-hosts
Enable spanning across hosts when doing recursive retrieving.

-L
--relative
Follow relative links only. Useful for retrieving a specific home page without any distractions, not even those from the same hosts.

-I list
--include-directories=list
Specify a comma-separated list of directories you wish to follow when downloading. Elements of list may contain wildcards.

-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude from download. Elements of list may contain wildcards.

-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
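As an example of combining the accept/reject and traversal options above (the URL is a placeholder), one might fetch only the PDF files below a given directory, with unlimited depth but without ever ascending to the parent:

    wget -r -np -l inf -A '*.pdf' https://example.com/docs/

The pattern is quoted, as noted under -A/-R, so the shell does not expand the wildcard before Wget sees it.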
# wget

> Download files from the Web. Supports HTTP, HTTPS, and FTP. More
> information: https://www.gnu.org/software/wget.

* Download the contents of a URL to a file (named "foo" in this case):

`wget {{https://example.com/foo}}`

* Download the contents of a URL to a file (named "bar" in this case):

`wget --output-document {{bar}} {{https://example.com/foo}}`

* Download a single web page and all its resources (scripts, stylesheets, images, etc.) with 3-second intervals between requests:

`wget --page-requisites --convert-links --wait=3 {{https://example.com/somepage.html}}`

* Download all listed files within a directory and its sub-directories (does not download embedded page elements):

`wget --mirror --no-parent {{https://example.com/somepath/}}`

* Limit the download speed and the number of connection retries:

`wget --limit-rate={{300k}} --tries={{100}} {{https://example.com/somepath/}}`

* Download a file from an HTTP server using Basic Auth (also works for FTP):

`wget --user={{username}} --password={{password}} {{https://example.com}}`

* Continue an incomplete download:

`wget --continue {{https://example.com}}`

* Download all URLs stored in a text file to a specific directory:

`wget --directory-prefix {{path/to/directory}} --input-file {{URLs.txt}}`
systemd-mount
systemd-mount may be used to create and start a transient .mount or .automount unit of the file system WHAT on the mount point WHERE.

In many ways, systemd-mount is similar to the lower-level mount(8) command; however, instead of executing the mount operation directly and immediately, systemd-mount schedules it through the service manager job queue, so that it may pull in further dependencies (such as parent mounts, or a file system checker to execute a priori), and may make use of the auto-mounting logic.

The command takes either one or two arguments. If only one argument is specified it should refer to a block device or regular file containing a file system (e.g. "/dev/sdb1" or "/path/to/disk.img"). The block device or image file is then probed for a file system label and other metadata, and is mounted to a directory below /run/media/system/ whose name is generated from the file system label. In this mode the block device or image file must exist at the time of invocation of the command, so that it may be probed. If the device is found to be a removable block device (e.g. a USB stick), an automount point is created instead of a regular mount point (i.e. the --automount= option is implied, see below).

If two arguments are specified, the first indicates the mount source (the WHAT) and the second indicates the path to mount it on (the WHERE). In this mode no probing of the source is attempted, and a backing device node doesn't have to exist. However, if this mode is combined with --discover, device node probing for additional metadata is enabled, and – much like in the single-argument case discussed above – the specified device has to exist at the time of invocation of the command.

Use the --list command to show a terse table of all local, known block devices with file systems that may be mounted with this command. systemd-umount can be used to unmount a mount or automount point. It is the same as systemd-mount --umount.

The following options are understood:

--no-block
Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemd-mount will wait until the mount or automount unit's start-up is completed. By passing this argument, it is only verified and enqueued.

-l, --full
Do not ellipsize the output when --list is specified.

--no-pager
Do not pipe output into a pager.

--no-legend
Do not print the legend, i.e. column headers and the footer with hints.

--no-ask-password
Do not query the user for authentication for privileged operations.

--quiet, -q
Suppresses additional informational output while running.

--discover
Enable probing of the mount source. This switch is implied if a single argument is specified on the command line. If passed, additional metadata is read from the device to enhance the unit to create. For example, a descriptive string for the transient units is generated from the file system label and device model. Moreover if a removable block device (e.g. USB stick) is detected an automount unit instead of a regular mount unit is created, with a short idle timeout, in order to ensure the file-system is placed in a clean state quickly after each access.

--type=, -t
Specifies the file system type to mount (e.g. "vfat" or "ext4"). If omitted or set to "auto", the file system type is determined automatically.

--options=, -o
Additional mount options for the mount point.

--owner=USER
Let the specified user USER own the mounted file system.
This is done by appending uid= and gid= options to the list of mount options. Only certain file systems support this option. --fsck= Takes a boolean argument, defaults to on. Controls whether to run a file system check immediately before the mount operation. In the automount case (see --automount= below) the check will be run the moment the first access to the device is made, which might slightly delay the access. --description= Provide a description for the mount or automount unit. See Description= in systemd.unit(5). --property=, -p Sets a unit property for the mount unit that is created. This takes an assignment in the same format as systemctl(1)'s set-property command. --automount= Takes a boolean argument. Controls whether to create an automount point or a regular mount point. If true an automount point is created that is backed by the actual file system at the time of first access. If false a plain mount point is created that is backed by the actual file system immediately. Automount points have the benefit that the file system stays unmounted and hence in clean state until it is first accessed. In automount mode the --timeout-idle-sec= switch (see below) may be used to ensure the mount point is unmounted automatically after the last access and an idle period passed. If this switch is not specified it defaults to false. If not specified and --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, it is set to true, in order to increase the chance that the file system is in a fully clean state if the device is unplugged abruptly. -A Equivalent to --automount=yes. --timeout-idle-sec= Takes a time value that controls the idle timeout in automount mode. If set to "infinity" (the default) no automatic unmounts are done. Otherwise the file system backing the automount point is detached after the last access and the idle timeout passed. See systemd.time(7) for details on the time syntax supported. This option has no effect if only a regular mount is established, and automounting is not used. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, --timeout-idle-sec=1s is implied. --automount-property= Similar to --property=, but applies additional properties to the automount unit created, instead of the mount unit. --bind-device This option only has an effect in automount mode, and controls whether the automount unit shall be bound to the backing device's lifetime. If set, the automount unit will be stopped automatically when the backing device vanishes. By default the automount unit stays around, and subsequent accesses will block until backing device is replugged. This option has no effect in case of non-device mounts, such as network or virtual file system mounts. Note that if --discover is used (or only a single argument passed, which implies --discover, see above), and the file system block device is detected to be removable, this option is implied. --list Instead of establishing a mount or automount point, print a terse list of block devices containing file systems that may be mounted with "systemd-mount", along with useful metadata such as labels, etc. -u, --umount Stop the mount and automount units corresponding to the specified mount points WHERE or the devices WHAT. 
systemd-mount with this option or systemd-umount can take multiple arguments which can be mount points, devices, /etc/fstab style node names, or backing files corresponding to loop devices, like systemd-mount --umount /path/to/umount /dev/sda1 UUID=xxxxxx-xxxx LABEL=xxxxx /path/to/disk.img. Note that when -H or -M is specified, only absolute paths to mount points are supported.

-G, --collect
Unload the transient unit after it completed, even if it failed. Normally, without this option, all mount units that failed to mount are kept in memory until the user explicitly resets their failure state with systemctl reset-failed or an equivalent command. On the other hand, units that stopped successfully are unloaded immediately. If this option is turned on the "garbage collection" of units is more aggressive, and unloads units regardless of whether they exited successfully or failed. This option is a shortcut for --property=CollectMode=inactive-or-failed, see the explanation for CollectMode= in systemd.unit(5) for further information.

--user
Talk to the service manager of the calling user, rather than the service manager of the system.

--system
Talk to the service manager of the system. This is the implied default.

-H, --host=
Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets.

-M, --machine=
Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied.

-h, --help
Print a short help text and exit.

--version
Print a short version string and exit.
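As a sketch of how the options above combine (device and mount point are placeholders), a removable disk could be set up as an automount point that is unmounted again after a minute of inactivity:

    systemd-mount --automount=yes --timeout-idle-sec=60 /dev/sdb1 /mnt/usb

and torn down later with:

    systemd-mount --umount /mnt/usb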
# systemd-mount > Establish and destroy transient mount or auto-mount points. More > information: https://www.freedesktop.org/software/systemd/man/systemd- > mount.html. * Mount a file system (image or block device) at `/run/media/system/LABEL` where LABEL is the filesystem label or the device name if there is no label: `systemd-mount {{path/to/file_or_device}}` * Mount a file system (image or block device) at a specific location: `systemd-mount {{path/to/file_or_device}} {{path/to/mount_point}}` * Show a list of all local, known block devices with file systems that may be mounted: `systemd-mount --list` * Create an automount point that mounts the actual file system at the time of first access: `systemd-mount --automount=yes {{path/to/file_or_device}}` * Unmount one or more devices: `systemd-mount --umount {{path/to/mount_point_or_device1}} {{path/to/mount_point_or_device2}}` * Mount a file system (image or block device) with a specific file system type: `systemd-mount --type={{file_system_type}} {{path/to/file_or_device}} {{path/to/mount_point}}` * Mount a file system (image or block device) with additional mount options: `systemd-mount --options={{mount_options}} {{path/to/file_or_device}} {{path/to/mount_point}}`
date
The date utility shall write the date and time to standard output or attempt to set the system date and time. By default, the current date and time shall be written. If an operand beginning with '+' is specified, the output format of date shall be controlled by the conversion specifications and other text in the operand. The date utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following option shall be supported: -u Perform operations as if the TZ environment variable was set to the string "UTC0", or its equivalent historical value of "GMT0". Otherwise, date shall use the timezone indicated by the TZ environment variable or the system default if that variable is unset or null.
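For example, since an operand beginning with '+' controls the output format, the current date can be written with explicit conversion specifications, and -u switches the result to UTC:

    date '+%Y-%m-%d %H:%M:%S'
    date -u '+%Y-%m-%d %H:%M:%S'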
# date > Set or display the system date. More information: > https://ss64.com/osx/date.html. * Display the current date using the default locale's format: `date +%c` * Display the current date in UTC and ISO 8601 format: `date -u +%Y-%m-%dT%H:%M:%SZ` * Display the current date as a Unix timestamp (seconds since the Unix epoch): `date +%s` * Display a specific date (represented as a Unix timestamp) using the default format: `date -r 1473305798`
mcookie
mcookie generates a 128-bit random hexadecimal number for use with the X authority system. Typical usage:

    xauth add :0 . `mcookie`

The "random" number generated is actually the MD5 message digest of random information coming from one of the following sources: the getrandom(2) system call, /dev/urandom, /dev/random, or the libc pseudo-random functions, in that order of preference. See also the option --file.

-f, --file file
Use this file as an additional source of randomness (for example /dev/urandom). When file is '-', characters are read from standard input.

-m, --max-size number
Read from file only this number of bytes. This option is meant to be used when reading additional randomness from a file or device. The number argument may be followed by the multiplicative suffixes KiB=1024, MiB=1024*1024, and so on for GiB, TiB, PiB, EiB, ZiB and YiB (the "iB" is optional, e.g., "K" has the same meaning as "KiB") or the suffixes KB=1000, MB=1000*1000, and so on for GB, TB, PB, EB, ZB and YB.

-v, --verbose
Inform where randomness originated, with amount of entropy read from each source.

-h, --help
Display help text and exit.

-V, --version
Print version and exit.
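To illustrate the options above (the seed file name is a placeholder), additional randomness could be mixed in from a file, capped at 1024 bytes, while reporting where the entropy came from:

    mcookie --file seed.bin --max-size 1024 --verbose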
# mcookie > Generates random 128-bit hexadecimal numbers. More information: > https://manned.org/mcookie. * Generate a random number: `mcookie` * Generate a random number, using the contents of a file as a seed for the randomness: `mcookie --file {{path/to/file}}` * Generate a random number, using a specific number of bytes from a file as a seed for the randomness: `mcookie --file {{path/to/file}} --max-size {{number_of_bytes}}` * Print the details of the randomness used, such as the origin and seed for each source: `mcookie --verbose`
scriptreplay
This program replays a typescript, using timing information to ensure that output happens in the same rhythm as it originally appeared when the script was recorded.

The replay simply displays the information again; the programs that were run when the typescript was being recorded are not run again. Since the same information is simply being displayed, scriptreplay is only guaranteed to work properly if run on the same type of terminal the typescript was recorded on. Otherwise, any escape characters in the typescript may be interpreted differently by the terminal to which scriptreplay is sending its output.

The timing information is what script(1) outputs to the file specified by --log-timing. By default, the typescript to display is assumed to be named typescript, but other filenames may be specified, as the second parameter or with option --log-out. If the third parameter or --divisor is specified, it is used as a speed-up multiplier. For example, a speed-up of 2 makes scriptreplay go twice as fast, and a speed-down of 0.1 makes it go ten times slower than the original session.

-I, --log-in file
File containing script's terminal input.

-O, --log-out file
File containing script's terminal output.

-B, --log-io file
File containing script's terminal output and input.

-t, --timing file
File containing script's timing output. This option overrides old-style arguments.

-T, --log-timing file
This is an alias for -t, maintained for compatibility with script(1) command-line options.

-s, --typescript file
File containing script's terminal output. Deprecated alias to --log-out. This option overrides old-style arguments.

-c, --cr-mode mode
Specifies how to use the CR (0x0D, carriage return) character from log files. The default mode is auto, in this case CR is replaced with line break for stdin log, because otherwise scriptreplay would overwrite the same line. The other modes are never and always.

-d, --divisor number
Speed up the replay displaying this number of times. The argument is a floating-point number. It's called divisor because it divides the timings by this factor. This option overrides old-style arguments.

-m, --maxdelay number
Set the maximum delay between updates to number of seconds. The argument is a floating-point number. This can be used to avoid long pauses in the typescript replay.

--summary
Display details about the session recorded in the specified timing file and exit. The session has to be recorded using advanced format (see script(1) option --logging-format for more details).

-x, --stream type
Forces scriptreplay to print only the specified stream. The supported stream types are in, out, signal, or info. This option is recommended for multi-stream logs (e.g., --log-io) in order to print only specified data.

-h, --help
Display help text and exit.

-V, --version
Print version and exit.
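A minimal end-to-end sketch, assuming a recent util-linux script(1) and placeholder file names: record a session, then play it back at double speed:

    script --log-timing timing.log --log-out session.log
    # ... interact with the shell, then exit ...
    scriptreplay --log-timing timing.log --log-out session.log --divisor 2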
# scriptreplay

> Replay a typescript created by the `script` command to `stdout`. More
> information: https://manned.org/scriptreplay.

* Replay a typescript at the speed it was recorded:

`scriptreplay {{path/to/timing_file}} {{path/to/typescript}}`

* Replay a typescript at double the original speed:

`scriptreplay {{path/to/timing_file}} {{path/to/typescript}} 2`

* Replay a typescript at half the original speed:

`scriptreplay {{path/to/timing_file}} {{path/to/typescript}} 0.5`
git-repack
This command is used to combine all objects that do not currently reside in a "pack", into a pack. It can also be used to re-organize existing packs into a single, more efficient pack. A pack is a collection of objects, individually compressed, with delta compression applied, stored in a single file, with an associated index file. Packs are used to reduce the load on mirror systems, backup engines, disk storage, etc. -a Instead of incrementally packing the unpacked objects, pack everything referenced into a single pack. Especially useful when packing a repository that is used for private development. Use with -d. This will clean up the objects that git prune leaves behind, but git fsck --full --dangling shows as dangling. Note that users fetching over dumb protocols will have to fetch the whole new pack in order to get any contained object, no matter how many other objects in that pack they already have locally. Promisor packfiles are repacked separately: if there are packfiles that have an associated ".promisor" file, these packfiles will be repacked into another separate pack, and an empty ".promisor" file corresponding to the new separate pack will be written. -A Same as -a, unless -d is used. Then any unreachable objects in a previous pack become loose, unpacked objects, instead of being left in the old pack. Unreachable objects are never intentionally added to a pack, even when repacking. This option prevents unreachable objects from being immediately deleted by way of being left in the old pack and then removed. Instead, the loose unreachable objects will be pruned according to normal expiry rules with the next git gc invocation. See git-gc(1). -d After packing, if the newly created packs make some existing packs redundant, remove the redundant packs. Also run git prune-packed to remove redundant loose object files. --cruft Same as -a, unless -d is used. Then any unreachable objects are packed into a separate cruft pack. Unreachable objects can be pruned using the normal expiry rules with the next git gc invocation (see git-gc(1)). Incompatible with -k. --cruft-expiration=<approxidate> Expire unreachable objects older than <approxidate> immediately instead of waiting for the next git gc invocation. Only useful with --cruft -d. --expire-to=<dir> Write a cruft pack containing pruned objects (if any) to the directory <dir>. This option is useful for keeping a copy of any pruned objects in a separate directory as a backup. Only useful with --cruft -d. -l Pass the --local option to git pack-objects. See git-pack-objects(1). -f Pass the --no-reuse-delta option to git-pack-objects, see git-pack-objects(1). -F Pass the --no-reuse-object option to git-pack-objects, see git-pack-objects(1). -q, --quiet Show no progress over the standard error stream and pass the -q option to git pack-objects. See git-pack-objects(1). -n Do not update the server information with git update-server-info. This option skips updating local catalog files needed to publish this repository (or a direct copy of it) over HTTP or FTP. See git-update-server-info(1). --window=<n>, --depth=<n> These two options affect how the objects contained in the pack are stored using delta compression. The objects are first internally sorted by type, size and optionally names and compared against the other objects within --window to see if using delta compression saves space. 
--depth limits the maximum delta depth; making it too deep affects the performance on the unpacker side, because delta data needs to be applied that many times to get to the necessary object. The default value for --window is 10 and --depth is 50. The maximum depth is 4095. --threads=<n> This option is passed through to git pack-objects. --window-memory=<n> This option provides an additional limit on top of --window; the window size will dynamically scale down so as to not take up more than <n> bytes in memory. This is useful in repositories with a mix of large and small objects to not run out of memory with a large window, but still be able to take advantage of the large window for the smaller objects. The size can be suffixed with "k", "m", or "g". --window-memory=0 makes memory usage unlimited. The default is taken from the pack.windowMemory configuration variable. Note that the actual memory usage will be the limit multiplied by the number of threads used by git-pack-objects(1). --max-pack-size=<n> Maximum size of each output pack file. The size can be suffixed with "k", "m", or "g". The minimum size allowed is limited to 1 MiB. If specified, multiple packfiles may be created, which also prevents the creation of a bitmap index. The default is unlimited, unless the config variable pack.packSizeLimit is set. Note that this option may result in a larger and slower repository; see the discussion in pack.packSizeLimit. -b, --write-bitmap-index Write a reachability bitmap index as part of the repack. This only makes sense when used with -a, -A or -m, as the bitmaps must be able to refer to all reachable objects. This option overrides the setting of repack.writeBitmaps. This option has no effect if multiple packfiles are created, unless writing a MIDX (in which case a multi-pack bitmap is created). --pack-kept-objects Include objects in .keep files when repacking. Note that we still do not delete .keep packs after pack-objects finishes. This means that we may duplicate objects, but this makes the option safe to use when there are concurrent pushes or fetches. This option is generally only useful if you are writing bitmaps with -b or repack.writeBitmaps, as it ensures that the bitmapped packfile has the necessary objects. --keep-pack=<pack-name> Exclude the given pack from repacking. This is the equivalent of having .keep file on the pack. <pack-name> is the pack file name without leading directory (e.g. pack-123.pack). The option could be specified multiple times to keep multiple packs. --unpack-unreachable=<when> When loosening unreachable objects, do not bother loosening any objects older than <when>. This can be used to optimize out the write of any objects that would be immediately pruned by a follow-up git prune. -k, --keep-unreachable When used with -ad, any unreachable objects from existing packs will be appended to the end of the packfile instead of being removed. In addition, any unreachable loose objects will be packed (and their loose counterparts removed). -i, --delta-islands Pass the --delta-islands option to git-pack-objects, see git-pack-objects(1). -g=<factor>, --geometric=<factor> Arrange resulting pack structure so that each successive pack contains at least <factor> times the number of objects as the next-largest pack. git repack ensures this by determining a "cut" of packfiles that need to be repacked into one in order to ensure a geometric progression. 
It picks the smallest set of packfiles so that as many of the larger packfiles (by count of objects contained in that pack) as possible may be left intact. Unlike other repack modes, the set of objects to pack is determined uniquely by the set of packs being "rolled-up"; in other words, the packs determined to need to be combined in order to restore a geometric progression. When --unpacked is specified, loose objects are implicitly included in this "roll-up", without respect to their reachability. This is subject to change in the future. This option (implying a drastically different repack mode) is not guaranteed to work with all other combinations of options to git repack. When writing a multi-pack bitmap, git repack selects the largest resulting pack as the preferred pack for object selection by the MIDX (see git-multi-pack-index(1)).

-m, --write-midx
Write a multi-pack index (see git-multi-pack-index(1)) containing the non-redundant packs.
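For example, a routine maintenance invocation following the geometric strategy described above could roll up the smaller packs so that each remaining pack holds at least twice as many objects as the next-smallest one, drop the now-redundant packs, and write a multi-pack index:

    git repack -d --geometric=2 --write-midx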
# git repack > Pack unpacked objects in a Git repository. More information: https://git- > scm.com/docs/git-repack. * Pack unpacked objects in the current directory: `git repack` * Also remove redundant objects after packing: `git repack -d`
rev
The rev utility copies the specified files to standard output, reversing the order of characters in every line. If no files are specified, standard input is read.

This utility is a line-oriented tool that allocates an in-memory buffer for a whole wide-character line. If the input file is huge and contains no line breaks, allocating memory for it may fail.

-h, --help
Display help text and exit.

-V, --version
Print version and exit.

-0, --zero
Zero termination. Use the byte '\0' as line separator.
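For instance, the --zero option keeps NUL-terminated records intact (the input strings here are arbitrary); each record is reversed independently:

    printf 'abc\0def\0' | rev --zero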
# rev > Reverse a line of text. More information: https://manned.org/rev. * Reverse the text string "hello": `echo "hello" | rev` * Reverse an entire file and print to `stdout`: `rev {{path/to/file}}`
logname
The logname utility shall write the user's login name to standard output. The login name shall be the string that would be returned by the getlogin() function defined in the System Interfaces volume of POSIX.1‐2017. Under the conditions where the getlogin() function would fail, the logname utility shall write a diagnostic message to standard error and exit with a non-zero exit status. No options are supported.
# logname > Shows the user's login name. More information: > https://www.gnu.org/software/coreutils/logname. * Display the currently logged in user's name: `logname`
true
Exit with a status code indicating success. --help display this help and exit --version output version information and exit NOTE: your shell may have its own version of true, which usually supersedes the version described here. Please refer to your shell's documentation for details about the options it supports.
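For example, appending || true to a command forces a successful exit status even when the command fails, which keeps scripts running under "set -e" (the command name is a placeholder):

    some_flaky_command || true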
# true > Returns a successful exit status code of 0. Use this with the || operator to > make a command always exit with 0. More information: > https://www.gnu.org/software/coreutils/true. * Return a successful exit code: `true`
sed
The sed utility is a stream editor that shall read one or more text files, make editing changes according to a script of editing commands, and write the results to standard output. The script shall be obtained from either the script operand string or a combination of the option-arguments from the -e script and -f script_file options. The sed utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that the order of presentation of the -e and -f options is significant. The following options shall be supported: -e script Add the editing commands specified by the script option-argument to the end of the script of editing commands. -f script_file Add the editing commands in the file script_file to the end of the script of editing commands. -n Suppress the default output (in which each line, after it is examined for editing, is written to standard output). Only lines explicitly selected for output are written. If any -e or -f options are specified, the script of editing commands shall initially be empty. The commands specified by each -e or -f option shall be added to the script in the order specified. When each addition is made, if the previous addition (if any) was from a -e option, a <newline> shall be inserted before the new addition. The resulting script shall have the same properties as the script operand, described in the OPERANDS section.
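For example, the -n and -e options described above can be combined so that only explicitly selected lines are written; this prints lines 5 through 10 of file (a placeholder name) to standard output:

    sed -n -e '5,10p' file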
# sed

> Edit text in a scriptable manner. See also: `awk`, `ed`. More information:
> https://keith.github.io/xcode-man-pages/sed.1.html.

* Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in all input lines and print the result to `stdout`:

`{{command}} | sed 's/apple/mango/g'`

* Execute a specific script [f]ile and print the result to `stdout`:

`{{command}} | sed -f {{path/to/script_file.sed}}`

* Replace all `apple` (extended regex) occurrences with `APPLE` in all input lines and print the result to `stdout`:

`{{command}} | sed -E 's/apple/APPLE/g'`

* Print just the first line to `stdout`:

`{{command}} | sed -n '1p'`

* Replace all `apple` (basic regex) occurrences with `mango` (basic regex) in a `file` and save a backup of the original to `file.bak`:

`sed -i .bak 's/apple/mango/g' {{path/to/file}}`
lsattr
lsattr lists the file attributes on a second extended file system. See chattr(1) for a description of the attributes and what they mean. -R Recursively list attributes of directories and their contents. -V Display the program version. -a List all files in directories, including files that start with `.'. -d List directories like other files, rather than listing their contents. -l Print the options using long names instead of single character abbreviations. -p List the file's project number. -v List the file's version/generation number.
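For example (an illustrative invocation; the path is hypothetical), the following recursively lists the attributes of everything under /var/log, including hidden entries, printing option names in long form:

lsattr -R -a -l /var/log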
# lsattr > List file attributes on a Linux filesystem. More information: > https://manned.org/lsattr. * Display the attributes of the files in the current directory: `lsattr` * List the attributes of files in a particular path: `lsattr {{path}}` * List file attributes recursively in the current directory and its subdirectories: `lsattr -R` * Show attributes of all the files in the current directory, including hidden ones: `lsattr -a` * Display attributes of directories in the current directory: `lsattr -d`
delta
The delta utility shall be used to permanently introduce into the named SCCS files changes that were made to the files retrieved by get (called the g-files, or generated files). The delta utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that the -y option has an optional option-argument. This optional option-argument shall not be presented as a separate argument. The following options shall be supported: -r SID Uniquely identify which delta is to be made to the SCCS file. The use of this option shall be necessary only if two or more outstanding get commands for editing (get -e) on the same SCCS file were done by the same person (login name). The SID value specified with the -r option can be either the SID specified on the get command line or the SID to be made as reported by the get utility; see get(1p). -s Suppress the report to standard output of the activity associated with each file. See the STDOUT section. -n Specify retention of the edited g-file (normally removed at completion of delta processing). -g list Specify a list (see get(1p) for the definition of list) of deltas that shall be ignored when the file is accessed at the change level (SID) created by this delta. -m mrlist Specify a modification request (MR) number that the application shall supply as the reason for creating the new delta. This shall be used if the SCCS file has the v flag set; see admin(1p). If -m is not used and '-' is not specified as a file argument, and the standard input is a terminal, the prompt described in the STDOUT section shall be written to standard output before the standard input is read; if the standard input is not a terminal, no prompt shall be issued. MRs in a list shall be separated by <blank> characters or escaped <newline> characters. An unescaped <newline> shall terminate the MR list. The escape character is <backslash>. If the v flag has a value, it shall be taken to be the name of a program which validates the correctness of the MR numbers. If a non-zero exit status is returned from the MR number validation program, the delta utility shall terminate. (It is assumed that the MR numbers were not all valid.) -y[comment] Describe the reason for making the delta. The comment shall be an arbitrary group of lines that would meet the definition of a text file. Implementations shall support comments from zero to 512 bytes and may support longer values. A null string (specified as either -y, -y"", or in response to a prompt for a comment) shall be considered a valid comment. If -y is not specified and '-' is not specified as a file argument, and the standard input is a terminal, the prompt described in the STDOUT section shall be written to standard output before the standard input is read; if the standard input is not a terminal, no prompt shall be issued. An unescaped <newline> shall terminate the comment text. The escape character is <backslash>. The -y option shall be required if the file operand is specified as '-'. -p Write (to standard output) the SCCS file differences before and after the delta is applied in diff format; see diff(1p).
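A minimal sketch of the usual SCCS cycle (the file name and comment are invented): retrieve a version for editing with get -e, modify the resulting g-file, then record the change:

get -e s.program.c
(edit program.c as needed)
delta -y"fix off-by-one in loop bound" s.program.c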
# delta > A viewer for Git and diff output. More information: > https://github.com/dandavison/delta. * Compare files or directories: `delta {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}` * Compare files or directories, showing the line numbers: `delta --line-numbers {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}` * Compare files or directories, showing the differences side by side: `delta --side-by-side {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}` * Compare files or directories, ignoring any Git configuration settings: `delta --no-gitconfig {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}` * Compare, rendering commit hashes, file names, and line numbers as hyperlinks, according to the hyperlink spec for terminal emulators: `delta --hyperlinks {{path/to/old_file_or_directory}} {{path/to/new_file_or_directory}}` * Display the current settings: `delta --show-config` * Display supported languages and associated file extensions: `delta --list-languages`
git-submodule
Inspects, updates and manages submodules. For more information about submodules, see gitsubmodules(7). -q, --quiet Only print error messages. --progress This option is only valid for add and update commands. Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --all This option is only valid for the deinit command. Unregister all submodules in the working tree. -b <branch>, --branch <branch> Branch of repository to add as submodule. The name of the branch is recorded as submodule.<name>.branch in .gitmodules for update --remote. A special value of . is used to indicate that the name of the branch in the submodule should be the same name as the current branch in the current repository. If the option is not specified, it defaults to the remote HEAD. -f, --force This option is only valid for add, deinit and update commands. When running add, allow adding an otherwise ignored submodule path. When running deinit the submodule working trees will be removed even if they contain local changes. When running update (only effective with the checkout procedure), throw away local changes in submodules when switching to a different commit; and always run a checkout operation in the submodule, even if the commit listed in the index of the containing repository matches the commit checked out in the submodule. --cached This option is only valid for status and summary commands. These commands typically use the commit found in the submodule HEAD, but with this option, the commit stored in the index is used instead. --files This option is only valid for the summary command. This command compares the commit in the index with that in the submodule HEAD when this option is used. -n, --summary-limit This option is only valid for the summary command. Limit the summary size (number of commits shown in total). Giving 0 will disable the summary; a negative number means unlimited (the default). This limit only applies to modified submodules. The size is always limited to 1 for added/deleted/typechanged submodules. --remote This option is only valid for the update command. Instead of using the superproject’s recorded SHA-1 to update the submodule, use the status of the submodule’s remote-tracking branch. The remote used is branch’s remote (branch.<name>.remote), defaulting to origin. The remote branch used defaults to the remote HEAD, but the branch name may be overridden by setting the submodule.<name>.branch option in either .gitmodules or .git/config (with .git/config taking precedence). This works for any of the supported update procedures (--checkout, --rebase, etc.). The only change is the source of the target SHA-1. For example, submodule update --remote --merge will merge upstream submodule changes into the submodules, while submodule update --merge will merge superproject gitlink changes into the submodules. In order to ensure a current tracking branch state, update --remote fetches the submodule’s remote repository before calculating the SHA-1. If you don’t want to fetch, you should use submodule update --remote --no-fetch. Use this option to integrate changes from the upstream subproject with your submodule’s current HEAD. 
Alternatively, you can run git pull from the submodule, which is equivalent except for the remote branch name: update --remote uses the default upstream repository and submodule.<name>.branch, while git pull uses the submodule’s branch.<name>.merge. Prefer submodule.<name>.branch if you want to distribute the default upstream branch with the superproject and branch.<name>.merge if you want a more native feel while working in the submodule itself. -N, --no-fetch This option is only valid for the update command. Don’t fetch new objects from the remote site. --checkout This option is only valid for the update command. Checkout the commit recorded in the superproject on a detached HEAD in the submodule. This is the default behavior, the main use of this option is to override submodule.$name.update when set to a value other than checkout. If the key submodule.$name.update is either not explicitly set or set to checkout, this option is implicit. --merge This option is only valid for the update command. Merge the commit recorded in the superproject into the current branch of the submodule. If this option is given, the submodule’s HEAD will not be detached. If a merge failure prevents this process, you will have to resolve the resulting conflicts within the submodule with the usual conflict resolution tools. If the key submodule.$name.update is set to merge, this option is implicit. --rebase This option is only valid for the update command. Rebase the current branch onto the commit recorded in the superproject. If this option is given, the submodule’s HEAD will not be detached. If a merge failure prevents this process, you will have to resolve these failures with git-rebase(1). If the key submodule.$name.update is set to rebase, this option is implicit. --init This option is only valid for the update command. Initialize all submodules for which "git submodule init" has not been called so far before updating. --name This option is only valid for the add command. It sets the submodule’s name to the given string instead of defaulting to its path. The name must be valid as a directory name and may not end with a /. --reference <repository> This option is only valid for add and update commands. These commands sometimes need to clone a remote repository. In this case, this option will be passed to the git-clone(1) command. NOTE: Do not use this option unless you have read the note for git-clone(1)'s --reference, --shared, and --dissociate options carefully. --dissociate This option is only valid for add and update commands. These commands sometimes need to clone a remote repository. In this case, this option will be passed to the git-clone(1) command. NOTE: see the NOTE for the --reference option. --recursive This option is only valid for foreach, update, status and sync commands. Traverse submodules recursively. The operation is performed not only in the submodules of the current repo, but also in any nested submodules inside those submodules (and so on). --depth This option is valid for add and update commands. Create a shallow clone with a history truncated to the specified number of revisions. See git-clone(1) --[no-]recommend-shallow This option is only valid for the update command. The initial clone of a submodule will use the recommended submodule.<name>.shallow as provided by the .gitmodules file by default. To ignore the suggestions use --no-recommend-shallow. -j <n>, --jobs <n> This option is only valid for the update command. Clone new submodules in parallel with as many jobs. 
Defaults to the submodule.fetchJobs option. --[no-]single-branch This option is only valid for the update command. Clone only one branch during update: HEAD or one specified by --branch. <path>... Paths to submodule(s). When specified this will restrict the command to only operate on the submodules found at the specified paths. (This argument is required with add).
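As a brief sketch (the repository URL and path are placeholders), the following adds a submodule that tracks a branch and later integrates upstream changes as described for update --remote above:

git submodule add -b main https://example.com/lib.git vendor/lib
git submodule update --remote --merge vendor/lib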
# git submodule > Inspects, updates and manages submodules. More information: https://git- > scm.com/docs/git-submodule. * Install a repository's specified submodules: `git submodule update --init --recursive` * Add a Git repository as a submodule: `git submodule add {{repository_url}}` * Add a Git repository as a submodule at the specified directory: `git submodule add {{repository_url}} {{path/to/directory}}` * Update every submodule to its latest commit: `git submodule foreach git pull`
git-send-email
Takes the patches given on the command line and emails them out. Patches can be specified as files, directories (which will send all files in the directory), or directly as a revision list. In the last case, any format accepted by git-format-patch(1) can be passed to git send-email, as well as options understood by git-format-patch(1). The header of the email is configurable via command-line options. If not specified on the command line, the user will be prompted with a ReadLine enabled interface to provide the necessary information. There are two formats accepted for patch files: 1. mbox format files This is what git-format-patch(1) generates. Most headers and MIME formatting are ignored. 2. The original format used by Greg Kroah-Hartman’s send_lots_of_email.pl script This format expects the first line of the file to contain the "Cc:" value and the "Subject:" of the message as the second line. Composing --annotate Review and edit each patch you’re about to send. Default is the value of sendemail.annotate. See the CONFIGURATION section for sendemail.multiEdit. --bcc=<address>,... Specify a "Bcc:" value for each email. Default is the value of sendemail.bcc. This option may be specified multiple times. --cc=<address>,... Specify a starting "Cc:" value for each email. Default is the value of sendemail.cc. This option may be specified multiple times. --compose Invoke a text editor (see GIT_EDITOR in git-var(1)) to edit an introductory message for the patch series. When --compose is used, git send-email will use the From, Subject, and In-Reply-To headers specified in the message. If the body of the message (what you type after the headers and a blank line) only contains blank (or Git: prefixed) lines, the summary won’t be sent, but From, Subject, and In-Reply-To headers will be used unless they are removed. Missing From or In-Reply-To headers will be prompted for. See the CONFIGURATION section for sendemail.multiEdit. --from=<address> Specify the sender of the emails. If not specified on the command line, the value of the sendemail.from configuration option is used. If neither the command-line option nor sendemail.from are set, then the user will be prompted for the value. The default for the prompt will be the value of GIT_AUTHOR_IDENT, or GIT_COMMITTER_IDENT if that is not set, as returned by "git var -l". --reply-to=<address> Specify the address where replies from recipients should go to. Use this if replies to messages should go to another address than what is specified with the --from parameter. --in-reply-to=<identifier> Make the first mail (or all the mails with --no-thread) appear as a reply to the given Message-ID, which avoids breaking threads to provide a new patch series. The second and subsequent emails will be sent as replies according to the --[no-]chain-reply-to setting. So for example when --thread and --no-chain-reply-to are specified, the second and subsequent patches will be replies to the first one like in the illustration below where [PATCH v2 0/3] is in reply to [PATCH 0/2]: [PATCH 0/2] Here is what I did... [PATCH 1/2] Clean up and tests [PATCH 2/2] Implementation [PATCH v2 0/3] Here is a reroll [PATCH v2 1/3] Clean up [PATCH v2 2/3] New tests [PATCH v2 3/3] Implementation Only necessary if --compose is also set. If --compose is not set, this will be prompted for. --subject=<string> Specify the initial subject of the email thread. Only necessary if --compose is also set. If --compose is not set, this will be prompted for. --to=<address>,... 
Specify the primary recipient of the emails generated. Generally, this will be the upstream maintainer of the project involved. Default is the value of the sendemail.to configuration value; if that is unspecified, and --to-cmd is not specified, this will be prompted for. This option may be specified multiple times. --8bit-encoding=<encoding> When encountering a non-ASCII message or subject that does not declare its encoding, add headers/quoting to indicate it is encoded in <encoding>. Default is the value of the sendemail.assume8bitEncoding; if that is unspecified, this will be prompted for if any non-ASCII files are encountered. Note that no attempts whatsoever are made to validate the encoding. --compose-encoding=<encoding> Specify encoding of compose message. Default is the value of the sendemail.composeencoding; if that is unspecified, UTF-8 is assumed. --transfer-encoding=(7bit|8bit|quoted-printable|base64|auto) Specify the transfer encoding to be used to send the message over SMTP. 7bit will fail upon encountering a non-ASCII message. quoted-printable can be useful when the repository contains files that contain carriage returns, but makes the raw patch email file (as saved from a MUA) much harder to inspect manually. base64 is even more foolproof, but also even more opaque. auto will use 8bit when possible, and quoted-printable otherwise. Default is the value of the sendemail.transferEncoding configuration value; if that is unspecified, default to auto. --xmailer, --no-xmailer Add (or prevent adding) the "X-Mailer:" header. By default, the header is added, but it can be turned off by setting the sendemail.xmailer configuration variable to false. Sending --envelope-sender=<address> Specify the envelope sender used to send the emails. This is useful if your default address is not the address that is subscribed to a list. In order to use the From address, set the value to "auto". If you use the sendmail binary, you must have suitable privileges for the -f parameter. Default is the value of the sendemail.envelopeSender configuration variable; if that is unspecified, choosing the envelope sender is left to your MTA. --sendmail-cmd=<command> Specify a command to run to send the email. The command should be sendmail-like; specifically, it must support the -i option. The command will be executed in the shell if necessary. Default is the value of sendemail.sendmailcmd. If unspecified, and if --smtp-server is also unspecified, git-send-email will search for sendmail in /usr/sbin, /usr/lib and $PATH. --smtp-encryption=<encryption> Specify in what way encrypting begins for the SMTP connection. Valid values are ssl and tls. Any other value reverts to plain (unencrypted) SMTP, which defaults to port 25. Despite the names, both values will use the same newer version of TLS, but for historic reasons have these names. ssl refers to "implicit" encryption (sometimes called SMTPS), that uses port 465 by default. tls refers to "explicit" encryption (often known as STARTTLS), that uses port 25 by default. Other ports might be used by the SMTP server, which are not the default. A commonly found alternative port for tls and unencrypted SMTP is 587. You need to check your provider’s documentation or your server configuration to make sure for your own case. Default is the value of sendemail.smtpEncryption. --smtp-domain=<FQDN> Specifies the Fully Qualified Domain Name (FQDN) used in the HELO/EHLO command to the SMTP server. Some servers require the FQDN to match your IP address. 
If not set, git send-email attempts to determine your FQDN automatically. Default is the value of sendemail.smtpDomain. --smtp-auth=<mechanisms> Whitespace-separated list of allowed SMTP-AUTH mechanisms. This setting forces using only the listed mechanisms. Example: $ git send-email --smtp-auth="PLAIN LOGIN GSSAPI" ... If at least one of the specified mechanisms matches the ones advertised by the SMTP server and if it is supported by the utilized SASL library, the mechanism is used for authentication. If neither sendemail.smtpAuth nor --smtp-auth is specified, all mechanisms supported by the SASL library can be used. The special value none may be specified to completely disable authentication independently of --smtp-user. --smtp-pass[=<password>] Password for SMTP-AUTH. The argument is optional: If no argument is specified, then the empty string is used as the password. Default is the value of sendemail.smtpPass; however, --smtp-pass always overrides this value. Furthermore, passwords need not be specified in configuration files or on the command line. If a username has been specified (with --smtp-user or a sendemail.smtpUser), but no password has been specified (with --smtp-pass or sendemail.smtpPass), then a password is obtained using git-credential. --no-smtp-auth Disable SMTP authentication. Shorthand for --smtp-auth=none --smtp-server=<host> If set, specifies the outgoing SMTP server to use (e.g. smtp.example.com or a raw IP address). If unspecified, and if --sendmail-cmd is also unspecified, the default is to search for sendmail in /usr/sbin, /usr/lib and $PATH if such a program is available, falling back to localhost otherwise. For backward compatibility, this option can also specify a full pathname of a sendmail-like program instead; the program must support the -i option. This method does not support passing arguments or using plain command names. For those use cases, consider using --sendmail-cmd instead. --smtp-server-port=<port> Specifies a port different from the default port (SMTP servers typically listen to smtp port 25, but may also listen to submission port 587, or the common SSL smtp port 465); symbolic port names (e.g. "submission" instead of 587) are also accepted. The port can also be set with the sendemail.smtpServerPort configuration variable. --smtp-server-option=<option> If set, specifies the outgoing SMTP server option to use. Default value can be specified by the sendemail.smtpServerOption configuration option. The --smtp-server-option option must be repeated for each option you want to pass to the server. Likewise, different lines in the configuration files must be used for each option. --smtp-ssl Legacy alias for --smtp-encryption ssl. --smtp-ssl-cert-path Path to a store of trusted CA certificates for SMTP SSL/TLS certificate validation (either a directory that has been processed by c_rehash, or a single file containing one or more PEM format certificates concatenated together: see verify(1) -CAfile and -CApath for more information on these). Set it to an empty string to disable certificate verification. Defaults to the value of the sendemail.smtpsslcertpath configuration variable, if set, or the backing SSL library’s compiled-in default otherwise (which should be the best choice on most platforms). --smtp-user=<user> Username for SMTP-AUTH. Default is the value of sendemail.smtpUser; if a username is not specified (with --smtp-user or sendemail.smtpUser), then authentication is not attempted. --smtp-debug=0|1 Enable (1) or disable (0) debug output. 
If enabled, SMTP commands and replies will be printed. Useful to debug TLS connection and authentication problems. --batch-size=<num> Some email servers (e.g. smtp.163.com) limit the number of emails to be sent per session (connection) and this will lead to a failure when sending many messages. With this option, send-email will disconnect after sending $<num> messages and wait for a few seconds (see --relogin-delay) and reconnect, to work around such a limit. You may want to use some form of credential helper to avoid having to retype your password every time this happens. Defaults to the sendemail.smtpBatchSize configuration variable. --relogin-delay=<int> Wait $<int> seconds before reconnecting to the SMTP server. Used together with the --batch-size option. Defaults to the sendemail.smtpReloginDelay configuration variable. Automating --no-[to|cc|bcc] Clears any list of "To:", "Cc:", "Bcc:" addresses previously set via config. --no-identity Clears the previously read value of sendemail.identity set via config, if any. --to-cmd=<command> Specify a command to execute once per patch file which should generate patch-file-specific "To:" entries. Output of this command must be a single email address per line. Default is the value of sendemail.tocmd configuration value. --cc-cmd=<command> Specify a command to execute once per patch file which should generate patch-file-specific "Cc:" entries. Output of this command must be a single email address per line. Default is the value of sendemail.ccCmd configuration value. --header-cmd=<command> Specify a command that is executed once per outgoing message and outputs RFC 2822 style header lines to be inserted into them. When the sendemail.headerCmd configuration variable is set, its value is always used. When --header-cmd is provided at the command line, its value takes precedence over the sendemail.headerCmd configuration variable. --no-header-cmd Disable any header command in use. --[no-]chain-reply-to If this is set, each email will be sent as a reply to the previous email sent. If disabled with "--no-chain-reply-to", all emails after the first will be sent as replies to the first email sent. When using this, it is recommended that the first file given be an overview of the entire patch series. Disabled by default, but the sendemail.chainReplyTo configuration variable can be used to enable it. --identity=<identity> A configuration identity. When given, causes values in the sendemail.<identity> subsection to take precedence over values in the sendemail section. The default identity is the value of sendemail.identity. --[no-]signed-off-by-cc If this is set, add emails found in the Signed-off-by trailer or Cc: lines to the cc list. Default is the value of sendemail.signedoffbycc configuration value; if that is unspecified, default to --signed-off-by-cc. --[no-]cc-cover If this is set, emails found in Cc: headers in the first patch of the series (typically the cover letter) are added to the cc list for each email set. Default is the value of sendemail.cccover configuration value; if that is unspecified, default to --no-cc-cover. --[no-]to-cover If this is set, emails found in To: headers in the first patch of the series (typically the cover letter) are added to the to list for each email set. Default is the value of sendemail.tocover configuration value; if that is unspecified, default to --no-to-cover. --suppress-cc=<category> Specify an additional category of recipients to suppress the auto-cc of: • author will avoid including the patch author. 
• self will avoid including the sender. • cc will avoid including anyone mentioned in Cc lines in the patch header except for self (use self for that). • bodycc will avoid including anyone mentioned in Cc lines in the patch body (commit message) except for self (use self for that). • sob will avoid including anyone mentioned in the Signed-off-by trailers except for self (use self for that). • misc-by will avoid including anyone mentioned in Acked-by, Reviewed-by, Tested-by and other "-by" lines in the patch body, except Signed-off-by (use sob for that). • cccmd will avoid running the --cc-cmd. • body is equivalent to sob + bodycc + misc-by. • all will suppress all auto cc values. Default is the value of sendemail.suppresscc configuration value; if that is unspecified, default to self if --suppress-from is specified, as well as body if --no-signed-off-cc is specified. --[no-]suppress-from If this is set, do not add the From: address to the cc: list. Default is the value of sendemail.suppressFrom configuration value; if that is unspecified, default to --no-suppress-from. --[no-]thread If this is set, the In-Reply-To and References headers will be added to each email sent. Whether each mail refers to the previous email (deep threading per git format-patch wording) or to the first email (shallow threading) is governed by "--[no-]chain-reply-to". If disabled with "--no-thread", those headers will not be added (unless specified with --in-reply-to). Default is the value of the sendemail.thread configuration value; if that is unspecified, default to --thread. It is up to the user to ensure that no In-Reply-To header already exists when git send-email is asked to add it (especially note that git format-patch can be configured to do the threading itself). Failure to do so may not produce the expected result in the recipient’s MUA. Administering --confirm=<mode> Confirm just before sending: • always will always confirm before sending • never will never confirm before sending • cc will confirm before sending when send-email has automatically added addresses from the patch to the Cc list • compose will confirm before sending the first message when using --compose. • auto is equivalent to cc + compose Default is the value of sendemail.confirm configuration value; if that is unspecified, default to auto unless any of the suppress options have been specified, in which case default to compose. --dry-run Do everything except actually send the emails. --[no-]format-patch When an argument may be understood either as a reference or as a file name, choose to understand it as a format-patch argument (--format-patch) or as a file name (--no-format-patch). By default, when such a conflict occurs, git send-email will fail. --quiet Make git-send-email less verbose. One line per email should be all that is output. --[no-]validate Perform sanity checks on patches. Currently, validation means the following: • Invoke the sendemail-validate hook if present (see githooks(5)). • Warn of patches that contain lines longer than 998 characters unless a suitable transfer encoding (auto, base64, or quoted-printable) is used; this is due to SMTP limits as described by http://www.ietf.org/rfc/rfc5322.txt . Default is the value of sendemail.validate; if this is not set, default to --validate. --force Send emails even if safety checks would prevent it. Information --dump-aliases Instead of the normal operation, dump the shorthand alias names from the configured alias file(s), one per line in alphabetical order. 
Note, this only includes the alias name and not its expanded email addresses. See sendemail.aliasesfile for more information about aliases.
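As a hedged example (the addresses and server are placeholders, not defaults), a typical invocation prepares patches with git-format-patch(1) and sends them with explicit SMTP settings:

git format-patch -3 -o outgoing/
git send-email --to=list@example.org --smtp-server=smtp.example.org --smtp-server-port=587 --smtp-encryption=tls outgoing/*.patch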
# git send-email > Send a collection of patches as emails. Patches can be specified as files, > directories, or a revision list. More information: https://git- > scm.com/docs/git-send-email. * Send the last commit in the current branch: `git send-email -1` * Send a given commit: `git send-email -1 {{commit}}` * Send multiple (e.g. 10) commits in the current branch: `git send-email {{-10}}` * Send an introductory email message for the patch series: `git send-email -{{number_of_commits}} --compose` * Review and edit the email message for each patch you're about to send: `git send-email -{{number_of_commits}} --annotate`
git-checkout
Updates files in the working tree to match the version in the index or the specified tree. If no pathspec was given, git checkout will also update HEAD to set the specified branch as the current branch. git checkout [<branch>] To prepare for working on <branch>, switch to it by updating the index and the files in the working tree, and by pointing HEAD at the branch. Local modifications to the files in the working tree are kept, so that they can be committed to the <branch>. If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name and --no-guess is not specified, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch> You could omit <branch>, in which case the command degenerates to "check out the current branch", which is a glorified no-op with rather expensive side-effects to show only the tracking information, if it exists, for the current branch. git checkout -b|-B <new-branch> [<start-point>] Specifying -b causes a new branch to be created as if git-branch(1) were called and then checked out. In this case you can use the --track or --no-track options, which will be passed to git branch. As a convenience, --track without -b implies branch creation; see the description of --track below. If -B is given, <new-branch> is created if it doesn’t exist; otherwise, it is reset. This is the transactional equivalent of $ git branch -f <branch> [<start-point>] $ git checkout <branch> that is to say, the branch is not reset/created unless "git checkout" is successful. git checkout --detach [<branch>], git checkout [--detach] <commit> Prepare to work on top of <commit>, by detaching HEAD at it (see "DETACHED HEAD" section), and updating the index and the files in the working tree. Local modifications to the files in the working tree are kept, so that the resulting working tree will be the state recorded in the commit plus the local modifications. When the <commit> argument is a branch name, the --detach option can be used to detach HEAD at the tip of the branch (git checkout <branch> would check out that branch without detaching HEAD). Omitting <branch> detaches HEAD at the tip of the current branch. git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>..., git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] --pathspec-from-file=<file> [--pathspec-file-nul] Overwrite the contents of the files that match the pathspec. When the <tree-ish> (most often a commit) is not given, overwrite working tree with the contents in the index. When the <tree-ish> is given, overwrite both the index and the working tree with the contents at the <tree-ish>. The index may contain unmerged entries because of a previous failed merge. By default, if you try to check out such an entry from the index, the checkout operation will fail and nothing will be checked out. Using -f will ignore these unmerged entries. The contents from a specific side of the merge can be checked out of the index by using --ours or --theirs. With -m, changes made to the working tree file can be discarded to re-create the original conflicted merge result. git checkout (-p|--patch) [<tree-ish>] [--] [<pathspec>...] This is similar to the previous mode, but lets you use the interactive interface to show the "diff" output and choose which hunks to use in the result. See below for the description of --patch option. -q, --quiet Quiet, suppress feedback messages. 
--progress, --no-progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag enables progress reporting even if not attached to a terminal, regardless of --quiet. -f, --force When switching branches, proceed even if the index or the working tree differs from HEAD, and even if there are untracked files in the way. This is used to throw away local changes and any untracked files or directories that are in the way. When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored. --ours, --theirs When checking out paths from the index, check out stage #2 (ours) or #3 (theirs) for unmerged paths. Note that during git rebase and git pull --rebase, ours and theirs may appear swapped; --ours gives the version from the branch the changes are rebased onto, while --theirs gives the version from the branch that holds your work that is being rebased. This is because rebase is used in a workflow that treats the history at the remote as the shared canonical one, and treats the work done on the branch you are rebasing as the third-party work to be integrated, and you are temporarily assuming the role of the keeper of the canonical history during the rebase. As the keeper of the canonical history, you need to view the history from the remote as ours (i.e. "our shared canonical history"), while what you did on your side branch as theirs (i.e. "one contributor’s work on top of it"). -b <new-branch> Create a new branch named <new-branch>, start it at <start-point>, and check the resulting branch out; see git-branch(1) for details. -B <new-branch> Create the branch <new-branch> and start it at <start-point>; if it already exists, reset it to <start-point>. Then check the resulting branch out. This is equivalent to running "git branch" with "-f" followed by "git checkout" of that branch; see git-branch(1) for details. -t, --track[=(direct|inherit)] When creating a new branch, set up "upstream" configuration. See "--track" in git-branch(1) for details. If no -b option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "*". This would tell us to use hack as the local branch when branching off of origin/hack (or remotes/origin/hack, or even refs/remotes/origin/hack). If the given name has no slash, or the above guessing results in an empty name, the guessing is aborted. You can explicitly give a name with -b in such a case. --no-track Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is true. --guess, --no-guess If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch> If the branch exists in multiple remotes and one of them is named by the checkout.defaultRemote configuration variable, we’ll use that one for the purposes of disambiguation, even if the <branch> isn’t unique across all remotes. Set it to e.g. checkout.defaultRemote=origin to always checkout remote branches from there if <branch> is ambiguous but exists on the origin remote. See also checkout.defaultRemote in git-config(1). --guess is the default behavior. Use --no-guess to disable it. 
The default behavior can be set via the checkout.guess configuration variable. -l Create the new branch’s reflog; see git-branch(1) for details. -d, --detach Rather than checking out a branch to work on it, check out a commit for inspection and discardable experiments. This is the default behavior of git checkout <commit> when <commit> is not a branch name. See the "DETACHED HEAD" section below for details. --orphan <new-branch> Create a new orphan branch, named <new-branch>, started from <start-point> and switch to it. The first commit made on this new branch will have no parents and it will be the root of a new history totally disconnected from all the other branches and commits. The index and the working tree are adjusted as if you had previously run git checkout <start-point>. This allows you to start a new history that records a set of paths similar to <start-point> by easily running git commit -a to make the root commit. This can be useful when you want to publish the tree from a commit without exposing its full history. You might want to do this to publish an open source branch of a project whose current tree is "clean", but whose full history contains proprietary or otherwise encumbered bits of code. If you want to start a disconnected history that records a set of paths that is totally different from the one of <start-point>, then you should clear the index and the working tree right after creating the orphan branch by running git rm -rf . from the top level of the working tree. Afterwards you will be ready to prepare your new files, repopulating the working tree, by copying them from elsewhere, extracting a tarball, etc. --ignore-skip-worktree-bits In sparse checkout mode, git checkout -- <paths> would update only entries matched by <paths> and sparse patterns in $GIT_DIR/info/sparse-checkout. This option ignores the sparse patterns and adds back any files in <paths>. -m, --merge When switching branches, if you have local modifications to one or more files that are different between the current branch and the branch to which you are switching, the command refuses to switch branches in order to preserve your modifications in context. However, with this option, a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch. When a merge conflict happens, the index entries for conflicting paths are left unmerged, and you need to resolve the conflicts and mark the resolved paths with git add (or git rm if the merge should result in deletion of the path). When checking out paths from the index, this option lets you recreate the conflicted merge in the specified paths. When switching branches with --merge, staged changes may be lost. --conflict=<style> The same as --merge option above, but changes the way the conflicting hunks are presented, overriding the merge.conflictStyle configuration variable. Possible values are "merge" (default), "diff3", and "zdiff3". -p, --patch Interactively select hunks in the difference between the <tree-ish> (or the index, if unspecified) and the working tree. The chosen hunks are then applied in reverse to the working tree (and if a <tree-ish> was specified, the index). This means that you can use git checkout -p to selectively discard edits from your current working tree. See the “Interactive Mode” section of git-add(1) to learn how to operate the --patch mode. 
Note that this option uses the no overlay mode by default (see also --overlay), and currently doesn’t support overlay mode. --ignore-other-worktrees git checkout refuses when the wanted ref is already checked out by another worktree. This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree. --overwrite-ignore, --no-overwrite-ignore Silently overwrite ignored files when switching branches. This is the default behavior. Use --no-overwrite-ignore to abort the operation when the new branch contains ignored files. --recurse-submodules, --no-recurse-submodules Using --recurse-submodules will update the content of all active submodules according to the commit recorded in the superproject. If local modifications in a submodule would be overwritten the checkout will fail unless -f is used. If nothing (or --no-recurse-submodules) is used, submodules working trees will not be updated. Just like git-submodule(1), this will detach HEAD of the submodule. --overlay, --no-overlay In the default overlay mode, git checkout never removes files from the index or the working tree. When specifying --no-overlay, files that appear in the index and working tree, but not in <tree-ish> are removed, to make them match <tree-ish> exactly. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes). <branch> Branch to checkout; if it refers to a branch (i.e., a name that, when prepended with "refs/heads/", is a valid ref), then that branch is checked out. Otherwise, if it refers to a valid commit, your HEAD becomes "detached" and you are no longer on any branch (see below for details). You can use the @{-N} syntax to refer to the N-th last branch/commit checked out using "git checkout" operation. You may also specify - which is synonymous to @{-1}. As a special case, you may use A...B as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. <new-branch> Name for the new branch. <start-point> The name of a commit at which to start the new branch; see git-branch(1) for details. Defaults to HEAD. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. <tree-ish> Tree to checkout from (when paths are given). If not specified, the index will be used. As a special case, you may use "A...B" as a shortcut for the merge base of A and B if there is exactly one merge base. You can leave out at most one of A and B, in which case it defaults to HEAD. -- Do not interpret any more arguments as options. <pathspec>... Limits the paths affected by the operation. For more details, see the pathspec entry in gitglossary(7).
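An illustrative sequence (the tag, branch, and file names are invented): inspect an old release on a detached HEAD, then restore a single file from another branch without switching to it:

git checkout --detach v1.2.0
git checkout main -- src/config.h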
# git checkout > Check out a branch or paths to the working tree. More information: > https://git-scm.com/docs/git-checkout. * Create and switch to a new branch: `git checkout -b {{branch_name}}` * Create and switch to a new branch based on a specific reference (branch, remote/branch, tag are examples of valid references): `git checkout -b {{branch_name}} {{reference}}` * Switch to an existing local branch: `git checkout {{branch_name}}` * Switch to the previously checked out branch: `git checkout -` * Switch to an existing remote branch: `git checkout --track {{remote_name}}/{{branch_name}}` * Discard all unstaged changes in the current directory (see `git reset` for more undo-like commands): `git checkout .` * Discard unstaged changes to a given file: `git checkout {{path/to/file}}` * Replace a file in the current directory with the version of it committed in a given branch: `git checkout {{branch_name}} -- {{path/to/file}}`
git-show-ref
Displays references available in a local repository along with the associated commit IDs. Results can be filtered using a pattern and tags can be dereferenced into object IDs. Additionally, it can be used to test whether a particular ref exists. By default, shows the tags, heads, and remote refs. The --exclude-existing form is a filter that does the inverse. It reads refs from stdin, one ref per line, and shows those that don’t exist in the local repository. Use of this utility is encouraged in favor of directly accessing files under the .git directory. --head Show the HEAD reference, even if it would normally be filtered out. --heads, --tags Limit to "refs/heads" and "refs/tags", respectively. These options are not mutually exclusive; when given both, references stored in "refs/heads" and "refs/tags" are displayed. -d, --dereference Dereference tags into object IDs as well. They will be shown with ^{} appended. -s, --hash[=<n>] Only show the OID, not the reference name. When combined with --dereference, the dereferenced tag will still be shown after the OID. --verify Enable stricter reference checking by requiring an exact ref path. Aside from returning an error code of 1, it will also print an error message if --quiet was not specified. --abbrev[=<n>] Abbreviate the object name. When using --hash, you do not have to say --hash --abbrev; --hash=n would do. -q, --quiet Do not print any results to stdout. When combined with --verify, this can be used to silently check if a reference exists. --exclude-existing[=<pattern>] Make git show-ref act as a filter that reads refs from stdin of the form ^(?:<anything>\s)?<refname>(?:\^{})?$ and performs the following actions on each: (1) strip ^{} at the end of the line if any; (2) ignore if pattern is provided and does not head-match refname; (3) warn if refname is not a well-formed refname and skip; (4) ignore if refname is a ref that exists in the local repository; (5) otherwise output the line. <pattern>... Show references matching one or more patterns. Patterns are matched from the end of the full name, and only complete parts are matched, e.g. master matches refs/heads/master, refs/remotes/origin/master, refs/tags/jedi/master but not refs/heads/mymaster or refs/remotes/master/jedi.
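For example (the ref names are invented), --verify with --quiet gives a clean existence test for scripting, and --exclude-existing filters a candidate list read from stdin:

git show-ref --verify --quiet refs/heads/topic && echo "branch exists"
echo refs/heads/retired | git show-ref --exclude-existing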
# git show-ref > Git command for listing references. More information: https://git- > scm.com/docs/git-show-ref. * Show all refs in the repository: `git show-ref` * Show only heads references: `git show-ref --heads` * Show only tags references: `git show-ref --tags` * Verify that a given reference exists: `git show-ref --verify {{path/to/ref}}`
tbl
The GNU implementation of tbl is part of the groff(1) document formatting system. tbl is a troff(1) preprocessor that translates descriptions of tables embedded in roff(7) input files into the language understood by troff. It copies the contents of each file to the standard output stream, except that lines between .TS and .TE are interpreted as table descriptions. While GNU tbl's input syntax is highly compatible with AT&T tbl, the output GNU tbl produces cannot be processed by AT&T troff; GNU troff (or a troff implementing any GNU extensions employed) must be used. Normally, tbl is not executed directly by the user, but invoked by specifying the -t option to groff(1). If no file operands are given on the command line, or if file is “-”, tbl reads the standard input stream. Overview tbl expects to find table descriptions between input lines that begin with .TS (table start) and .TE (table end). Each such table region encloses one or more table descriptions. Within a table region, table descriptions beyond the first must each be preceded by an input line beginning with .T&. This mechanism does not start a new table region; all table descriptions are treated as part of their .TS/.TE enclosure, even if they are boxed or have column headings that repeat on subsequent pages (see below). (Experienced roff users should observe that tbl is not a roff language interpreter: the default control character must be used, and no spaces or tabs are permitted between the control character and the macro name. These tbl input tokens remain as-is in the output, where they become ordinary macro calls. Macro packages often define TS, T&, and TE macros to handle issues of table placement on the page. tbl produces groff code to define these macros as empty if their definitions do not exist when the formatter encounters a table region.) Each table region may begin with region options, and must contain one or more table definitions; each table definition contains a format specification followed by one or more input lines (rows) of entries. These entries comprise the table data. Region options The line immediately following the .TS token may specify region options, keywords that influence the interpretation or rendering of the region as a whole or all table entries within it indiscriminately. They must be separated by commas, spaces, or tabs. Those that require a parenthesized argument permit spaces and tabs between the option's name and the opening parenthesis. Options accumulate and cannot be unset within a region once declared; if an option that takes a parameter is repeated, the last occurrence controls. If present, the set of region options must be terminated with a semicolon (;). Any of the allbox, box, doublebox, frame, and doubleframe region options makes a table “boxed” for the purpose of later discussion. allbox Enclose each table entry in a box; implies box. box Enclose the entire table region in a box. As a GNU extension, the alternative option name frame is also recognized. center Center the table region with respect to the current indentation and line length; the default is to left-align it. As a GNU extension, the alternative option name centre is also recognized. decimalpoint(c) Recognize character c as the decimal separator in columns using the N (numeric) classifier (see subsection “Column classifiers” below). This is a GNU extension. delim(xy) Recognize characters x and y as start and end delimiters, respectively, for eqn(1) input, and ignore input between them. x and y need not be distinct. 
doublebox Enclose the entire table region in a double box; implies box. As a GNU extension, the alternative option name doubleframe is also recognized. expand Spread the table horizontally to fill the available space (line length minus indentation) by increasing column separation. Ordinarily, a table is made only as wide as necessary to accommodate the widths of its entries and its column separations (whether specified or default). When expand applies to a table that exceeds the available horizontal space, column separation is reduced as far as necessary (even to zero). tbl produces groff input that issues a diagnostic if such compression occurs. The column modifier x (see below) overrides this option. linesize(n) Draw lines or rules (e.g., from box) with a thickness of n points. The default is the current type size when the region begins. This option is ignored on terminal devices. nokeep Don't use roff diversions to manage page breaks. Normally, tbl employs them to avoid breaking a page within a table row. This usage can sometimes interact badly with macro packages' own use of diversions—when footnotes, for example, are employed. This is a GNU extension. nospaces Ignore leading and trailing spaces in table entries. This is a GNU extension. nowarn Suppress diagnostic messages produced at document formatting time when the line or page lengths are inadequate to contain a table row. This is a GNU extension. tab(c) Use the character c instead of a tab to separate entries in a row of table data. Table format specification The table format specification is mandatory: it determines the number of columns in the table and directs how the entries within it are to be typeset. The format specification is a series of column descriptors. Each descriptor encodes a classifier followed by zero or more modifiers. Classifiers are letters (recognized case-insensitively) or punctuation symbols; modifiers consist of or begin with letters or numerals. Spaces, tabs, newlines, and commas separate descriptors. Newlines and commas are special; they apply the descriptors following them to a subsequent row of the table. (This enables column headings to be centered or emboldened while the table entries for the data are not, for instance.) We term the resulting group of column descriptors a row definition. Within a row definition, separation between column descriptors (by spaces or tabs) is often optional; only some modifiers, described below, make separation necessary. Each column descriptor begins with a mandatory classifier, a character that selects from one of several arrangements. Some determine the positioning of table entries within a rectangular cell: centered, left-aligned, numeric (aligned to a configurable decimal separator), and so on. Others perform special operations like drawing lines or spanning entries from adjacent cells in the table. Except for “|”, any classifier can be followed by one or more modifiers; some of these accept an argument, which in GNU tbl can be parenthesized. Modifiers select fonts, set the type size, and perform other tasks described below. The format specification can occupy multiple input lines, but must conclude with a dot “.” followed by a newline. Each row definition is applied in turn to one row of the table. The last row definition is applied to rows of table data in excess of the row definitions. For clarity in this document's examples, we shall write classifiers in uppercase and modifiers in lowercase. Thus, “CbCb,LR.” defines two rows of two columns. 
The first row's entries are centered and boldfaced; the second and any further rows' first and second columns are left- and right-aligned, respectively. The row definition with the most column descriptors determines the number of columns in the table; any row definition with fewer is implicitly extended on the right-hand side with L classifiers as many times as necessary to make the table rectangular. Column classifiers The L, R, and C classifiers are the easiest to understand and use. A, a Center longest entry in this column, left-align remaining entries in the column with respect to the centered entry, then indent all entries by one en. Such “alphabetic” entries (hence the name of the classifier) can be used in the same column as L-classified entries, as in “LL,AR.”. The A entries are often termed “sub-columns” due to their indentation. C, c Center entry within the column. L, l Left-align entry within the column. N, n Numerically align entry in the column. tbl aligns columns of numbers vertically at the units place. If multiple decimal separators are adjacent to a digit, it uses the rightmost one for vertical alignment. If there is no decimal separator, the rightmost digit is used for vertical alignment; otherwise, tbl centers the entry within the column. The roff dummy character \& in an entry marks the glyph preceding it (if any) as the units place; if multiple instances occur in the data, the leftmost is used for alignment. If N-classified entries share a column with L or R entries, tbl centers the widest N entry with respect to the widest L or R entry, preserving the alignment of N entries with respect to each other. The appearance of eqn equations within N-classified columns can be troublesome due to the foregoing textual scan for a decimal separator. Use the delim region option to make tbl ignore the data within eqn delimiters for that purpose. R, r Right-align entry within the column. S, s Span previous entry on the left into this column. ^ Span entry in the same column from the previous row into this row. _, - Replace table entry with a horizontal rule. An empty table entry is expected to correspond to this classifier; if data are found there, tbl issues a diagnostic message. = Replace table entry with a double horizontal rule. An empty table entry is expected to correspond to this classifier; if data are found there, tbl issues a diagnostic message. | Place a vertical rule (line) on the corresponding row of the table (if two of these are adjacent, a double vertical rule). This classifier does not contribute to the column count and no table entries correspond to it. A | to the left of the first column descriptor or to the right of the last one produces a vertical rule at the edge of the table; these are redundant (and ignored) in boxed tables. To change the table format within a tbl region, use the .T& token at the start of a line. It is followed by a format specification and table data, but not region options. The quantity of columns in a new table format thus introduced cannot increase relative to the previous table format; in that case, you must end the table region and start another. If that will not serve because the region uses box options or the columns align in an undesirable manner, you must design the initial table format specification to include the maximum quantity of columns required, and use the S horizontal spanning classifier where necessary to achieve the desired columnar alignment. 
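As a minimal sketch (the data are invented), a boxed, centered table with a centered boldface heading row over a left-aligned column and a numerically aligned column could be written:

.TS
box center tab(|);
Cb Cb
L N.
Item|Price
Widget|4.50
Gadget|12.00
.TE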
Attempting to horizontally span in the first column or vertically span on the first row is an error. Non-rectangular span areas are also not supported. Column modifiers Any number of modifiers can follow a column classifier. Arguments to modifiers, where accepted, are case-sensitive. If the same modifier is applied to a column specifier more than once, or if conflicting modifiers are applied, only the last occurrence has effect. The modifier x is mutually exclusive with e and w, but e is not mutually exclusive with w; if these are used in combination, x unsets both e and w, while either e or w overrides x. b, B Typeset entry in boldface, abbreviating f(B). d, D Align a vertically spanned table entry to the bottom (“down”), instead of the center, of its range. This is a GNU extension. e, E Equalize the widths of columns with this modifier. The column with the largest width controls. This modifier sets the default line length used in a text block. f, F Select the typeface for the table entry. This modifier must be followed by a font or style name (one or two characters not starting with a digit), font mounting position (a single digit), or a name or mounting position of any length in parentheses. The last form is a GNU extension. (The parameter corresponds to that accepted by the troff ft request.) A one-character argument not in parentheses must be separated by one or more spaces or tabs from what follows. i, I Typeset entry in an oblique or italic face, abbreviating f(I). m, M Call a groff macro before typesetting a text block (see subsection “Text blocks” below). This is a GNU extension. This modifier must be followed by a macro name of one or two characters or a name of any length in parentheses. A one-character macro name not in parentheses must be separated by one or more spaces or tabs from what follows. The named macro must be defined before the table region containing this column modifier is encountered. The macro should contain only simple groff requests to change text formatting, like adjustment or hyphenation. The macro is called after the column modifiers b, f, i, p, and v take effect; it can thus override other column modifiers. p, P Set the type size for the table entry. This modifier must be followed by an integer n with an optional leading sign. If unsigned, the type size is set to n scaled points. Otherwise, the type size is incremented or decremented per the sign by n scaled points. The use of a signed multi- digit number is a GNU extension. (The parameter corresponds to that accepted by the troff ps request.) If a type size modifier is followed by a column separation modifier (see below), they must be separated by at least one space or tab. t, T Align a vertically spanned table entry to the top, instead of the center, of its range. u, U Move the column up one half-line, “staggering” the rows. This is a GNU extension. v, V Set the vertical spacing to be used in a text block. This modifier must be followed by an integer n with an optional leading sign. If unsigned, the vertical spacing is set to n points. Otherwise, the vertical spacing is incremented or decremented per the sign by n points. The use of a signed multi-digit number is a GNU extension. (This parameter corresponds to that accepted by the troff vs request.) If a vertical spacing modifier is followed by a column separation modifier (see below), they must be separated by at least one space or tab. w, W Set the column's minimum width. 
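A hypothetical format combining several modifiers (with invented data) could read as follows: the second heading is bold and two points smaller, the first data column gets a 1.5-inch minimum width via w, and the numeric column an italic face via f:

    tbl <<'EOF' | groff -Tps > table.ps
    .TS
    tab(|);
    Cb Cbp-2
    Lw(1.5i) Nf(I).
    Name|Qty
    widget|7
    gadget|42
    .TE
    EOF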
This modifier must be followed by a number, which is either a unitless integer, or a roff horizontal measurement in parentheses. Parentheses are required if the width is to be followed immediately by an explicit column separation (alternatively, follow the width with one or more spaces or tabs). If no unit is specified, ens are assumed. This modifier sets the default line length used in a text block. x, X Expand the column. After computing the column widths, distribute any remaining line length evenly over all columns bearing this modifier. Applying the x modifier to more than one column is a GNU extension. This modifier sets the default line length used in a text block. z, Z Ignore the table entries corresponding to this column for width calculation purposes; that is, compute the column's width using only the information in its descriptor. n A numeric suffix on a column descriptor sets the separation distance (in ens) from the succeeding column; the default separation is 3n. This separation is proportionally multiplied if the expand region option is in effect; in the case of tables wider than the output line length, this separation might be zero. A negative separation cannot be specified. A separation amount after the last column in a row is nonsensical and provokes a diagnostic from tbl. Table data The table data come after the format specification. Each input line corresponds to a table row, except that a backslash at the end of a line of table data continues an entry on the next input line. (Text blocks, discussed below, also spread table entries across multiple input lines.) Table entries within a row are separated in the input by a tab character by default; see the tab region option above. Excess entries in a row of table data (those that have no corresponding column descriptor, not even an implicit one arising from rectangularization of the table) are discarded with a diagnostic message. roff control lines are accepted between rows of table data and within text blocks. If you wish to visibly mark an empty table entry in the document source, populate it with the \& roff dummy character. The table data are interrupted by a line consisting of the .T& input token, and conclude with the line .TE. Ordinarily, a table entry is typeset rigidly. It is not filled, broken, hyphenated, adjusted, or populated with additional inter- sentence space. tbl instructs the formatter to measure each table entry as it occurs in the input, updating the width required by its corresponding column. If the z modifier applies to the column, this measurement is ignored; if w applies and its argument is larger than this width, that argument is used instead. In contrast to conventional roff input (within a paragraph, say), changes to text formatting, such as font selection or vertical spacing, do not persist between entries. Several forms of table entry are interpreted specially. • If a table row contains only an underscore or equals sign (_ or =), a single or double horizontal rule (line), respectively, is drawn across the table at that point. • A table entry containing only _ or = on an otherwise populated row is replaced by a single or double horizontal rule, respectively, joining its neighbors. • Prefixing a lone underscore or equals sign with a backslash also has meaning. If a table entry consists only of \_ or \= on an otherwise populated row, it is replaced by a single or double horizontal rule, respectively, that does not (quite) join its neighbors. 
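The special entry forms can be sketched with invented data: the row consisting only of _ draws a rule across the whole table, the lone _ entry draws a rule joining its neighbors, and \_ draws one that stops just short of them:

    tbl <<'EOF' | groff -Tascii
    .TS
    tab(|);
    L L.
    alpha|beta
    _
    gamma|_
    delta|\_
    .TE
    EOF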
• A table entry consisting of \Rx, where x is any roff ordinary or special character, is replaced by enough repetitions of the glyph corresponding to x to fill the column, albeit without joining its neighbors. • On any row but the first, a table entry of \^ causes the entry above it to span down into the current one. On occasion, these special tokens may be required as literal table data. To use either _ or = literally and alone in an entry, prefix or suffix it with the roff dummy character \&. To express \_, \=, or \R, use a roff escape sequence to interpolate the backslash (\e or \[rs]). A reliable way to emplace the \^ glyph sequence within a table entry is to use a pair of groff special character escape sequences (\[rs]\[ha]). Rows of table entries can be interleaved with groff control lines; these do not count as table data. On such lines the default control character (.) must be used (and not changed); the no-break control character is not recognized. To start the first table entry in a row with a dot, precede it with the roff dummy character \&. Text blocks An ordinary table entry's contents can make a column, and therefore the table, excessively wide; the table then exceeds the line length of the page, and becomes ugly or is exposed to truncation by the output device. When a table entry requires more conventional typesetting, breaking across more than one output line (and thereby increasing the height of its row), it can be placed within a text block. tbl interprets a table entry beginning with “T{” at the end of an input line not as table data, but as a token starting a text block. Similarly, “T}” at the start of an input line ends a text block; it must also end the table entry. Text block tokens can share an input line with other table data (preceding T{ and following T}). Input lines between these tokens are formatted in a diversion by troff. Text blocks cannot be nested. Multiple text blocks can occur in a table row. Text blocks are formatted as was the text prior to the table, modified by applicable column descriptors. Specifically, the classifiers A, C, L, N, R, and S determine a text block's alignment within its cell, but not its adjustment. Add na or ad requests to the beginning of a text block to alter its adjustment distinctly from other text in the document. As with other table entries, when a text block ends, any alterations to formatting parameters are discarded. They do not affect subsequent table entries, not even other text blocks. If w or x modifiers are not specified for all columns of a text block's span, the default length of the text block (more precisely, the line length used to process the text block diversion) is computed as L×C/(N+1), where L is the current line length, C the number of columns spanned by the text block, and N the number of columns in the table. If necessary, you can also control a text block's width by including an ll (line length) request in it prior to any text to be formatted. Because a diversion is used to format the text block, its height and width are subsequently available in the registers dn and dl, respectively. roff interface The register TW stores the width of the table region in basic units; it can't be used within the region itself, but is defined before the .TE token is output so that a groff macro named TE can make use of it. T. is a Boolean-valued register indicating whether the bottom of the table is being processed. The #T register marks the top of the table. Avoid using these names for any other purpose. 
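A minimal text block might look like this (invented content); the x modifier hands the leftover line length to the second column, so the block is filled and broken within its cell:

    tbl <<'EOF' | groff -Tps > out.ps
    .TS
    tab(|);
    Lb Lx.
    term|T{
    A longer definition that the formatter may break across
    several output lines inside this table cell.
    T}
    .TE
    EOF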
tbl also defines a macro T# to produce the bottom and side lines of a boxed table. While tbl itself arranges for the output to include a call of this macro at the end of such a table, it can also be used by macro packages to create boxes for multi-page tables by calling it from a page footer macro that is itself called by a trap planted near the bottom of the page. See section “Limitations” below for more on multi-page tables. GNU tbl internally employs register, string, macro, and diversion names beginning with the numeral 3. A document to be preprocessed with GNU tbl should not use any such identifiers. Interaction with eqn tbl should always be called before eqn(1). (groff(1) automatically arranges preprocessors in the correct order.) Don't call the EQ and EN macros within tables; instead, set up delimiters in your eqn input and use the delim region option so that tbl will recognize them. GNU tbl enhancements In addition to extensions noted above, GNU tbl removes constraints endured by users of AT&T tbl. • Region options can be specified in any lettercase. • There is no limit on the number of columns in a table, regardless of their classification, nor any limit on the number of text blocks. • All table rows are considered when deciding column widths, not just those occurring in the first 200 input lines of a region. Similarly, table continuation (.T&) tokens are recognized outside a region's first 200 input lines. • Numeric and alphabetic entries may appear in the same column. • Numeric and alphabetic entries may span horizontally. Using GNU tbl within macros You can embed a table region inside a macro definition. However, since tbl writes its own macro definitions at the beginning of each table region, it is necessary to call end macros instead of ending macro definitions with “..”. Additionally, the escape character must be disabled. Not all tbl features can be exercised from such macros because tbl is a roff preprocessor: it sees the input earlier than troff does. For example, vertically aligning decimal separators fails if the numbers containing them occur as macro or string parameters; the alignment is performed by tbl itself, which sees only \$1, \$2, and so on, and therefore can't recognize a decimal separator that only appears later when troff interpolates a macro or string definition. Using tbl macros within conditional input (that is, contingent upon an if, ie, el, or while request) can result in misleading line numbers in subsequent diagnostics. tbl unconditionally injects its output into the source document, but the conditional branch containing it may not be taken, and if it is not, the lf requests that tbl injects to restore the source line number cannot take effect. Consider copying the input line counter register c. and restoring its value at a convenient location after applicable arithmetic. --help displays a usage message, while -v and --version show version information; all exit afterward. -C Enable AT&T compatibility mode: recognize .TS and .TE even when followed by a character other than space or newline. Furthermore, interpret the uninterpreted leader escape sequence \a.
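When the preprocessors are run by hand rather than through groff's own ordering, the sequence described above might look like this (the input file name is hypothetical):

    tbl report.tr | eqn | groff -ms -Tps > report.ps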
# tbl

> Table preprocessor for the groff (GNU Troff) document formatting system. See
> also `groff` and `troff`. More information: https://manned.org/tbl.

* Process input with tables, saving the output for future typesetting with groff to PostScript:

`tbl {{path/to/input_file}} > {{path/to/output.roff}}`

* Typeset input with tables to PDF using the [me] macro package:

`tbl {{path/to/input.tbl}} | groff -{{me}} -T {{pdf}} > {{path/to/output.pdf}}`
fg
If job control is enabled (see the description of set -m), the fg utility shall move a background job from the current environment (see Section 2.12, Shell Execution Environment) into the foreground. Using fg to place a job into the foreground shall remove its process ID from the list of those ``known in the current shell execution environment''; see Section 2.9.3.1, Examples. The fg utility supports no options.
# fg > Run jobs in foreground. More information: https://manned.org/fg. * Bring most recently suspended or running background job to foreground: `fg` * Bring a specific job to foreground: `fg %{{job_id}}`
kill
The default signal for kill is TERM. Use -l or -L to list available signals. Particularly useful signals include HUP, INT, KILL, STOP, CONT, and 0. Alternate signals may be specified in three ways: -9, -SIGKILL, or -KILL. Negative PID values may be used to choose whole process groups; see the PGID column in ps command output. A PID of -1 is special; it indicates all processes except the kill process itself and init. <pid> [...] Send signal to every <pid> listed. -<signal> -s <signal> --signal <signal> Specify the signal to be sent. The signal can be specified by name or by number. The behavior of signals is explained in the signal(7) manual page. -q, --queue value Use sigqueue(3) rather than kill(2); the value argument specifies an integer to be sent with the signal. If the receiving process has installed a handler for this signal using the SA_SIGINFO flag to sigaction(2), then it can obtain this data via the si_value field of the siginfo_t structure. -l, --list [signal] List signal names. This option has an optional argument, which converts a signal number to a signal name, or the other way round. -L, --table List signal names in a nice table.
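A few illustrative invocations (the PIDs and the payload value are invented):

    # Three equivalent ways to send SIGKILL to PID 1234:
    kill -9 1234
    kill -KILL 1234
    kill -s SIGKILL 1234
    # Queue SIGUSR1 with an integer payload via sigqueue(3):
    kill -q 42 -s USR1 1234
    # Signal every member of process group 5678 (note the -- before the negative PID):
    kill -s TERM -- -5678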
# kill

> Sends a signal to a process, usually related to stopping the process. All
> signals except for SIGKILL and SIGSTOP can be intercepted by the process to
> perform a clean exit. More information: https://manned.org/kill.

* Terminate a program using the default SIGTERM (terminate) signal:

`kill {{process_id}}`

* List available signal names (to be used without the `SIG` prefix):

`kill -l`

* Terminate a background job:

`kill %{{job_id}}`

* Terminate a program using the SIGHUP (hang up) signal. Many daemons will reload instead of terminating:

`kill -{{1|HUP}} {{process_id}}`

* Terminate a program using the SIGINT (interrupt) signal. This is typically initiated by the user pressing `Ctrl + C`:

`kill -{{2|INT}} {{process_id}}`

* Signal the operating system to immediately terminate a program (which gets no chance to capture the signal):

`kill -{{9|KILL}} {{process_id}}`

* Signal the operating system to pause a program until a SIGCONT ("continue") signal is received:

`kill -{{19|STOP}} {{process_id}}`

* Send a `SIGUSR1` signal to all processes with the given PGID (process group ID):

`kill -{{SIGUSR1}} -{{group_id}}`
git-credential
Git has an internal interface for storing and retrieving credentials from system-specific helpers, as well as prompting the user for usernames and passwords. The git-credential command exposes this interface to scripts which may want to retrieve, store, or prompt for credentials in the same manner as Git. The design of this scriptable interface models the internal C API; see credential.h for more background on the concepts. git-credential takes an "action" option on the command-line (one of fill, approve, or reject) and reads a credential description on stdin (see INPUT/OUTPUT FORMAT). If the action is fill, git-credential will attempt to add "username" and "password" attributes to the description by reading config files, by contacting any configured credential helpers, or by prompting the user. The username and password attributes of the credential description are then printed to stdout together with the attributes already provided. If the action is approve, git-credential will send the description to any configured credential helpers, which may store the credential for later use. If the action is reject, git-credential will send the description to any configured credential helpers, which may erase any stored credential matching the description. If the action is approve or reject, no output should be emitted.
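For instance, a fill request might be issued as below (the host is invented); the credential description is written to stdin as key=value lines:

    git credential fill <<'EOF'
    protocol=https
    host=example.com
    EOF

If a configured helper or the user supplies credentials, the output echoes the attributes along with username= and password= lines.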
# git credential > Retrieve and store user credentials. More information: https://git- > scm.com/docs/git-credential. * Display credential information, retrieving the username and password from configuration files: `echo "{{url=http://example.com}}" | git credential fill` * Send credential information to all configured credential helpers to store for later use: `echo "{{url=http://example.com}}" | git credential approve` * Erase the specified credential information from all the configured credential helpers: `echo "{{url=http://example.com}}" | git credential reject`
git-stripspace
Read text, such as commit messages, notes, tags and branch descriptions, from the standard input and clean it in the manner used by Git. With no arguments, this will: • remove trailing whitespace from all lines • collapse multiple consecutive empty lines into one empty line • remove empty lines from the beginning and end of the input • add a missing \n to the last line if necessary. In the case where the input consists entirely of whitespace characters, no output will be produced. NOTE: This is intended for cleaning metadata, prefer the --whitespace=fix mode of git-apply(1) for correcting whitespace of patches or files in the repository. -s, --strip-comments Skip and remove all lines starting with comment character (default #). -c, --comment-lines Prepend comment character and blank to each line. Lines will automatically be terminated with a newline. On empty lines, only the comment character will be prepended.
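A quick demonstration with invented input:

    printf 'Subject line   \n\n\n\nBody text\n' | git stripspace

The trailing spaces are stripped and the run of empty lines collapses to a single empty line.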
# git stripspace

> Read text (e.g. commit messages, notes, tags, and branch descriptions) from
> `stdin` and clean it in the manner used by Git. More information:
> https://git-scm.com/docs/git-stripspace.

* Trim whitespace from a file:

`cat {{path/to/file}} | git stripspace`

* Trim whitespace and Git comments from a file:

`cat {{path/to/file}} | git stripspace --strip-comments`

* Convert all lines in a file into Git comments:

`git stripspace --comment-lines < {{path/to/file}}`
hostname
Print or set the hostname of the current system. --help display this help and exit --version output version information and exit
# hostname > Show or set the system's host name. More information: > https://manned.org/hostname. * Show current host name: `hostname` * Show the network address of the host name: `hostname -i` * Show all network addresses of the host: `hostname -I` * Show the FQDN (Fully Qualified Domain Name): `hostname --fqdn` * Set current host name: `hostname {{new_hostname}}`
fuser
fuser displays the PIDs of processes using the specified files or file systems. In the default display mode, each file name is followed by a letter denoting the type of access: c current directory. e executable being run. f open file. f is omitted in default display mode. F open file for writing. F is omitted in default display mode. r root directory. m mmap'ed file or shared library. . Placeholder, omitted in default display mode. fuser returns a non-zero return code if none of the specified files is accessed or in case of a fatal error. If at least one access has been found, fuser returns zero. In order to look up processes using TCP and UDP sockets, the corresponding name space has to be selected with the -n option. By default fuser will look in both IPv6 and IPv4 sockets. To change the default behavior, use the -4 and -6 options. The socket(s) can be specified by the local and remote port, and the remote address. All fields are optional, but commas in front of missing fields must be present: [lcl_port][,[rmt_host][,[rmt_port]]] Either symbolic or numeric values can be used for IP addresses and port numbers. fuser outputs only the PIDs to stdout, everything else is sent to stderr. -a, --all Show all files specified on the command line. By default, only files that are accessed by at least one process are shown. -c Same as -m option, used for POSIX compatibility. -f Silently ignored, used for POSIX compatibility. -k, --kill Kill processes accessing the file. Unless changed with -SIGNAL, SIGKILL is sent. An fuser process never kills itself, but may kill other fuser processes. The effective user ID of the process executing fuser is set to its real user ID before attempting to kill. -i, --interactive Ask the user for confirmation before killing a process. This option is silently ignored if -k is not present too. -I, --inode For the name space file let all comparisons be based on the inodes of the specified file(s) and never on the file names even on network based file systems. -l, --list-signals List all known signal names. -m NAME, --mount NAME NAME specifies a file on a mounted file system or a block device that is mounted. All processes accessing files on that file system are listed. If a directory is specified, it is automatically changed to NAME/ to use any file system that might be mounted on that directory. -M, --ismountpoint Request will be fulfilled only if NAME specifies a mountpoint. This is an invaluable seat belt which prevents you from killing the machine if NAME happens to not be a filesystem. -w Kill only processes which have write access. This option is silently ignored if -k is not present too. -n NAMESPACE, --namespace NAMESPACE Select a different name space. The name spaces file (file names, the default), udp (local UDP ports), and tcp (local TCP ports) are supported. For ports, either the port number or the symbolic name can be specified. If there is no ambiguity, the shortcut notation name/space (e.g., 80/tcp) can be used. -s, --silent Silent operation. -u and -v are ignored in this mode. -a must not be used with -s. -SIGNAL Use the specified signal instead of SIGKILL when killing processes. Signals can be specified either by name (e.g., -HUP) or by number (e.g., -1). This option is silently ignored if the -k option is not used. -u, --user Append the user name of the process owner to each PID. -v, --verbose Verbose mode. Processes are shown in a ps-like style. The fields PID, USER and COMMAND are similar to ps. ACCESS shows how the process accesses the file. 
Verbose mode will also show when a particular file is being accessed as a mount point, knfs export or swap file. In this case kernel is shown instead of the PID. -V, --version Display version information. -4, --ipv4 Search only for IPv4 sockets. This option must not be used with the -6 option and only has an effect with the tcp and udp namespaces. -6, --ipv6 Search only for IPv6 sockets. This option must not be used with the -4 option and only has an effect with the tcp and udp namespaces.
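A few sketches of the socket name spaces (the addresses and ports are invented):

    # Processes using local TCP port 80, shortcut notation:
    fuser 80/tcp
    # The same, spelled out with the namespace option:
    fuser -v -n tcp 80
    # Local port 8080 combined with a particular remote host:
    fuser -n tcp 8080,10.0.0.1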
# fuser > Display process IDs currently using files or sockets. More information: > https://manned.org/fuser. * Find which processes are accessing a file or directory: `fuser {{path/to/file_or_directory}}` * Show more fields (`USER`, `PID`, `ACCESS` and `COMMAND`): `fuser --verbose {{path/to/file_or_directory}}` * Identify processes using a TCP socket: `fuser --namespace tcp {{port}}` * Kill all processes accessing a file or directory (sends the `SIGKILL` signal): `fuser --kill {{path/to/file_or_directory}}` * Find which processes are accessing the filesystem containing a specific file or directory: `fuser --mount {{path/to/file_or_directory}}` * Kill all processes with a TCP connection on a specific port: `fuser --kill {{port}}/tcp`
git-mergetool
Use git mergetool to run one of several merge utilities to resolve merge conflicts. It is typically run after git merge. If one or more <file> parameters are given, the merge tool program will be run to resolve differences on each file (skipping those without conflicts). Specifying a directory will include all unresolved files in that path. If no <file> names are specified, git mergetool will run the merge tool program on every file with merge conflicts. -t <tool>, --tool=<tool> Use the merge resolution program specified by <tool>. Valid values include emerge, gvimdiff, kdiff3, meld, vimdiff, and tortoisemerge. Run git mergetool --tool-help for the list of valid <tool> settings. If a merge resolution program is not specified, git mergetool will use the configuration variable merge.tool. If the configuration variable merge.tool is not set, git mergetool will pick a suitable default. You can explicitly provide a full path to the tool by setting the configuration variable mergetool.<tool>.path. For example, you can configure the absolute path to kdiff3 by setting mergetool.kdiff3.path. Otherwise, git mergetool assumes the tool is available in PATH. Instead of running one of the known merge tool programs, git mergetool can be customized to run an alternative program by specifying the command line to invoke in a configuration variable mergetool.<tool>.cmd. When git mergetool is invoked with this tool (either through the -t or --tool option or the merge.tool configuration variable) the configured command line will be invoked with $BASE set to the name of a temporary file containing the common base for the merge, if available; $LOCAL set to the name of a temporary file containing the contents of the file on the current branch; $REMOTE set to the name of a temporary file containing the contents of the file to be merged, and $MERGED set to the name of the file to which the merge tool should write the result of the merge resolution. If the custom merge tool correctly indicates the success of a merge resolution with its exit code, then the configuration variable mergetool.<tool>.trustExitCode can be set to true. Otherwise, git mergetool will prompt the user to indicate the success of the resolution after the custom tool has exited. --tool-help Print a list of merge tools that may be used with --tool. -y, --no-prompt Don’t prompt before each invocation of the merge resolution program. This is the default if the merge resolution program is explicitly specified with the --tool option or with the merge.tool configuration variable. --prompt Prompt before each invocation of the merge resolution program to give the user a chance to skip the path. -g, --gui When git-mergetool is invoked with the -g or --gui option the default merge tool will be read from the configured merge.guitool variable instead of merge.tool. If merge.guitool is not set, we will fallback to the tool configured under merge.tool. This may be autoselected using the configuration variable mergetool.guiDefault. --no-gui This overrides a previous -g or --gui setting or mergetool.guiDefault configuration and reads the default merge tool from the configured merge.tool variable. -O<orderfile> Process files in the order specified in the <orderfile>, which has one shell glob pattern per line. This overrides the diff.orderFile configuration variable (see git-config(1)). To cancel diff.orderFile, use -O/dev/null.
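As a sketch, a hypothetical custom tool named mymerge could be wired up like this, following the variables described above:

    git config mergetool.mymerge.cmd 'mymerge "$BASE" "$LOCAL" "$REMOTE" -o "$MERGED"'
    git config mergetool.mymerge.trustExitCode true
    git mergetool --tool=mymerge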
# git mergetool > Run merge conflict resolution tools to resolve merge conflicts. More > information: https://git-scm.com/docs/git-mergetool. * Launch the default merge tool to resolve conflicts: `git mergetool` * List valid merge tools: `git mergetool --tool-help` * Launch the merge tool identified by a name: `git mergetool --tool {{tool_name}}` * Don't prompt before each invocation of the merge tool: `git mergetool --no-prompt` * Explicitly use the GUI merge tool (see the `merge.guitool` config variable): `git mergetool --gui` * Explicitly use the regular merge tool (see the `merge.tool` config variable): `git mergetool --no-gui`
su
su allows commands to be run with a substitute user and group ID. When called with no user specified, su defaults to running an interactive shell as root. When user is specified, additional arguments can be supplied, in which case they are passed to the shell. For backward compatibility, su defaults to not change the current directory and to only set the environment variables HOME and SHELL (plus USER and LOGNAME if the target user is not root). It is recommended to always use the --login option (instead of its shortcut -) to avoid side effects caused by mixing environments. This version of su uses PAM for authentication, account and session management. Some configuration options found in other su implementations, such as support for a wheel group, have to be configured via PAM. su is mostly designed for unprivileged users, the recommended solution for privileged users (e.g., scripts executed by root) is to use non-set-user-ID command runuser(1) that does not require authentication and provides separate PAM configuration. If the PAM session is not required at all then the recommended solution is to use command setpriv(1). Note that su in all cases uses PAM (pam_getenvlist(3)) to do the final environment modification. Command-line options such as --login and --preserve-environment affect the environment before it is modified by PAM. Since version 2.38 su resets process resource limits RLIMIT_NICE, RLIMIT_RTPRIO, RLIMIT_FSIZE, RLIMIT_AS and RLIMIT_NOFILE. -c, --command=command Pass command to the shell with the -c option. -f, --fast Pass -f to the shell, which may or may not be useful, depending on the shell. -g, --group=group Specify the primary group. This option is available to the root user only. -G, --supp-group=group Specify a supplementary group. This option is available to the root user only. The first specified supplementary group is also used as a primary group if the option --group is not specified. -, -l, --login Start the shell as a login shell with an environment similar to a real login: • clears all the environment variables except TERM and variables specified by --whitelist-environment • initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH • changes to the target user’s home directory • sets argv[0] of the shell to '-' in order to make the shell a login shell -m, -p, --preserve-environment Preserve the entire environment, i.e., do not set HOME, SHELL, USER or LOGNAME. This option is ignored if the option --login is specified. -P, --pty Create a pseudo-terminal for the session. The independent terminal provides better security as the user does not share a terminal with the original session. This can be used to avoid TIOCSTI ioctl terminal injection and other security attacks against terminal file descriptors. The entire session can also be moved to the background (e.g., su --pty - username -c application &). If the pseudo-terminal is enabled, then su works as a proxy between the sessions (sync stdin and stdout). This feature is mostly designed for interactive sessions. If the standard input is not a terminal, but for example a pipe (e.g., echo "date" | su --pty), then the ECHO flag for the pseudo-terminal is disabled to avoid messy output. -s, --shell=shell Run the specified shell instead of the default. 
The shell to run is selected according to the following rules, in order: • the shell specified with --shell • the shell specified in the environment variable SHELL, if the --preserve-environment option is used • the shell listed in the passwd entry of the target user • /bin/sh If the target user has a restricted shell (i.e., not listed in /etc/shells), the --shell option and the SHELL environment variable are ignored unless the calling user is root. --session-command=command Same as -c, but do not create a new session. (Discouraged.) -w, --whitelist-environment=list Don’t reset the environment variables specified in the comma-separated list when clearing the environment for --login. The whitelist is ignored for the environment variables HOME, SHELL, USER, LOGNAME, and PATH. -h, --help Display help text and exit. -V, --version Print version and exit.
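Two illustrative invocations (the user name is invented):

    # Clean login environment, but keep LANG and LC_ALL:
    su --login --whitelist-environment=LANG,LC_ALL someuser
    # Run a single command in a separate pseudo-terminal:
    su --pty --login someuser -c 'id'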
# su > Switch shell to another user. More information: https://manned.org/su. * Switch to superuser (requires the root password): `su` * Switch to a given user (requires the user's password): `su {{username}}` * Switch to a given user and simulate a full login shell: `su - {{username}}` * Execute a command as another user: `su - {{username}} -c "{{command}}"`
git-request-pull
Generate a request asking your upstream project to pull changes into their tree. The request, printed to the standard output, begins with the branch description, summarizes the changes and indicates from where they can be pulled. The upstream project is expected to have the commit named by <start> and the output asks it to integrate the changes you made since that commit, up to the commit named by <end>, by visiting the repository named by <URL>. -p Include patch text in the output. <start> Commit to start at. This names a commit that is already in the upstream history. <URL> The repository URL to be pulled from. <end> Commit to end at (defaults to HEAD). This names the commit at the tip of the history you are asking to be pulled. When the repository named by <URL> has the commit at a tip of a ref that is different from the ref you have locally, you can use the <local>:<remote> syntax, to have its local name, a colon :, and its remote name.
# git request-pull > Generate a request asking the upstream project to pull changes into its > tree. More information: https://git-scm.com/docs/git-request-pull. * Produce a request summarizing the changes between the v1.1 release and a specified branch: `git request-pull {{v1.1}} {{https://example.com/project}} {{branch_name}}` * Produce a request summarizing the changes between the v0.1 release on the `foo` branch and the local `bar` branch: `git request-pull {{v0.1}} {{https://example.com/project}} {{foo:bar}}`
perf
Performance counters for Linux are a new kernel-based subsystem that provides a framework for all things performance analysis. It covers hardware-level features (CPU/PMU, Performance Monitoring Unit) as well as software features (software counters, tracepoints). -h, --help Run the perf help command. -v, --version Display perf version. -vv Print the compiled-in status of libraries. --exec-path Display or set exec path. --html-path Display html documentation path. -p, --paginate Set up pager. --no-pager Do not set pager. --buildid-dir Set up the buildid cache directory. It has higher priority than the buildid.dir config file option. --list-cmds List the most commonly used perf commands. --list-opts List available perf options. --debugfs-dir Set the debugfs directory or set the environment variable PERF_DEBUGFS_DIR. --debug Set a debug variable (see list below) to a value in the range (0, 10). Use like: --debug verbose # sets verbose = 1 --debug verbose=2 # sets verbose = 2 List of debug variables allowed to be set: verbose - general debug messages ordered-events - ordered events object debug messages data-convert - data convert command debug messages stderr - write debug output (option -v) to stderr in browser mode perf-event-open - Print perf_event_open() arguments and return value
# perf > Framework for Linux performance counter measurements. More information: > https://perf.wiki.kernel.org. * Display basic performance counter stats for a command: `perf stat {{gcc hello.c}}` * Display system-wide real-time performance counter profile: `sudo perf top` * Run a command and record its profile into `perf.data`: `sudo perf record {{command}}` * Record the profile of an existing process into `perf.data`: `sudo perf record -p {{pid}}` * Read `perf.data` (created by `perf record`) and display the profile: `sudo perf report`
chrt
chrt sets or retrieves the real-time scheduling attributes of an existing PID, or runs command with the given attributes. -a, --all-tasks Set or retrieve the scheduling attributes of all the tasks (threads) for a given PID. -m, --max Show minimum and maximum valid priorities, then exit. -p, --pid Operate on an existing PID and do not launch a new task. -v, --verbose Show status information. -h, --help Display help text and exit. -V, --version Print version and exit.
# chrt

> Manipulate the real-time attributes of a process. More information:
> https://man7.org/linux/man-pages/man1/chrt.1.html.

* Display attributes of a process:

`chrt --pid {{PID}}`

* Display attributes of all threads of a process:

`chrt --all-tasks --pid {{PID}}`

* Display the min/max priority values that can be used with `chrt`:

`chrt --max`

* Set the scheduling policy and priority for a running process:

`chrt --{{deadline|idle|batch|rr|fifo|other}} --pid {{priority}} {{PID}}`
git-describe
The command finds the most recent tag that is reachable from a commit. If the tag points to the commit, then only the tag is shown. Otherwise, it suffixes the tag name with the number of additional commits on top of the tagged object and the abbreviated object name of the most recent commit. The result is a "human-readable" object name which can also be used to identify the commit to other git commands. By default (without --all or --tags) git describe only shows annotated tags. For more information about creating annotated tags see the -a and -s options to git-tag(1). If the given object refers to a blob, it will be described as <commit-ish>:<path>, such that the blob can be found at <path> in the <commit-ish>, which itself describes the first commit in which this blob occurs in a reverse revision walk from HEAD. <commit-ish>... Commit-ish object names to describe. Defaults to HEAD if omitted. --dirty[=<mark>], --broken[=<mark>] Describe the state of the working tree. When the working tree matches HEAD, the output is the same as "git describe HEAD". If the working tree has local modification "-dirty" is appended to it. If a repository is corrupt and Git cannot determine if there is local modification, Git will error out, unless ‘--broken’ is given, which appends the suffix "-broken" instead. --all Instead of using only the annotated tags, use any ref found in refs/ namespace. This option enables matching any known branch, remote-tracking branch, or lightweight tag. --tags Instead of using only the annotated tags, use any tag found in refs/tags namespace. This option enables matching a lightweight (non-annotated) tag. --contains Instead of finding the tag that predates the commit, find the tag that comes after the commit, and thus contains it. Automatically implies --tags. --abbrev=<n> Instead of using the default number of hexadecimal digits (which will vary according to the number of objects in the repository with a default of 7) of the abbreviated object name, use <n> digits, or as many digits as needed to form a unique object name. An <n> of 0 will suppress long format, only showing the closest tag. --candidates=<n> Instead of considering only the 10 most recent tags as candidates to describe the input commit-ish consider up to <n> candidates. Increasing <n> above 10 will take slightly longer but may produce a more accurate result. An <n> of 0 will cause only exact matches to be output. --exact-match Only output exact matches (a tag directly references the supplied commit). This is a synonym for --candidates=0. --debug Verbosely display information about the searching strategy being employed to standard error. The tag name will still be printed to standard out. --long Always output the long format (the tag, the number of commits and the abbreviated commit name) even when it matches a tag. This is useful when you want to see parts of the commit object name in "describe" output, even when the commit in question happens to be a tagged version. Instead of just emitting the tag name, it will describe such a commit as v1.2-0-gdeadbee (0th commit since tag v1.2 that points at object deadbee....). --match <pattern> Only consider tags matching the given glob(7) pattern, excluding the "refs/tags/" prefix. If used with --all, it also considers local branches and remote-tracking references matching the pattern, excluding respectively "refs/heads/" and "refs/remotes/" prefix; references of other types are never considered. 
If given multiple times, a list of patterns will be accumulated, and tags matching any of the patterns will be considered. Use --no-match to clear and reset the list of patterns. --exclude <pattern> Do not consider tags matching the given glob(7) pattern, excluding the "refs/tags/" prefix. If used with --all, it also does not consider local branches and remote-tracking references matching the pattern, excluding respectively "refs/heads/" and "refs/remotes/" prefix; references of other types are never considered. If given multiple times, a list of patterns will be accumulated and tags matching any of the patterns will be excluded. When combined with --match a tag will be considered when it matches at least one --match pattern and does not match any of the --exclude patterns. Use --no-exclude to clear and reset the list of patterns. --always Show uniquely abbreviated commit object as fallback. --first-parent Follow only the first parent commit upon seeing a merge commit. This is useful when you wish to not match tags on branches merged in the history of the target commit.
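A few invocations exercising the options above (the tag pattern is invented):

    # Always use the long form, e.g. v1.2-0-gdeadbee:
    git describe --long
    # Also consider lightweight tags, and mark an unclean working tree:
    git describe --tags --dirty
    # Restrict candidate tags to release-style names:
    git describe --match 'v[0-9]*'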
# git describe > Give an object a human-readable name based on an available ref. More > information: https://git-scm.com/docs/git-describe. * Create a unique name for the current commit (the name contains the most recent annotated tag, the number of additional commits, and the abbreviated commit hash): `git describe` * Create a name with 4 digits for the abbreviated commit hash: `git describe --abbrev={{4}}` * Generate a name with the tag reference path: `git describe --all` * Describe a Git tag: `git describe {{v1.0.0}}` * Create a name for the last commit of a given branch: `git describe {{branch_name}}`
tail
The tail utility shall copy its input file to the standard output beginning at a designated place. Copying shall begin at the point in the file indicated by the -c number or -n number options. The option-argument number shall be counted in units of lines or bytes, according to the options -n and -c. Both line and byte counts start from 1. Tails relative to the end of the file may be saved in an internal buffer, and thus may be limited in length. Such a buffer, if any, shall be no smaller than {LINE_MAX}*10 bytes. The tail utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except that '+' may be recognized as an option delimiter as well as '-'. The following options shall be supported: -c number The application shall ensure that the number option- argument is a decimal integer, optionally including a sign. The sign shall affect the location in the file, measured in bytes, to begin the copying: ┌─────┬────────────────────────────────────────┐ │Sign │ Copying Starts │ ├─────┼────────────────────────────────────────┤ │ + │ Relative to the beginning of the file. │ │ - │ Relative to the end of the file. │ │none │ Relative to the end of the file. │ └─────┴────────────────────────────────────────┘ The application shall ensure that if the sign of the number option-argument is '+', the number option- argument is a non-zero decimal integer. The origin for counting shall be 1; that is, -c +1 represents the first byte of the file, -c -1 the last. -f If the input file is a regular file or if the file operand specifies a FIFO, do not terminate after the last line of the input file has been copied, but read and copy further bytes from the input file when they become available. If no file operand is specified and standard input is a pipe or FIFO, the -f option shall be ignored. If the input file is not a FIFO, pipe, or regular file, it is unspecified whether or not the -f option shall be ignored. -n number This option shall be equivalent to -c number, except the starting location in the file shall be measured in lines instead of bytes. The origin for counting shall be 1; that is, -n +1 represents the first line of the file, -n -1 the last. If neither -c nor -n is specified, -n 10 shall be assumed.
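The sign conventions can be sketched against a hypothetical file:

    tail -n +2 file.txt    # from the second line to the end of the file
    tail -n -5 file.txt    # the last five lines (same as tail -n 5)
    tail -c +1 file.txt    # the whole file, starting at the first byte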
# tail > Display the last part of a file. See also: `head`. More information: > https://manned.org/man/freebsd-13.0/tail.1. * Show last 'count' lines in file: `tail -n {{8}} {{path/to/file}}` * Print a file from a specific line number: `tail -n +{{8}} {{path/to/file}}` * Print a specific count of bytes from the end of a given file: `tail -c {{8}} {{path/to/file}}` * Print the last lines of a given file and keep reading file until `Ctrl + C`: `tail -f {{path/to/file}}` * Keep reading file until `Ctrl + C`, even if the file is inaccessible: `tail -F {{path/to/file}}` * Show last 'count' lines in 'file' and refresh every 'seconds' seconds: `tail -n {{8}} -s {{10}} -f {{path/to/file}}`
truncate
Shrink or extend the size of each FILE to the specified size A FILE argument that does not exist is created. If a FILE is larger than the specified size, the extra data is lost. If a FILE is shorter, it is extended and the sparse extended part (hole) reads as zero bytes. Mandatory arguments to long options are mandatory for short options too. -c, --no-create do not create any files -o, --io-blocks treat SIZE as number of IO blocks instead of bytes -r, --reference=RFILE base size on RFILE -s, --size=SIZE set or adjust the file size by SIZE bytes --help display this help and exit --version output version information and exit The SIZE argument is an integer and optional unit (example: 10K is 10*1024). Units are K,M,G,T,P,E,Z,Y,R,Q (powers of 1024) or KB,MB,... (powers of 1000). Binary prefixes can be used, too: KiB=K, MiB=M, and so on. SIZE may also be prefixed by one of the following modifying characters: '+' extend by, '-' reduce by, '<' at most, '>' at least, '/' round down to multiple of, '%' round up to multiple of.
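A few sketches of the SIZE modifiers (the file name is invented); note that < and > must be quoted to hide them from the shell:

    truncate -s 1G disk.img       # create the file, or resize it to exactly 1 GiB
    truncate -s %4K disk.img      # round the size up to a multiple of 4 KiB
    truncate -s '<100M' disk.img  # shrink to at most 100 MiB; smaller files are left alone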
# truncate

> Shrink or extend the size of a file to the specified size. More information:
> https://www.gnu.org/software/coreutils/truncate.

* Set the size of an existing file to 10 GiB, or create a new file with the specified size:

`truncate --size {{10G}} {{filename}}`

* Extend the file size by 50 MiB, fill with holes (which reads as zero bytes):

`truncate --size +{{50M}} {{filename}}`

* Shrink the file by 2 GiB, by removing data from the end of file:

`truncate --size -{{2G}} {{filename}}`

* Empty the file's content:

`truncate --size 0 {{filename}}`

* Empty the file's content, but do not create the file if it does not exist:

`truncate --no-create --size 0 {{filename}}`
git-check-attr
For every pathname, this command will list if each attribute is unspecified, set, or unset as a gitattribute on that pathname. -a, --all List all attributes that are associated with the specified paths. If this option is used, then unspecified attributes will not be included in the output. --cached Consider .gitattributes in the index only, ignoring the working tree. --stdin Read pathnames from the standard input, one per line, instead of from the command-line. -z The output format is modified to be machine-parsable. If --stdin is also given, input paths are separated with a NUL character instead of a linefeed character. --source=<tree-ish> Check attributes against the specified tree-ish. It is common to specify the source tree by naming a commit, branch or tag associated with it. -- Interpret all preceding arguments as attributes and all following arguments as path names. If none of --stdin, --all, or -- is used, the first argument will be treated as an attribute and the rest of the arguments as pathnames.
# git check-attr

> For every pathname, list if each attribute is unspecified, set, or unset as
> a gitattribute on that pathname. More information:
> https://git-scm.com/docs/git-check-attr.

* Check the values of all attributes on a file:

`git check-attr --all {{path/to/file}}`

* Check the value of a specific attribute on a file:

`git check-attr {{attribute}} {{path/to/file}}`

* Check the values of all attributes on multiple files:

`git check-attr --all {{path/to/file1}} {{path/to/file2}}`

* Check the value of a specific attribute on one or more files:

`git check-attr {{attribute}} {{path/to/file1}} {{path/to/file2}}`
tr
The tr utility shall copy the standard input to the standard output with substitution or deletion of selected characters. The options specified and the string1 and string2 operands shall control translations that occur while copying characters and single-character collating elements. The tr utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -c Complement the set of values specified by string1. See the EXTENDED DESCRIPTION section. -C Complement the set of characters specified by string1. See the EXTENDED DESCRIPTION section. -d Delete all occurrences of input characters that are specified by string1. -s Replace instances of repeated characters with a single character, as described in the EXTENDED DESCRIPTION section.
# tr > Translate characters: run replacements based on single characters and > character sets. More information: https://www.gnu.org/software/coreutils/tr. * Replace all occurrences of a character in a file, and print the result: `tr {{find_character}} {{replace_character}} < {{path/to/file}}` * Replace all occurrences of a character from another command's output: `echo {{text}} | tr {{find_character}} {{replace_character}}` * Map each character of the first set to the corresponding character of the second set: `tr '{{abcd}}' '{{jkmn}}' < {{path/to/file}}` * Delete all occurrences of the specified set of characters from the input: `tr -d '{{input_characters}}' < {{path/to/file}}` * Compress a series of identical characters to a single character: `tr -s '{{input_characters}}' < {{path/to/file}}` * Translate the contents of a file to upper-case: `tr "[:lower:]" "[:upper:]" < {{path/to/file}}` * Strip out non-printable characters from a file: `tr -cd "[:print:]" < {{path/to/file}}`
cp
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY. Mandatory arguments to long options are mandatory for short options too. -a, --archive same as -dR --preserve=all --attributes-only don't copy the file data, just the attributes --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument --copy-contents copy contents of special files when recursive -d same as --no-dereference --preserve=links --debug explain how a file is copied. Implies -v -f, --force if an existing destination file cannot be opened, remove it and try again (this option is ignored when the -n option is also used) -i, --interactive prompt before overwrite (overrides a previous -n option) -H follow command-line symbolic links in SOURCE -l, --link hard link files instead of copying -L, --dereference always follow symbolic links in SOURCE -n, --no-clobber do not overwrite an existing file (overrides a -u or previous -i option). See also --update -P, --no-dereference never follow symbolic links in SOURCE -p same as --preserve=mode,ownership,timestamps --preserve[=ATTR_LIST] preserve the specified attributes --no-preserve=ATTR_LIST don't preserve the specified attributes --parents use full source file name under DIRECTORY -R, -r, --recursive copy directories recursively --reflink[=WHEN] control clone/CoW copies. See below --remove-destination remove each existing destination file before attempting to open it (contrast with --force) --sparse=WHEN control creation of sparse files. See below --strip-trailing-slashes remove any trailing slashes from each SOURCE argument -s, --symbolic-link make symbolic links instead of copying -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file --update[=UPDATE] control which existing files are updated; UPDATE={all,none,older(default)}. See below -u equivalent to --update[=older] -v, --verbose explain what is being done -x, --one-file-system stay on this file system -Z set SELinux security context of destination file to default type --context[=CTX] like -Z, or if CTX is specified then set the SELinux or SMACK security context to CTX --help display this help and exit --version output version information and exit ATTR_LIST is a comma-separated list of attributes. Attributes are 'mode' for permissions (including any ACL and xattr permissions), 'ownership' for user and group, 'timestamps' for file timestamps, 'links' for hard links, 'context' for security context, 'xattr' for extended attributes, and 'all' for all attributes. By default, sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well. That is the behavior selected by --sparse=auto. Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes. Use --sparse=never to inhibit creation of sparse files. UPDATE controls which existing files in the destination are replaced. 'all' is the default operation when an --update option is not specified, and results in all existing files in the destination being replaced. 'none' is similar to the --no-clobber option, in that no files in the destination are replaced, but also skipped files do not induce a failure. 'older' is the default operation when --update is specified, and results in files being replaced if they're older than the corresponding source file. 
When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified. If this is not possible the copy fails, or if --reflink=auto is specified, fall back to a standard copy. Use --reflink=never to ensure a standard copy is performed. The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups As a special case, cp makes a backup of SOURCE when the force and backup options are given and SOURCE and DEST are the same name for an existing, regular file.
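Two sketches of the behaviors just described (the file names are invented):

    # Attempt a copy-on-write clone, falling back to a standard copy if unsupported:
    cp --reflink=auto large.img copy.img
    # Keep numbered backups (notes.txt.~1~, ...) of any overwritten destination:
    cp --backup=numbered notes.txt backups/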
# cp > Copy files and directories. More information: > https://www.gnu.org/software/coreutils/cp. * Copy a file to another location: `cp {{path/to/source_file.ext}} {{path/to/target_file.ext}}` * Copy a file into another directory, keeping the filename: `cp {{path/to/source_file.ext}} {{path/to/target_parent_directory}}` * Recursively copy a directory's contents to another location (if the destination exists, the directory is copied inside it): `cp -R {{path/to/source_directory}} {{path/to/target_directory}}` * Copy a directory recursively, in verbose mode (shows files as they are copied): `cp -vR {{path/to/source_directory}} {{path/to/target_directory}}` * Copy multiple files at once to a directory: `cp -t {{path/to/destination_directory}} {{path/to/file1 path/to/file2 ...}}` * Copy text files to another location, in interactive mode (prompts user before overwriting): `cp -i {{*.txt}} {{path/to/target_directory}}` * Follow symbolic links before copying: `cp -L {{link}} {{path/to/target_directory}}` * Use the first argument as the destination directory (useful for `xargs ... | cp -t <DEST_DIR>`): `cp -t {{path/to/target_directory}} {{path/to/file_or_directory1 path/to/file_or_directory2 ...}}`
git-push
Updates remote refs using local refs, while sending objects necessary to complete the given refs. You can make interesting things happen to a repository every time you push into it, by setting up hooks there. See documentation for git-receive-pack(1). When the command line does not specify where to push with the <repository> argument, branch.*.remote configuration for the current branch is consulted to determine where to push. If the configuration is missing, it defaults to origin. When the command line does not specify what to push with <refspec>... arguments or --all, --mirror, --tags options, the command finds the default <refspec> by consulting remote.*.push configuration, and if it is not found, honors push.default configuration to decide what to push (see git-config(1) for the meaning of push.default). When neither the command-line nor the configuration specify what to push, the default behavior is used, which corresponds to the simple value for push.default: the current branch is pushed to the corresponding upstream branch, but as a safety measure, the push is aborted if the upstream branch does not have the same name as the local one. <repository> The "remote" repository that is the destination of a push operation. This parameter can be either a URL (see the section GIT URLS below) or the name of a remote (see the section REMOTES below). <refspec>... Specify what destination ref to update with what source object. The format of a <refspec> parameter is an optional plus +, followed by the source object <src>, followed by a colon :, followed by the destination ref <dst>. The <src> is often the name of the branch you would want to push, but it can be any arbitrary "SHA-1 expression", such as master~4 or HEAD (see gitrevisions(7)). The <dst> tells which ref on the remote side is updated with this push. Arbitrary expressions cannot be used here; an actual ref must be named. If git push [<repository>] is configured, via the remote.<repository>.push configuration variable, to update some ref at the destination with <src>, the :<dst> part can be omitted—such a push will update the ref that <src> normally updates even without any <refspec> on the command line. Otherwise, a missing :<dst> means to update the same ref as the <src>. If <dst> doesn’t start with refs/ (e.g. refs/heads/master) we will try to infer where in refs/* on the destination <repository> it belongs based on the type of <src> being pushed and whether <dst> is ambiguous. • If <dst> unambiguously refers to a ref on the <repository> remote, then push to that ref. • If <src> resolves to a ref starting with refs/heads/ or refs/tags/, then prepend that to <dst>. • Other ambiguity resolutions might be added in the future, but for now any other cases will error out with an error indicating what we tried, and depending on the advice.pushUnqualifiedRefname configuration (see git-config(1)) suggest what refs/ namespace you may have wanted to push to. The object referenced by <src> is used to update the <dst> reference on the remote side. Whether this is allowed depends on where in refs/* the <dst> reference lives, as described in detail below; in those sections "update" means any modifications except deletes, which, as noted after the next few sections, are treated differently. The refs/heads/* namespace will only accept commit objects, and updates only if they can be fast-forwarded. The refs/tags/* namespace will accept any kind of object (as commits, trees and blobs can be tagged), and any updates to them will be rejected.
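To make the refspec forms concrete, a few illustrative commands (the remote and branch names are hypothetical):

git push origin HEAD                          # no :<dst>: update the same-named ref on the remote
git push origin master~4:refs/heads/archive   # any SHA-1 expression can serve as <src>
git push origin +dev                          # the leading + permits a non-fast-forward update of dev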
It’s possible to push any type of object to any namespace outside of refs/{tags,heads}/*. In the case of tags and commits, these will be treated as if they were the commits inside refs/heads/* for the purposes of whether the update is allowed. I.e. a fast-forward of commits and tags outside refs/{tags,heads}/* is allowed, even in cases where what’s being fast-forwarded is not a commit, but a tag object which happens to point to a new commit which is a fast-forward of the commit the last tag (or commit) it’s replacing. Replacing a tag with an entirely different tag is also allowed, if it points to the same commit, as well as pushing a peeled tag, i.e. pushing the commit that the existing tag object points to, or a new tag object pointing to an existing commit. Tree and blob objects outside of refs/{tags,heads}/* will be treated the same way as if they were inside refs/tags/*; any update of them will be rejected. All of the rules described above about what’s not allowed as an update can be overridden by adding the optional leading + to a refspec (or using the --force command line option). The only exception to this is that no amount of forcing will make the refs/heads/* namespace accept a non-commit object. Hooks and configuration can also override or amend these rules, see e.g. receive.denyNonFastForwards in git-config(1) and pre-receive and update in githooks(5). Pushing an empty <src> allows you to delete the <dst> ref from the remote repository. Deletions are always accepted without a leading + in the refspec (or --force), except when forbidden by configuration or hooks. See receive.denyDeletes in git-config(1) and pre-receive and update in githooks(5). The special refspec : (or +: to allow non-fast-forward updates) directs Git to push "matching" branches: for every branch that exists on the local side, the remote side is updated if a branch of the same name already exists on the remote side. tag <tag> means the same as refs/tags/<tag>:refs/tags/<tag>. --all, --branches Push all branches (i.e. refs under refs/heads/); cannot be used with other <refspec>. --prune Remove remote branches that don’t have a local counterpart. For example a remote branch tmp will be removed if a local branch with the same name doesn’t exist any more. This also respects refspecs, e.g. git push --prune remote refs/heads/*:refs/tmp/* would make sure that remote refs/tmp/foo will be removed if refs/heads/foo doesn’t exist. --mirror Instead of naming each ref to push, specifies that all refs under refs/ (which includes but is not limited to refs/heads/, refs/remotes/, and refs/tags/) be mirrored to the remote repository. Newly created local refs will be pushed to the remote end, locally updated refs will be force updated on the remote end, and deleted refs will be removed from the remote end. This is the default if the configuration option remote.<remote>.mirror is set. -n, --dry-run Do everything except actually send the updates. --porcelain Produce machine-readable output. The output status line for each ref will be tab-separated and sent to stdout instead of stderr. The full symbolic names of the refs will be given. -d, --delete All listed refs are deleted from the remote repository. This is the same as prefixing all refs with a colon. --tags All refs under refs/tags are pushed, in addition to refspecs explicitly listed on the command line.
--follow-tags Push all the refs that would be pushed without this option, and also push annotated tags in refs/tags that are missing from the remote but are pointing at commit-ish that are reachable from the refs being pushed. This can also be specified with configuration variable push.followTags. For more information, see push.followTags in git-config(1). --[no-]signed, --signed=(true|false|if-asked) GPG-sign the push request to update refs on the receiving side, to allow it to be checked by the hooks and/or be logged. If false or --no-signed, no signing will be attempted. If true or --signed, the push will fail if the server does not support signed pushes. If set to if-asked, sign if and only if the server supports signed pushes. The push will also fail if the actual call to gpg --sign fails. See git-receive-pack(1) for the details on the receiving end. --[no-]atomic Use an atomic transaction on the remote side if available. Either all refs are updated, or on error, no refs are updated. If the server does not support atomic pushes the push will fail. -o <option>, --push-option=<option> Transmit the given string to the server, which passes it to the pre-receive as well as the post-receive hook. The given string must not contain a NUL or LF character. When multiple --push-option=<option> are given, they are all sent to the other side in the order listed on the command line. When no --push-option=<option> is given from the command line, the values of configuration variable push.pushOption are used instead. --receive-pack=<git-receive-pack>, --exec=<git-receive-pack> Path to the git-receive-pack program on the remote end. Sometimes useful when pushing to a remote repository over ssh, and you do not have the program in a directory on the default $PATH. --[no-]force-with-lease, --force-with-lease=<refname>, --force-with-lease=<refname>:<expect> Usually, "git push" refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it. This option overrides this restriction if the current value of the remote ref is the expected value. "git push" fails otherwise. Imagine that you have to rebase what you have already published. You will have to bypass the "must fast-forward" rule in order to replace the history you originally published with the rebased history. If somebody else built on top of your original history while you are rebasing, the tip of the branch at the remote may advance with their commit, and blindly pushing with --force will lose their work. This option allows you to say that you expect the history you are updating is what you rebased and want to replace. If the remote ref still points at the commit you specified, you can be sure that no other people did anything to the ref. It is like taking a "lease" on the ref without explicitly locking it, and the remote ref is updated only if the "lease" is still valid. --force-with-lease alone, without specifying the details, will protect all remote refs that are going to be updated by requiring their current value to be the same as the remote-tracking branch we have for them. --force-with-lease=<refname>, without specifying the expected value, will protect the named ref (alone), if it is going to be updated, by requiring its current value to be the same as the remote-tracking branch we have for it.
--force-with-lease=<refname>:<expect> will protect the named ref (alone), if it is going to be updated, by requiring its current value to be the same as the specified value <expect> (which is allowed to be different from the remote-tracking branch we have for the refname, or we do not even have to have such a remote-tracking branch when this form is used). If <expect> is the empty string, then the named ref must not already exist. Note that all forms other than --force-with-lease=<refname>:<expect> that specifies the expected current value of the ref explicitly are still experimental and their semantics may change as we gain experience with this feature. "--no-force-with-lease" will cancel all the previous --force-with-lease on the command line. A general note on safety: supplying this option without an expected value, i.e. as --force-with-lease or --force-with-lease=<refname> interacts very badly with anything that implicitly runs git fetch on the remote to be pushed to in the background, e.g. git fetch origin on your repository in a cronjob. The protection it offers over --force is ensuring that subsequent changes your work wasn’t based on aren’t clobbered, but this is trivially defeated if some background process is updating refs in the background. We don’t have anything except the remote tracking info to go by as a heuristic for refs you’re expected to have seen & are willing to clobber. If your editor or some other system is running git fetch in the background for you a way to mitigate this is to simply set up another remote: git remote add origin-push $(git config remote.origin.url) git fetch origin-push Now when the background process runs git fetch origin the references on origin-push won’t be updated, and thus commands like: git push --force-with-lease origin-push Will fail unless you manually run git fetch origin-push. This method is of course entirely defeated by something that runs git fetch --all, in that case you’d need to either disable it or do something more tedious like: git fetch # update 'master' from remote git tag base master # mark our base point git rebase -i master # rewrite some commits git push --force-with-lease=master:base master:master I.e. create a base tag for versions of the upstream code that you’ve seen and are willing to overwrite, then rewrite history, and finally force push changes to master if the remote version is still at base, regardless of what your local remotes/origin/master has been updated to in the background. Alternatively, specifying --force-if-includes as an ancillary option along with --force-with-lease[=<refname>] (i.e., without saying what exact commit the ref on the remote side must be pointing at, or which refs on the remote side are being protected) at the time of "push" will verify if updates from the remote-tracking refs that may have been implicitly updated in the background are integrated locally before allowing a forced update. -f, --force Usually, the command refuses to update a remote ref that is not an ancestor of the local ref used to overwrite it. Also, when --force-with-lease option is used, the command refuses to update a remote ref whose current value does not match what is expected. This flag disables these checks, and can cause the remote repository to lose commits; use it with care. 
Note that --force applies to all the refs that are pushed, hence using it with push.default set to matching or with multiple push destinations configured with remote.*.push may overwrite refs other than the current branch (including local refs that are strictly behind their remote counterpart). To force a push to only one branch, use a + in front of the refspec to push (e.g. git push origin +master to force a push to the master branch). See the <refspec>... section above for details. --[no-]force-if-includes Force an update only if the tip of the remote-tracking ref has been integrated locally. This option enables a check that verifies if the tip of the remote-tracking ref is reachable from one of the "reflog" entries of the local branch based in it for a rewrite. The check ensures that any updates from the remote have been incorporated locally by rejecting the forced update if that is not the case. If the option is passed without specifying --force-with-lease, or specified along with --force-with-lease=<refname>:<expect>, it is a "no-op". Specifying --no-force-if-includes disables this behavior. --repo=<repository> This option is equivalent to the <repository> argument. If both are specified, the command-line argument takes precedence. -u, --set-upstream For every branch that is up to date or successfully pushed, add an upstream (tracking) reference, used by argument-less git-pull(1) and other commands. For more information, see branch.<name>.merge in git-config(1). --[no-]thin These options are passed to git-send-pack(1). A thin transfer significantly reduces the amount of sent data when the sender and receiver share many of the same objects in common. The default is --thin. -q, --quiet Suppress all output, including the listing of updated refs, unless an error occurs. Progress is not reported to the standard error stream. -v, --verbose Run verbosely. --progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless -q is specified. This flag forces progress status even if the standard error stream is not directed to a terminal. --no-recurse-submodules, --recurse-submodules=check|on-demand|only|no May be used to make sure all submodule commits used by the revisions to be pushed are available on a remote-tracking branch. If check is used Git will verify that all submodule commits that changed in the revisions to be pushed are available on at least one remote of the submodule. If any commits are missing the push will be aborted and exit with non-zero status. If on-demand is used all submodules that changed in the revisions to be pushed will be pushed. If on-demand was not able to push all necessary revisions it will also be aborted and exit with non-zero status. If only is used all submodules will be pushed while the superproject is left unpushed. A value of no or using --no-recurse-submodules can be used to override the push.recurseSubmodules configuration variable when no submodule recursion is required. When using on-demand or only, if a submodule has a "push.recurseSubmodules={on-demand,only}" or "submodule.recurse" configuration, further recursion will occur. In this case, "only" is treated as "on-demand". --[no-]verify Toggle the pre-push hook (see githooks(5)). The default is --verify, giving the hook a chance to prevent the push. With --no-verify, the hook is bypassed completely. -4, --ipv4 Use IPv4 addresses only, ignoring IPv6 addresses. -6, --ipv6 Use IPv6 addresses only, ignoring IPv4 addresses.
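A few hedged examples of the options above (remote names, branch names, and the push-option string are hypothetical; ci.skip is only meaningful if the server's hooks recognize it):

git push origin --delete topic              # remove refs/heads/topic from the remote
git push --follow-tags origin main          # also send annotated tags reachable from the pushed refs
git push --atomic origin main release/1.4   # either both refs update or neither does
git push -u origin feature/login            # record the upstream for later argument-less git pull/push
git push --push-option=ci.skip origin main  # the string is handed to the server's receive hooks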
# git push > Push commits to a remote repository. More information: https://git- > scm.com/docs/git-push. * Send local changes in the current branch to its default remote counterpart: `git push` * Send changes from a specific local branch to its remote counterpart: `git push {{remote_name}} {{local_branch}}` * Send changes from a specific local branch to its remote counterpart, and set the remote one as the default push/pull target of the local one: `git push -u {{remote_name}} {{local_branch}}` * Send changes from a specific local branch to a specific remote branch: `git push {{remote_name}} {{local_branch}}:{{remote_branch}}` * Send changes on all local branches to their counterparts in a given remote repository: `git push --all {{remote_name}}` * Delete a branch in a remote repository: `git push {{remote_name}} --delete {{remote_branch}}` * Remove remote branches that don't have a local counterpart: `git push --prune {{remote_name}}` * Publish tags that aren't yet in the remote repository: `git push --tags`
lpstat
lpstat displays status information about the current classes, jobs, and printers. When run with no arguments, lpstat will list active jobs queued by the current user. The lpstat command supports the following options: -E Forces encryption when connecting to the server. -H Shows the server hostname and port. -R Shows the ranking of print jobs. -U username Specifies an alternate username. -W which-jobs Specifies which jobs to show, "completed" or "not-completed" (the default). This option must appear before the -o option and/or any printer names, otherwise the default ("not-completed") value will be used in the request to the scheduler. -a [printer(s)] Shows the accepting state of printer queues. If no printers are specified then all printers are listed. -c [class(es)] Shows the printer classes and the printers that belong to them. If no classes are specified then all classes are listed. -d Shows the current default destination. -e Shows all available destinations on the local network. -h server[:port] Specifies an alternate server. -l Shows a long listing of printers, classes, or jobs. -o [destination(s)] Shows the jobs queued on the specified destinations. If no destinations are specified all jobs are shown. -p [printer(s)] Shows the printers and whether they are enabled for printing. If no printers are specified then all printers are listed. -r Shows whether the CUPS server is running. -s Shows a status summary, including the default destination, a list of classes and their member printers, and a list of printers and their associated devices. This is equivalent to using the -d, -c, and -v options. -t Shows all status information. This is equivalent to using the -r, -d, -c, -v, -a, -p, and -o options. -u [user(s)] Shows a list of print jobs queued by the specified users. If no users are specified, lists the jobs queued by the current user. -v [printer(s)] Shows the printers and what device they are attached to. If no printers are specified then all printers are listed.
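For example (the printer and server names are hypothetical):

lpstat -W completed -o office_laser   # completed jobs on one destination; note -W precedes -o
lpstat -h printserver:631 -p -d       # printer states and the default destination on another server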
# lpstat > Display status information about the current classes, jobs, and printers. > More information: https://ss64.com/osx/lpstat.html. * Show a long listing of printers, classes, and jobs: `lpstat -l` * Force encryption when connecting to the CUPS server: `lpstat -E` * Show the ranking of print jobs: `lpstat -R` * Show whether or not the CUPS server is running: `lpstat -r` * Show all status information: `lpstat -t`
find
This manual page documents the GNU version of find. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence (see section OPERATORS), until the outcome is known (the left hand side is false for and operations, true for or), at which point find moves on to the next file name. If no starting-point is specified, `.' is assumed. If you are using find in an environment where security is important (for example if you are using it to search directories that are writable by other users), you should read the `Security Considerations' chapter of the findutils documentation, which is called Finding Files and comes with findutils. That document also includes a lot more detail and discussion than this manual page, so you may find it a more useful source of information. The -H, -L and -P options control the treatment of symbolic links. Command-line arguments following these are taken to be names of files or directories to be examined, up to the first argument that begins with `-', or the argument `(' or `!'. That argument and any following arguments are taken to be the expression describing what is to be searched for. If no paths are given, the current directory is used. If no expression is given, the expression -print is used (but you should probably consider using -print0 instead, anyway). This manual page talks about `options' within the expression list. These options control the behaviour of find but are specified immediately after the last path name. The five `real' options -H, -L, -P, -D and -O must appear before the first path name, if at all. A double dash -- could theoretically be used to signal that any remaining arguments are not options, but this does not really work due to the way find determines the end of the path arguments: it reads until an expression argument comes (which also starts with a `-'). Now, if a path argument would start with a `-', find would treat it as an expression argument instead. Thus, to ensure that all start points are taken as such, and especially to prevent wildcard patterns expanded by the calling shell from being mistakenly treated as expression arguments, it is generally safer either to prefix wildcards or dubious path names with `./' or to use absolute path names starting with `/'. Alternatively, it is generally safe though non-portable to use the GNU option -files0-from to pass arbitrary starting points to find. -P Never follow symbolic links. This is the default behaviour. When find examines or prints information about files, and the file is a symbolic link, the information used shall be taken from the properties of the symbolic link itself. -L Follow symbolic links. When find examines or prints information about files, the information used shall be taken from the properties of the file to which the link points, not from the link itself (unless it is a broken symbolic link or find is unable to examine the file to which the link points). Use of this option implies -noleaf. If you later use the -P option, -noleaf will still be in effect. If -L is in effect and find discovers a symbolic link to a subdirectory during its search, the subdirectory pointed to by the symbolic link will be searched. When the -L option is in effect, the -type predicate will always match against the type of the file that a symbolic link points to rather than the link itself (unless the symbolic link is broken).
Actions that can cause symbolic links to become broken while find is executing (for example -delete) can give rise to confusing behaviour. Using -L causes the -lname and -ilname predicates always to return false. -H Do not follow symbolic links, except while processing the command line arguments. When find examines or prints information about files, the information used shall be taken from the properties of the symbolic link itself. The only exception to this behaviour is when a file specified on the command line is a symbolic link, and the link can be resolved. For that situation, the information used is taken from whatever the link points to (that is, the link is followed). The information about the link itself is used as a fallback if the file pointed to by the symbolic link cannot be examined. If -H is in effect and one of the paths specified on the command line is a symbolic link to a directory, the contents of that directory will be examined (though of course -maxdepth 0 would prevent this). If more than one of -H, -L and -P is specified, each overrides the others; the last one appearing on the command line takes effect. Since it is the default, the -P option should be considered to be in effect unless either -H or -L is specified. GNU find frequently stats files during the processing of the command line itself, before any searching has begun. These options also affect how those arguments are processed. Specifically, there are a number of tests that compare files listed on the command line against a file we are currently considering. In each case, the file specified on the command line will have been examined and some of its properties will have been saved. If the named file is in fact a symbolic link, and the -P option is in effect (or if neither -H nor -L were specified), the information used for the comparison will be taken from the properties of the symbolic link. Otherwise, it will be taken from the properties of the file the link points to. If find cannot follow the link (for example because it has insufficient privileges or the link points to a nonexistent file) the properties of the link itself will be used. When the -H or -L options are in effect, any symbolic links listed as the argument of -newer will be dereferenced, and the timestamp will be taken from the file to which the symbolic link points. The same consideration applies to -newerXY, -anewer and -cnewer. The -follow option has a similar effect to -L, though it takes effect at the point where it appears (that is, if -L is not used but -follow is, any symbolic links appearing after -follow on the command line will be dereferenced, and those before it will not). -D debugopts Print diagnostic information; this can be helpful to diagnose problems with why find is not doing what you want. The list of debug options should be comma separated. Compatibility of the debug options is not guaranteed between releases of findutils. For a complete list of valid debug options, see the output of find -D help. Valid debug options include exec Show diagnostic information relating to -exec, -execdir, -ok and -okdir opt Prints diagnostic information relating to the optimisation of the expression tree; see the -O option. rates Prints a summary indicating how often each predicate succeeded or failed. search Navigate the directory tree verbosely. stat Print messages as files are examined with the stat and lstat system calls. The find program tries to minimise such calls. tree Show the expression tree in its original and optimised form. 
all Enable all of the other debug options (except help). help Explain the debugging options. -Olevel Enables query optimisation. The find program reorders tests to speed up execution while preserving the overall effect; that is, predicates with side effects are not reordered relative to each other. The optimisations performed at each optimisation level are as follows. 0 Equivalent to optimisation level 1. 1 This is the default optimisation level and corresponds to the traditional behaviour. Expressions are reordered so that tests based only on the names of files (for example -name and -regex) are performed first. 2 Any -type or -xtype tests are performed after any tests based only on the names of files, but before any tests that require information from the inode. On many modern versions of Unix, file types are returned by readdir() and so these predicates are faster to evaluate than predicates which need to stat the file first. If you use the -fstype FOO predicate and specify a filesystem type FOO which is not known (that is, not present in `/etc/mtab') at the time find starts, that predicate is equivalent to -false. 3 At this optimisation level, the full cost-based query optimiser is enabled. The order of tests is modified so that cheap (i.e. fast) tests are performed first and more expensive ones are performed later, if necessary. Within each cost band, predicates are evaluated earlier or later according to whether they are likely to succeed or not. For -o, predicates which are likely to succeed are evaluated earlier, and for -a, predicates which are likely to fail are evaluated earlier. The cost-based optimiser has a fixed idea of how likely any given test is to succeed. In some cases the probability takes account of the specific nature of the test (for example, -type f is assumed to be more likely to succeed than -type c). The cost-based optimiser is currently being evaluated. If it does not actually improve the performance of find, it will be removed again. Conversely, optimisations that prove to be reliable, robust and effective may be enabled at lower optimisation levels over time. However, the default behaviour (i.e. optimisation level 1) will not be changed in the 4.3.x release series. The findutils test suite runs all the tests on find at each optimisation level and ensures that the result is the same.
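A few hedged examples tying the above together (paths and patterns are placeholders; remember that -D and -O must precede the first path name):

find -O3 /usr -name '*.so' -type f   # let the cost-based optimiser reorder the tests
find -D rates /tmp -name 'core*'     # report how often each predicate succeeded or failed
find ./*.d -name '*.conf'            # the shell expands the glob; the ./ prefix keeps each result a start point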
# find > Find files or directories under the given directory tree, recursively. More > information: https://manned.org/find. * Find files by extension: `find {{root_path}} -name '{{*.ext}}'` * Find files matching multiple path/name patterns: `find {{root_path}} -path '{{**/path/**/*.ext}}' -or -name '{{*pattern*}}'` * Find directories matching a given name, in case-insensitive mode: `find {{root_path}} -type d -iname '{{*lib*}}'` * Find files matching a given pattern, excluding specific paths: `find {{root_path}} -name '{{*.py}}' -not -path '{{*/site-packages/*}}'` * Find files matching a given size range, limiting the recursive depth to "1": `find {{root_path}} -maxdepth 1 -size {{+500k}} -size {{-10M}}` * Run a command for each file (use `{}` within the command to access the filename): `find {{root_path}} -name '{{*.ext}}' -exec {{wc -l {} }}\;` * Find files modified in the last 7 days: `find {{root_path}} -daystart -mtime -{{7}}` * Find empty (0 byte) files and delete them: `find {{root_path}} -type {{f}} -empty -delete`
flock
This utility manages flock(2) locks from within shell scripts or from the command line. The first and second of the above forms wrap the lock around the execution of a command, in a manner similar to su(1) or newgrp(1). They lock a specified file or directory, which is created (assuming appropriate permissions) if it does not already exist. By default, if the lock cannot be immediately acquired, flock waits until the lock is available. The third form uses an open file by its file descriptor number. See the examples below for how that can be used. -c, --command command Pass a single command, without arguments, to the shell with -c. -E, --conflict-exit-code number The exit status used when the -n option is in use, and the conflicting lock exists, or the -w option is in use, and the timeout is reached. The default value is 1. The number has to be in the range of 0 to 255. -F, --no-fork Do not fork before executing command. Upon execution the flock process is replaced by command which continues to hold the lock. This option is incompatible with --close as there would otherwise be nothing left to hold the lock. -e, -x, --exclusive Obtain an exclusive lock, sometimes called a write lock. This is the default. -n, --nb, --nonblock Fail rather than wait if the lock cannot be immediately acquired. See the -E option for the exit status used. -o, --close Close the file descriptor on which the lock is held before executing command. This is useful if command spawns a child process which should not be holding the lock. -s, --shared Obtain a shared lock, sometimes called a read lock. -u, --unlock Drop a lock. This is usually not required, since a lock is automatically dropped when the file is closed. However, it may be required in special cases, for example if the enclosed command group may have forked a background process which should not be holding the lock. -w, --wait, --timeout seconds Fail if the lock cannot be acquired within seconds. Decimal fractional values are allowed. See the -E option for the exit status used. The zero number of seconds is interpreted as --nonblock. --verbose Report how long it took to acquire the lock, or why the lock could not be obtained. -h, --help Display help text and exit. -V, --version Print version and exit.
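A brief sketch of the first and third forms (the lock paths are hypothetical):

flock /tmp/job.lock -c "echo critical section"   # wrap a single command in an exclusive lock

(
  flock -n 9 || exit 1                           # lock file descriptor 9; fail immediately if already held
  echo "critical section"
) 9>/var/lock/job.lock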
# flock > Manage locks from shell scripts. It can be used to ensure that only one > process of a command is running. More information: https://manned.org/flock. * Run a command with a file lock as soon as the lock is not required by others: `flock {{path/to/lock.lock}} --command "{{command}}"` * Run a command with a file lock, and exit immediately if the lock cannot be acquired: `flock {{path/to/lock.lock}} --nonblock --command "{{command}}"` * Run a command with a file lock, and exit with a specific error code if the lock cannot be acquired: `flock {{path/to/lock.lock}} --nonblock --conflict-exit-code {{error_code}} -c "{{command}}"`
ssh-add
ssh-add adds private key identities to the authentication agent, ssh-agent(1). When run without arguments, it adds the files ~/.ssh/id_rsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk, and ~/.ssh/id_dsa. After loading a private key, ssh-add will try to load corresponding certificate information from the filename obtained by appending -cert.pub to the name of the private key file. Alternative file names can be given on the command line. If any file requires a passphrase, ssh-add asks for the passphrase from the user. The passphrase is read from the user's tty. ssh-add retries the last passphrase if multiple identity files are given. The authentication agent must be running and the SSH_AUTH_SOCK environment variable must contain the name of its socket for ssh-add to work. The options are as follows: -c Indicates that added identities should be subject to confirmation before being used for authentication. Confirmation is performed by ssh-askpass(1). Successful confirmation is signaled by a zero exit status from ssh-askpass(1), rather than text entered into the requester. -D Deletes all identities from the agent. -d Instead of adding identities, removes identities from the agent. If ssh-add has been run without arguments, the keys for the default identities and their corresponding certificates will be removed. Otherwise, the argument list will be interpreted as a list of paths to public key files to specify keys and certificates to be removed from the agent. If no public key is found at a given path, ssh-add will append .pub and retry. If the argument list consists of “-” then ssh-add will read public keys to be removed from standard input. -E fingerprint_hash Specifies the hash algorithm used when displaying key fingerprints. Valid options are: “md5” and “sha256”. The default is “sha256”. -e pkcs11 Remove keys provided by the PKCS#11 shared library pkcs11. -H hostkey_file Specifies a known hosts file to look up hostkeys when using destination-constrained keys via the -h flag. This option may be specified multiple times to allow multiple files to be searched. If no files are specified, ssh-add will use the default ssh_config(5) known hosts files: ~/.ssh/known_hosts, ~/.ssh/known_hosts2, /etc/ssh/ssh_known_hosts, and /etc/ssh/ssh_known_hosts2. -h destination_constraint When adding keys, constrain them to be usable only through specific hosts or to specific destinations. Destination constraints of the form ‘[user@]dest-hostname’ permit use of the key only from the origin host (the one running ssh-agent(1)) to the listed destination host, with optional user name. Constraints of the form ‘src-hostname>[user@]dst-hostname’ allow a key available on a forwarded ssh-agent(1) to be used through a particular host (as specified by ‘src-hostname’) to authenticate to a further host, specified by ‘dst-hostname’. Multiple destination constraints may be added when loading keys. When attempting authentication with a key that has destination constraints, the whole connection path, including ssh-agent(1) forwarding, is tested against those constraints and each hop must be permitted for the attempt to succeed. For example, if key is forwarded to a remote host, ‘host-b’, and is attempting authentication to another host, ‘host-c’, then the operation will be successful only if ‘host-b’ was permitted from the origin host and the subsequent ‘host-b>host-c’ hop is also permitted by destination constraints. 
Hosts are identified by their host keys, and are looked up from known hosts files by ssh-add. Wildcard patterns may be used for hostnames, and certificate host keys are supported. By default, keys added by ssh-add are not destination constrained. Destination constraints were added in OpenSSH release 8.9. Support in both the remote SSH client and server is required when using destination-constrained keys over a forwarded ssh-agent(1) channel. It is also important to note that destination constraints can only be enforced by ssh-agent(1) when a key is used, or when it is forwarded by a cooperating ssh(1). Specifically, it does not prevent an attacker with access to a remote SSH_AUTH_SOCK from forwarding it again and using it on a different host (but only to a permitted destination). -K Load resident keys from a FIDO authenticator. -k When loading keys into or deleting keys from the agent, process plain private keys only and skip certificates. -L Lists public key parameters of all identities currently represented by the agent. -l Lists fingerprints of all identities currently represented by the agent. -q Be quiet after a successful operation. -S provider Specifies a path to a library that will be used when adding FIDO authenticator-hosted keys, overriding the default of using the internal USB HID support. -s pkcs11 Add keys provided by the PKCS#11 shared library pkcs11. -T pubkey ... Tests whether the private keys that correspond to the specified pubkey files are usable by performing sign and verify operations on each. -t life Set a maximum lifetime when adding identities to an agent. The lifetime may be specified in seconds or in a time format specified in sshd_config(5). -v Verbose mode. Causes ssh-add to print debugging messages about its progress. This is helpful in debugging problems. Multiple -v options increase the verbosity. The maximum is 3. -X Unlock the agent. -x Lock the agent with a password.
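For example (the host names and key paths are hypothetical):

ssh-add -h git.example.com ~/.ssh/id_ed25519                # usable only from this host to git.example.com
ssh-add -h "bastion>db.internal" -t 3600 ~/.ssh/id_ed25519  # also permit the forwarded bastion>db.internal hop; expire after an hour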
# ssh-add > Manage loaded ssh keys in the ssh-agent. Ensure that ssh-agent is up and > running for the keys to be loaded in it. More information: > https://man.openbsd.org/ssh-add. * Add the default ssh keys in `~/.ssh` to the ssh-agent: `ssh-add` * Add a specific key to the ssh-agent: `ssh-add {{path/to/private_key}}` * List fingerprints of currently loaded keys: `ssh-add -l` * Delete a key from the ssh-agent: `ssh-add -d {{path/to/private_key}}` * Delete all currently loaded keys from the ssh-agent: `ssh-add -D` * Add a key to the ssh-agent and store its passphrase in the keychain (macOS): `ssh-add --apple-use-keychain {{path/to/private_key}}`
git-show-branch
Shows the commit ancestry graph starting from the commits named with <rev>s or <glob>s (or all refs under refs/heads and/or refs/tags) semi-visually. It cannot show more than 29 branches and commits at a time. It uses showbranch.default multi-valued configuration items if no <rev> or <glob> is given on the command line. <rev> Arbitrary extended SHA-1 expression (see gitrevisions(7)) that typically names a branch head or a tag. <glob> A glob pattern that matches branch or tag names under refs/. For example, if you have many topic branches under refs/heads/topic, giving topic/* would show all of them. -r, --remotes Show the remote-tracking branches. -a, --all Show both remote-tracking branches and local branches. --current With this option, the command includes the current branch in the list of revs to be shown when it is not given on the command line. --topo-order By default, the branches and their commits are shown in reverse chronological order. This option makes them appear in topological order (i.e., descendant commits are shown before their parents). --date-order This option is similar to --topo-order in the sense that no parent comes before all of its children, but otherwise commits are ordered according to their commit date. --sparse By default, the output omits merges that are reachable from only one tip being shown. This option makes them visible. --more=<n> Usually the command stops output upon showing the commit that is the common ancestor of all the branches. This flag tells the command to go <n> more common commits beyond that. When <n> is negative, display only the <ref>s given, without showing the commit ancestry tree. --list Synonym to --more=-1 --merge-base Instead of showing the commit list, determine possible merge bases for the specified commits. All merge bases will be contained in all specified commits. This is different from how git-merge-base(1) handles the case of three or more commits. --independent Among the <ref>s given, display only the ones that cannot be reached from any other <ref>. --no-name Do not show naming strings for each commit. --sha1-name Instead of naming the commits using the path to reach them from heads (e.g. "master~2" to mean the grandparent of "master"), name them with the unique prefix of their object names. --topics Shows only commits that are NOT on the first branch given. This helps track topic branches by hiding any commit that is already in the main line of development. When given "git show-branch --topics master topic1 topic2", this will show the revisions given by "git rev-list ^master topic1 topic2" -g, --reflog[=<n>[,<base>]] [<ref>] Shows <n> most recent ref-log entries for the given ref. If <base> is given, <n> entries going back from that entry. <base> can be specified as count or date. When no explicit <ref> parameter is given, it defaults to the current branch (or HEAD if it is detached). --color[=<when>] Color the status sign (one of these: * ! + -) of each commit corresponding to the branch it’s in. The value must be always (the default), never, or auto. --no-color Turn off colored output, even when the configuration file gives the default to color output. Same as --color=never. Note that --more, --list, --independent and --merge-base options are mutually exclusive.
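Two illustrative invocations (the branch names are hypothetical):

git show-branch main "topic/*"    # compare main with every branch under refs/heads/topic
git show-branch --reflog=3 main   # the three most recent reflog entries for main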
# git show-branch > Show branches and their commits. More information: https://git- > scm.com/docs/git-show-branch. * Show a summary of the latest commit on a branch: `git show-branch {{branch_name|ref|commit}}` * Compare commits in the history of multiple commits or branches: `git show-branch {{branch_name|ref|commit}} {{branch_name|ref|commit}}` * Compare all remote tracking branches: `git show-branch --remotes` * Compare both local and remote tracking branches: `git show-branch --all` * List the latest commits in all branches: `git show-branch --all --list` * Compare a given branch with the current branch: `git show-branch --current {{commit|branch_name|ref}}` * Display the commit name instead of the relative name: `git show-branch --sha1-name --current {{current|branch_name|ref}}` * Keep going a given number of commits past the common ancestor: `git show-branch --more {{5}} {{commit|branch_name|ref}} {{commit|branch_name|ref}} {{...}}`
gawk
Gawk is the GNU Project's implementation of the AWK programming language. It conforms to the definition of the language in the POSIX 1003.1 standard. This version in turn is based on the description in The AWK Programming Language, by Aho, Kernighan, and Weinberger. Gawk provides the additional features found in the current version of Brian Kernighan's awk and numerous GNU-specific extensions. The command line consists of options to gawk itself, the AWK program text (if not supplied via the -f or --include options), and values to be made available in the ARGC and ARGV pre-defined AWK variables. Gawk accepts the following options. Standard options are listed first, followed by options for gawk extensions, listed alphabetically by short option. -f program-file, --file program-file Read the AWK program source from the file program-file, instead of from the first command line argument. Multiple -f options may be used. Files read with -f are treated as if they begin with an implicit @namespace "awk" statement. -F fs, --field-separator fs Use fs for the input field separator (the value of the FS predefined variable). -v var=val, --assign var=val Assign the value val to the variable var, before execution of the program begins. Such variable values are available to the BEGIN rule of an AWK program. -b, --characters-as-bytes Treat all input data as single-byte characters. The --posix option overrides this one. -c, --traditional Run in compatibility mode. In compatibility mode, gawk behaves identically to Brian Kernighan's awk; none of the GNU-specific extensions are recognized. -C, --copyright Print the short version of the GNU copyright information message on the standard output and exit successfully. -d[file], --dump-variables[=file] Print a sorted list of global variables, their types and final values to file. The default file is awkvars.out in the current directory. -D[file], --debug[=file] Enable debugging of AWK programs. By default, the debugger reads commands interactively from the keyboard (standard input). The optional file argument specifies a file with a list of commands for the debugger to execute non-interactively. In this mode of execution, gawk loads the AWK source code and then prompts for debugging commands. Gawk can only debug AWK program source provided with the -f and --include options. The debugger is documented in GAWK: Effective AWK Programming; see https://www.gnu.org/software/gawk/manual/html_node/Debugger.html#Debugger . -e program-text, --source program-text Use program-text as AWK program source code. Each argument supplied via -e is treated as if it begins with an implicit @namespace "awk" statement. -E file, --exec file Similar to -f, however, this option is the last one processed. This should be used with #! scripts, particularly for CGI applications, to avoid passing in options or source code (!) on the command line from a URL. This option disables command-line variable assignments. -g, --gen-pot Scan and parse the AWK program, and generate a GNU .pot (Portable Object Template) format file on standard output with entries for all localizable strings in the program. The program itself is not executed. -h, --help Print a relatively short summary of the available options on the standard output. Per the GNU Coding Standards, these options cause an immediate, successful exit. -i include-file, --include include-file Load an awk source library. This searches for the library using the AWKPATH environment variable.
If the initial search fails, another attempt will be made after appending the .awk suffix. The file will be loaded only once (i.e., duplicates are eliminated), and the code does not constitute the main program source. Files read with --include are treated as if they begin with an implicit @namespace "awk" statement. -I, --trace Print the internal byte code names as they are executed when running the program. The trace is printed to standard error. Each ``op code'' is preceded by a + sign in the output. -k, --csv Enable CSV special processing. See the manual for details. -l lib, --load lib Load a gawk extension from the shared library lib. This searches for the library using the AWKLIBPATH environment variable. If the initial search fails, another attempt will be made after appending the default shared library suffix for the platform. The library initialization routine is expected to be named dl_load(). -L [value], --lint[=value] Provide warnings about constructs that are dubious or non-portable to other AWK implementations. See https://www.gnu.org/software/gawk/manual/html_node/Options.html#Options for the list of possible values for value. -M, --bignum Force arbitrary precision arithmetic on numbers. This option has no effect if gawk is not compiled to use the GNU MPFR and GMP libraries. (In such a case, gawk issues a warning.) NOTE: This feature is on parole. The primary gawk maintainer is no longer supporting it, although there is a member of the development team who is. If this situation changes, the feature will be removed from gawk. -n, --non-decimal-data Recognize octal and hexadecimal values in input data. Use this option with great caution! -N, --use-lc-numeric Force gawk to use the locale's decimal point character when parsing input data. -o[file], --pretty-print[=file] Output a pretty printed version of the program to file. The default file is awkprof.out in the current directory. This option implies --no-optimize. -O, --optimize Enable gawk's default optimizations upon the internal representation of the program. This option is on by default. -p[prof-file], --profile[=prof-file] Start a profiling session, and send the profiling data to prof-file. The default is awkprof.out in the current directory. The profile contains execution counts of each statement in the program in the left margin and function call counts for each user-defined function. Gawk runs more slowly in this mode. This option implies --no-optimize. -P, --posix This turns on compatibility mode, and disables a number of common extensions. -r, --re-interval Enable the use of interval expressions in regular expression matching. Interval expressions are enabled by default, but this option remains for backwards compatibility. -s, --no-optimize Disable gawk's default optimizations upon the internal representation of the program. -S, --sandbox Run gawk in sandbox mode, disabling the system() function, input redirection with getline, output redirection with print and printf, and loading dynamic extensions. Command execution (through pipelines) is also disabled. -t, --lint-old Provide warnings about constructs that are not portable to the original version of UNIX awk. -V, --version Print version information for this particular copy of gawk on the standard output. This is useful when reporting bugs. Per the GNU Coding Standards, these options cause an immediate, successful exit. -- Signal the end of options.
This is useful to allow further arguments to the AWK program itself to start with a “-”. In compatibility mode, any other options are flagged as invalid, but are otherwise ignored. In normal operation, as long as program text has been supplied, unknown options are passed on to the AWK program in the ARGV array for processing. For POSIX compatibility, the -W option may be used, followed by the name of a long option.
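A couple of illustrative invocations (the data file is a placeholder):

gawk -F: '{ print $1 }' /etc/passwd     # -F sets the field separator before the program runs
gawk -v limit=3 'NR <= limit' data.txt  # -v assigns limit before execution begins, so it would even be visible in a BEGIN rule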
# gawk > This command is an alias of GNU `awk`. * View documentation for the original command: `tldr -p linux awk`
trap
If the first operand is an unsigned decimal integer, the shell shall treat all operands as conditions, and shall reset each condition to the default value. Otherwise, if there are operands, the first is treated as an action and the remaining as conditions. If action is '-', the shell shall reset each condition to the default value. If action is null (""), the shell shall ignore each specified condition if it arises. Otherwise, the argument action shall be read and executed by the shell when one of the corresponding conditions arises. The action of trap shall override a previous action (either default action or one explicitly set). The value of "$?" after the trap action completes shall be the value it had before trap was invoked. The condition can be EXIT, 0 (equivalent to EXIT), or a signal specified using a symbolic name, without the SIG prefix, as listed in the tables of signal names in the <signal.h> header defined in the Base Definitions volume of POSIX.1‐2017, Chapter 13, Headers; for example, HUP, INT, QUIT, TERM. Implementations may permit names with the SIG prefix or ignore case in signal names as an extension. Setting a trap for SIGKILL or SIGSTOP produces undefined results. The environment in which the shell executes a trap on EXIT shall be identical to the environment immediately after the last command executed before the trap on EXIT was taken. Each time trap is invoked, the action argument shall be processed in a manner equivalent to: eval action Signals that were ignored on entry to a non-interactive shell cannot be trapped or reset, although no error need be reported when attempting to do so. An interactive shell may reset or catch signals ignored on entry. Traps shall remain in place for a given shell until explicitly changed with another trap command. When a subshell is entered, traps that are not being ignored shall be set to the default actions, except in the case of a command substitution containing only a single trap command, when the traps need not be altered. Implementations may check for this case using only lexical analysis; for example, if `trap` and $( trap -- ) do not alter the traps in the subshell, cases such as assigning var=trap and then using $($var) may still alter them. This does not imply that the trap command cannot be used within the subshell to set new traps. The trap command with no operands shall write to standard output a list of commands associated with each condition. If the command is executed in a subshell, the implementation does not perform the optional check described above for a command substitution containing only a single trap command, and no trap commands with operands have been executed since entry to the subshell, the list shall contain the commands that were associated with each condition immediately before the subshell environment was entered. Otherwise, the list shall contain the commands currently associated with each condition. The format shall be: "trap -- %s %s ...\n", <action>, <condition> ... The shell shall format the output, including the proper use of quoting, so that it is suitable for reinput to the shell as commands that achieve the same trapping results. For example: save_traps=$(trap) ... 
eval "$save_traps" XSI-conformant systems also allow numeric signal numbers for the conditions corresponding to the following signal names: 1 SIGHUP 2 SIGINT 3 SIGQUIT 6 SIGABRT 9 SIGKILL 14 SIGALRM 15 SIGTERM The trap special built-in shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines.
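A short sketch of common trap idioms (the temporary-file handling is illustrative):

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT   # the action runs in the environment of the last executed command
trap '' HUP                    # null action: the shell ignores SIGHUP
trap - INT QUIT                # reset both conditions to their default actions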
# trap > Automatically execute commands after receiving signals by processes or the > operating system. Can be used to perform cleanups for interruptions by the > user or other actions. More information: https://manned.org/trap. * List available signals to set traps for: `trap -l` * List active traps for the current shell: `trap -p` * Set a trap to execute commands when one or more signals are detected: `trap 'echo "Caught signal {{SIGHUP}}"' {{SIGHUP}}` * Remove active traps: `trap - {{SIGHUP}} {{SIGINT}}`
git-whatchanged
Shows commit logs and diff output each commit introduces. New users are encouraged to use git-log(1) instead. The whatchanged command is essentially the same as git-log(1) but defaults to show the raw format diff output and to skip merges. The command is kept primarily for historical reasons; fingers of many people who learned Git long before git log was invented by reading Linux kernel mailing list are trained to type it.
# git whatchanged > Show what has changed with recent commits or files. See also `git log`. More > information: https://git-scm.com/docs/git-whatchanged. * Display logs and changes for recent commits: `git whatchanged` * Display logs and changes for recent commits within the specified time frame: `git whatchanged --since="{{2 hours ago}}"` * Display logs and changes for recent commits for specific files or directories: `git whatchanged {{path/to/file_or_directory}}`
troff
GNU troff transforms groff(7) language input into the device-independent output format described in groff_out(5); troff is thus the heart of the GNU roff document formatting system. If no file operands are given on the command line, or if file is “-”, the standard input stream is read. GNU troff is functionally compatible with the AT&T troff typesetter and features numerous extensions. Many people prefer to use the groff(1) command, a front end which also runs preprocessors and output drivers in the appropriate order and with appropriate options. -h and --help display a usage message, while -v and --version show version information; all exit afterward. -a Generate a plain text approximation of the typeset output. The read-only register .A is set to 1. This option produces a sort of abstract preview of the formatted output. • Page breaks are marked by a phrase in angle brackets; for example, “<beginning of page>”. • Lines are broken where they would be in the formatted output. • A horizontal motion of any size is represented as one space. Adjacent horizontal motions are not combined. Inter-sentence space nodes (those arising from the second argument to the .ss request) are not represented. • Vertical motions are not represented. • Special characters are rendered in angle brackets; for example, the default soft hyphen character appears as “<hy>”. The above description should not be considered a specification; the details of -a output are subject to change. -b Write a backtrace reporting the state of troff's input parser to the standard error stream with each diagnostic message. The line numbers given in the backtrace might not always be correct, because troff's idea of line numbers can be confused by requests that append to macros. -c Start with color output disabled. -C Enable AT&T troff compatibility mode; implies -c. See groff_diff(7). -d ctext -d string=text Define roff string c or string as text. c must be one character; string can be of arbitrary length. Such string assignments happen before any macro file is loaded, including the startup file. Due to getopt_long(3) limitations, c cannot be, and string cannot contain, an equals sign, even though that is a valid character in a roff identifier. -E Inhibit troff error messages; implies -Ww. This option does not suppress messages sent to the standard error stream by documents or macro packages using tm or related requests. -f fam Use fam as the default font family. -F dir Search in directory dir for the selected output device's directory of device and font description files. See the description of GROFF_FONT_PATH in section “Environment” below for the default search locations and ordering. -i Read the standard input stream after all named input files have been processed. -I dir Search the directory dir for files (those named on the command line; in psbb, so, and soquiet requests; and in “\X'ps: import'”, “\X'ps: file'”, and “\X'pdf: pdfpic'” device control escape sequences). -I may be specified more than once; each dir is searched in the given order. To search the current working directory before others, add “-I .” at the desired place; it is otherwise searched last. -I works similarly to, and is named for, the “include” option of Unix C compilers. -m name Process the file name.tmac prior to any input files. If not found, tmac.name is attempted. name (in both arrangements) is presumed to be a macro file; see the description of GROFF_TMAC_PATH in section “Environment” below for the default search locations and ordering.
-M dir Search directory dir for macro files. See the description of GROFF_TMAC_PATH in section “Environment” below for the default search locations and ordering. -n num Begin numbering pages at num. The default is 1. -o list Output only pages in list, which is a comma-separated list of inclusive page ranges; n means page n, m-n means every page between m and n, -n means every page up to n, and n- means every page from n on. troff stops processing and exits after formatting the last page enumerated in list. -r cnumeric-expression -r register=numeric-expression Define roff register c or register as numeric-expression. c must be a one-character name; register can be of arbitrary length. Such register assignments happen before any macro file is loaded, including the startup file. Due to getopt_long(3) limitations, c cannot be, and register cannot contain, an equals sign, even though that is a valid character in a roff identifier. -R Don't load troffrc and troffrc-end. -T dev Prepare output for device dev. The default is ps; see groff(1). -U Operate in unsafe mode, enabling the open, opena, pi, pso, and sy requests, which are disabled by default because they allow an untrusted input document to write to arbitrary file names and run arbitrary commands. This option also adds the current directory to the macro package search path; see the -m and -M options above. -w name -W name Enable (-w) or inhibit (-W) warnings in category name. See section “Warnings” below. -z Suppress formatted output.
# troff > Typesetting processor for the groff (GNU Troff) document formatting system. > See also `groff`. More information: https://manned.org/troff. * Format output for a PostScript printer, saving the output to a file: `troff {{path/to/input.roff}} | grops > {{path/to/output.ps}}` * Format output for a PostScript printer using the [me] macro package, saving the output to a file: `troff -{{me}} {{path/to/input.roff}} | grops > {{path/to/output.ps}}` * Format output as [a]SCII text using the [man] macro package: `troff -T {{ascii}} -{{man}} {{path/to/input.roff}} | grotty` * Format output as a [pdf] file, saving the output to a file: `troff -T {{pdf}} {{path/to/input.roff}} | gropdf > {{path/to/output.pdf}}`
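* Illustrative sketch (not part of the upstream page): restrict formatting to a page range with the `-o` option described above: `troff -T {{ascii}} -o {{2-4}} {{path/to/input.roff}} | grotty`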
ar
The ar utility is part of the Software Development Utilities option. The ar utility can be used to create and maintain groups of files combined into an archive. Once an archive has been created, new files can be added, and existing files in an archive can be extracted, deleted, or replaced. When an archive consists entirely of valid object files, the implementation shall format the archive so that it is usable as a library for link editing (see c99 and fort77). When some of the archived files are not valid object files, the suitability of the archive for library use is undefined. If an archive consists entirely of printable files, the entire archive shall be printable. When ar creates an archive, it creates administrative information indicating whether a symbol table is present in the archive. When there is at least one object file that ar recognizes as such in the archive, an archive symbol table shall be created in the archive and maintained by ar; it is used by the link editor to search the archive. Whenever the ar utility is used to create or update the contents of such an archive, the symbol table shall be rebuilt. The -s option shall force the symbol table to be rebuilt. All file operands can be pathnames. However, files within archives shall be named by a filename, which is the last component of the pathname used when the file was entered into the archive. The comparison of file operands to the names of files in archives shall be performed by comparing the last component of the operand to the name of the file in the archive. It is unspecified whether multiple files in the archive may be identically named. In the case of such files, however, each file and posname operand shall match only the first file in the archive having a name that is the same as the last component of the operand. The ar utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -a Position new files in the archive after the file named by the posname operand. -b Position new files in the archive before the file named by the posname operand. -c Suppress the diagnostic message that is written to standard error by default when the archive named by archive is created. -C Prevent extracted files from replacing like-named files in the file system. This option is useful when -T is also used, to prevent truncated filenames from replacing files with the same prefix. -d Delete one or more files from archive. -i Position new files in the archive before the file in the archive named by the posname operand (equivalent to -b). -m Move the named files in the archive. The -a, -b, or -i options with the posname operand indicate the position; otherwise, move the named files in the archive to the end of the archive. -p Write the contents of the files in the archive named by file operands from archive to the standard output. If no file operands are specified, the contents of all files in the archive shall be written in the order of the archive. -q Append the named files to the end of the archive. In this case ar does not check whether the added files are already in the archive. This is useful to bypass the searching otherwise done when creating a large archive piece by piece. -r Replace or add files to archive. If the archive named by archive does not exist, a new archive shall be created and a diagnostic message shall be written to standard error (unless the -c option is specified).
If no files are specified and the archive exists, the results are undefined. Files that replace existing files in the archive shall not change the order of the archive. Files that do not replace existing files in the archive shall be appended to the archive unless a -a, -b, or -i option specifies another position. -s Force the regeneration of the archive symbol table even if ar is not invoked with an option that modifies the archive contents. This option is useful to restore the archive symbol table after it has been stripped; see strip. -t Write a table of contents of archive to the standard output. Only the files specified by the file operands shall be included in the written list. If no file operands are specified, all files in archive shall be included in the order of the archive. -T Allow filename truncation of extracted files whose archive names are longer than the file system can support. By default, extracting a file with a name that is too long shall be an error; a diagnostic message shall be written and the file shall not be extracted. -u Update older files in the archive. When used with the -r option, files in the archive shall be replaced only if the corresponding file has a modification time that is at least as new as the modification time of the file in the archive. -v Give verbose output. When used with the option characters -d, -r, or -x, write a detailed file-by-file description of the archive creation and maintenance activity, as described in the STDOUT section. When used with -p, write the name of the file in the archive to the standard output before writing the file in the archive itself to the standard output, as described in the STDOUT section. When used with -t, include a long listing of information about the files in the archive, as described in the STDOUT section. -x Extract the files in the archive named by the file operands from archive. The contents of the archive shall not be changed. If no file operands are given, all files in the archive shall be extracted. The modification time of each file extracted shall be set to the time the file is extracted from the archive.
# ar > Create, modify, and extract from Unix archives. Typically used for static > libraries (`.a`) and Debian packages (`.deb`). See also: `tar`. More > information: https://manned.org/ar. * E[x]tract all members from an archive: `ar x {{path/to/file.a}}` * Lis[t] contents in a specific archive: `ar t {{path/to/file.ar}}` * [r]eplace or add specific files to an archive: `ar r {{path/to/file.deb}} {{path/to/debian-binary path/to/control.tar.gz path/to/data.tar.xz ...}}` * In[s]ert an object file index (equivalent to using `ranlib`): `ar s {{path/to/file.a}}` * Create an archive with specific files and an accompanying object file index: `ar rs {{path/to/file.a}} {{path/to/file1.o path/to/file2.o ...}}`
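* Illustrative sketch (not part of the upstream page; member names are placeholders): [r]eplace a member, positioning it [a]fter an existing member, per the POSIX positioning options described above: `ar -r -a {{existing_member.o}} {{path/to/file.a}} {{new_member.o}}`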
hostnamectl
hostnamectl may be used to query and change the system hostname and related settings. systemd-hostnamed.service(8) and this tool distinguish three different hostnames: the high-level "pretty" hostname which might include all kinds of special characters (e.g. "Lennart's Laptop"), the "static" hostname which is the user-configured hostname (e.g. "lennarts-laptop"), and the transient hostname which is a fallback value received from network configuration (e.g. "node12345678"). If a static hostname is set to a valid value, then the transient hostname is not used. Note that the pretty hostname places few restrictions on the characters and length used, while the static and transient hostnames are limited to the usually accepted characters of Internet domain names, and 64 characters at maximum (the latter being a Linux limitation). Use systemd-firstboot(1) to initialize the system hostname for mounted (but not booted) system images. The following options are understood: --no-ask-password Do not query the user for authentication for privileged operations. --static, --transient, --pretty If status is invoked (or no explicit command is given) and one of these switches is specified, hostnamectl will print out just this selected hostname. If used with hostname, only the selected hostnames will be updated. When more than one of these switches are specified, all the specified hostnames will be updated. -H, --host= Execute the operation remotely. Specify a hostname, or a username and hostname separated by "@", to connect to. The hostname may optionally be suffixed by a port ssh is listening on, separated by ":", and then a container name, separated by "/", which connects directly to a specific container on the specified host. This will use SSH to talk to the remote machine manager instance. Container names may be enumerated with machinectl -H HOST. Put IPv6 addresses in brackets. -M, --machine= Execute operation on a local container. Specify a container name to connect to, optionally prefixed by a user name to connect as and a separating "@" character. If the special string ".host" is used in place of the container name, a connection to the local system is made (which is useful to connect to a specific user's user bus: "--user --machine=lennart@.host"). If the "@" syntax is not used, the connection is made as root user. If the "@" syntax is used either the left hand side or the right hand side may be omitted (but not both) in which case the local user name and ".host" are implied. -h, --help Print a short help text and exit. --version Print a short version string and exit. --json=MODE Shows output formatted as JSON. Expects one of "short" (for the shortest possible output without any redundant whitespace or line breaks), "pretty" (for a pretty version of the same, with indentation and line breaks) or "off" (to turn off JSON output, the default).
# hostnamectl > Get or set the hostname of the computer. More information: > https://manned.org/hostnamectl. * Get the hostname of the computer: `hostnamectl` * Set the hostname of the computer: `sudo hostnamectl set-hostname "{{hostname}}"` * Set a pretty hostname for the computer: `sudo hostnamectl set-hostname --static "{{hostname.example.com}}" && sudo hostnamectl set-hostname --pretty "{{hostname}}"` * Reset the pretty hostname to its default (empty) value: `sudo hostnamectl set-hostname --pretty ""`
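* Illustrative sketch (not part of the upstream page): print only the static hostname, or the full status as indented JSON, using the switches described above: `hostnamectl --static` or `hostnamectl --json=pretty`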
split
The split utility shall read an input file and write zero or more output files. The default size of each output file shall be 1000 lines. The size of the output files can be modified by specification of the -b or -l options. Each output file shall be created with a unique suffix. The suffix shall consist of exactly suffix_length lowercase letters from the POSIX locale. The letters of the suffix shall be used as if they were a base-26 digit system, with the first suffix to be created consisting of all 'a' characters, the second with a 'b' replacing the last 'a', and so on, until a name of all 'z' characters is created. By default, the names of the output files shall be 'x', followed by a two-character suffix from the character set as described above, starting with "aa", "ab", "ac", and so on, and continuing until the suffix "zz", for a maximum of 676 files. If the number of files required exceeds the maximum allowed by the suffix length provided, such that the last allowable file would be larger than the requested size, the split utility shall fail after creating the last file with a valid suffix; split shall not delete the files it created with valid suffixes. If the file limit is not exceeded, the last file created shall contain the remainder of the input file, and may be smaller than the requested size. If the input is an empty file, no output file shall be created and this shall not be considered to be an error. The split utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -a suffix_length Use suffix_length letters to form the suffix portion of the filenames of the split file. If -a is not specified, the default suffix length shall be two. If the length of the name operand plus the suffix_length option-argument would create a filename exceeding {NAME_MAX} bytes, an error shall result; split shall exit with a diagnostic message and no files shall be created. -b n Split a file into pieces n bytes in size. -b nk Split a file into pieces n*1024 bytes in size. -b nm Split a file into pieces n*1048576 bytes in size. -l line_count Specify the number of lines in each resulting file piece. The line_count argument is an unsigned decimal integer. The default is 1000. If the input does not end with a <newline>, the partial line shall be included in the last output file.
# split > Split a file into pieces. More information: https://ss64.com/osx/split.html. * Split a file, each split having 10 lines (except the last split): `split -l {{10}} {{filename}}` * Split a file by a regular expression. The matching line will be the first line of the next output file: `split -p {{cat|^[dh]og}} {{filename}}` * Split a file with 512 bytes in each split (except the last split; use 512k for kilobytes and 512m for megabytes): `split -b {{512}} {{filename}}` * Split a file into 5 files. File is split such that each split has same size (except the last split): `split -n {{5}} {{filename}}`
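* Illustrative sketch (not part of the upstream page; the output prefix is a placeholder): split into 1000-line pieces with three-letter suffixes (`part_aaa`, `part_aab`, ...): `split -l {{1000}} -a {{3}} {{path/to/file}} {{part_}}`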
sftp
sftp is a file transfer program, similar to ftp(1), which performs all operations over an encrypted ssh(1) transport. It may also use many features of ssh, such as public key authentication and compression. The destination may be specified either as [user@]host[:path] or as a URI in the form sftp://[user@]host[:port][/path]. If the destination includes a path and it is not a directory, sftp will retrieve files automatically if a non-interactive authentication method is used; otherwise it will do so after successful interactive authentication. If no path is specified, or if the path is a directory, sftp will log in to the specified host and enter interactive command mode, changing to the remote directory if one was specified. An optional trailing slash can be used to force the path to be interpreted as a directory. Since the destination formats use colon characters to delimit host names from path names or port numbers, IPv6 addresses must be enclosed in square brackets to avoid ambiguity. The options are as follows: -4 Forces sftp to use IPv4 addresses only. -6 Forces sftp to use IPv6 addresses only. -A Allows forwarding of ssh-agent(1) to the remote system. The default is not to forward an authentication agent. -a Attempt to continue interrupted transfers rather than overwriting existing partial or complete copies of files. If the partial contents differ from those being transferred, then the resultant file is likely to be corrupt. -B buffer_size Specify the size of the buffer that sftp uses when transferring files. Larger buffers require fewer round trips at the cost of higher memory consumption. The default is 32768 bytes. -b batchfile Batch mode reads a series of commands from an input batchfile instead of stdin. Since it lacks user interaction, it should be used in conjunction with non-interactive authentication to obviate the need to enter a password at connection time (see sshd(8) and ssh-keygen(1) for details). A batchfile of ‘-’ may be used to indicate standard input. sftp will abort if any of the following commands fail: get, put, reget, reput, rename, ln, rm, mkdir, chdir, ls, lchdir, copy, cp, chmod, chown, chgrp, lpwd, df, symlink, and lmkdir. Termination on error can be suppressed on a command by command basis by prefixing the command with a ‘-’ character (for example, -rm /tmp/blah*). Echo of the command may be suppressed by prefixing the command with a ‘@’ character. These two prefixes may be combined in any order, for example -@ls /bsd. -C Enables compression (via ssh's -C flag). -c cipher Selects the cipher to use for encrypting the data transfers. This option is directly passed to ssh(1). -D sftp_server_command Connect directly to a local sftp server (rather than via ssh(1)). A command and arguments may be specified, for example "/path/sftp-server -el debug3". This option may be useful in debugging the client and server. -F ssh_config Specifies an alternative per-user configuration file for ssh(1). This option is directly passed to ssh(1). -f Requests that files be flushed to disk immediately after transfer. When uploading files, this feature is only enabled if the server implements the "fsync@openssh.com" extension. -i identity_file Selects the file from which the identity (private key) for public key authentication is read. This option is directly passed to ssh(1). -J destination Connect to the target host by first making an sftp connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there.
Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. This option is directly passed to ssh(1). -l limit Limits the used bandwidth, specified in Kbit/s. -N Disables quiet mode, e.g. to override the implicit quiet mode set by the -b flag. -o ssh_option Can be used to pass options to ssh in the format used in ssh_config(5). This is useful for specifying options for which there is no separate sftp command-line flag. For example, to specify an alternate port use: sftp -oPort=24. For full details of the options listed below, and their possible values, see ssh_config(5). AddressFamily BatchMode BindAddress BindInterface CanonicalDomains CanonicalizeFallbackLocal CanonicalizeHostname CanonicalizeMaxDots CanonicalizePermittedCNAMEs CASignatureAlgorithms CertificateFile CheckHostIP Ciphers Compression ConnectionAttempts ConnectTimeout ControlMaster ControlPath ControlPersist GlobalKnownHostsFile GSSAPIAuthentication GSSAPIDelegateCredentials HashKnownHosts Host HostbasedAcceptedAlgorithms HostbasedAuthentication HostKeyAlgorithms HostKeyAlias Hostname IdentitiesOnly IdentityAgent IdentityFile IPQoS KbdInteractiveAuthentication KbdInteractiveDevices KexAlgorithms KnownHostsCommand LogLevel MACs NoHostAuthenticationForLocalhost NumberOfPasswordPrompts PasswordAuthentication PKCS11Provider Port PreferredAuthentications ProxyCommand ProxyJump PubkeyAcceptedAlgorithms PubkeyAuthentication RekeyLimit RequiredRSASize SendEnv ServerAliveInterval ServerAliveCountMax SetEnv StrictHostKeyChecking TCPKeepAlive UpdateHostKeys User UserKnownHostsFile VerifyHostKeyDNS -P port Specifies the port to connect to on the remote host. -p Preserves modification times, access times, and modes from the original files transferred. -q Quiet mode: disables the progress meter as well as warning and diagnostic messages from ssh(1). -R num_requests Specify how many requests may be outstanding at any one time. Increasing this may slightly improve file transfer speed but will increase memory usage. The default is 64 outstanding requests. -r Recursively copy entire directories when uploading and downloading. Note that sftp does not follow symbolic links encountered in the tree traversal. -S program Name of the program to use for the encrypted connection. The program must understand ssh(1) options. -s subsystem | sftp_server Specifies the SSH2 subsystem or the path for an sftp server on the remote host. A path is useful when the remote sshd(8) does not have an sftp subsystem configured. -v Raise logging level. This option is also passed to ssh. -X sftp_option Specify an option that controls aspects of SFTP protocol behaviour. The valid options are: nrequests=value Controls how many concurrent SFTP read or write requests may be in progress at any point in time during a download or upload. By default 64 requests may be active concurrently. buffer=value Controls the maximum buffer size for a single SFTP read/write operation used during download or upload. By default a 32KB buffer is used.
# sftp > Secure File Transfer Program. Interactive program to copy files between > hosts over SSH. For non-interactive file transfers, see `scp` or `rsync`. > More information: https://manned.org/sftp. * Connect to a remote server and enter an interactive command mode: `sftp {{remote_user}}@{{remote_host}}` * Connect using an alternate port: `sftp -P {{remote_port}} {{remote_user}}@{{remote_host}}` * Connect using a predefined host (in `~/.ssh/config`): `sftp {{host}}` * Transfer remote file to the local system: `get {{/path/remote_file}}` * Transfer local file to the remote system: `put {{/path/local_file}}` * Transfer remote directory to the local system recursively (works with `put` too): `get -R {{/path/remote_directory}}` * Get list of files on local machine: `lls` * Get list of files on remote machine: `ls`
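* Illustrative sketch (not part of the upstream page; file, user, and host names are placeholders): run a scripted transfer in batch mode, where the batchfile contains commands such as `get /remote/file`: `sftp -b {{path/to/batchfile}} {{user}}@{{host}}`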
renice
The renice utility shall request that the nice values (see the Base Definitions volume of POSIX.1‐2017, Section 3.244, Nice Value) of one or more running processes be changed. By default, the applicable processes are specified by their process IDs. When a process group is specified (see -g), the request shall apply to all processes in the process group. The nice value shall be bounded in an implementation-defined manner. If the requested increment would raise or lower the nice value of a specified process beyond implementation-defined limits, then the limit whose value was exceeded shall be used. When a user is reniced, the request applies to all processes whose saved set-user-ID matches the user ID corresponding to the user. Regardless of which options are supplied or any other factor, renice shall not alter the nice values of any process unless the user requesting such a change has appropriate privileges to do so for the specified process. If the user lacks appropriate privileges to perform the requested action, the utility shall return an error status. The saved set-user-ID of the user's process shall be checked instead of its effective user ID when renice attempts to determine the user ID of the process in order to determine whether the user has appropriate privileges. The renice utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines, except for Guideline 9. The following options shall be supported: -g Interpret the following operands as unsigned decimal integer process group IDs. -n increment Specify how the nice value of the specified process or processes is to be adjusted. The increment option-argument is a positive or negative decimal integer that shall be used to modify the nice value of the specified process or processes. Positive increment values shall cause a higher nice value (less favorable scheduling). Negative increment values may require appropriate privileges and shall cause a lower nice value (more favorable scheduling). -p Interpret the following operands as unsigned decimal integer process IDs. The -p option is the default if no options are specified. -u Interpret the following operands as users. If a user exists with a user name equal to the operand, then the user ID of that user is used in further processing. Otherwise, if the operand represents an unsigned decimal integer, it shall be used as the numeric user ID of the user.
# renice > Alters the scheduling priority/niceness of one or more running processes. > Niceness values range from -20 (most favorable to the process) to 19 (least > favorable to the process). More information: https://manned.org/renice. * Change priority of a running process: `renice -n {{niceness_value}} -p {{pid}}` * Change priority of all processes owned by a user: `renice -n {{niceness_value}} -u {{user}}` * Change priority of all processes that belong to a process group: `renice -n {{niceness_value}} --pgrp {{process_group}}`
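* Illustrative sketch (not part of the upstream page; the PGID is a placeholder): with the POSIX `-g` option, make every process in a process group five steps nicer (less favorably scheduled): `renice -n {{5}} -g {{process_group_id}}`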
envsubst
Substitutes the values of environment variables. Operation mode: -v, --variables output the variables occurring in SHELL-FORMAT Informative output: -h, --help display this help and exit -V, --version output version information and exit In normal operation mode, standard input is copied to standard output, with references to environment variables of the form $VARIABLE or ${VARIABLE} being replaced with the corresponding values. If a SHELL-FORMAT is given, only those environment variables that are referenced in SHELL-FORMAT are substituted; otherwise all environment variable references occurring in standard input are substituted. When --variables is used, standard input is ignored, and the output consists of the environment variables that are referenced in SHELL-FORMAT, one per line.
# envsubst > Substitutes environment variables with their value in shell format strings. > Variables to be replaced should be in either `${var}` or `$var` format. More > information: https://www.gnu.org/software/gettext/manual/html_node/envsubst- > Invocation.html. * Replace environment variables in `stdin` and output to `stdout`: `echo '{{$HOME}}' | envsubst` * Replace environment variables in an input file and output to `stdout`: `envsubst < {{path/to/input_file}}` * Replace environment variables in an input file and output to a file: `envsubst < {{path/to/input_file}} > {{path/to/output_file}}` * Replace environment variables in an input file from a space-separated list: `envsubst '{{$USER $SHELL $HOME}}' < {{path/to/input_file}}`
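* Illustrative sketch (not part of the upstream page): list the variables referenced in a shell-format string, one per line, ignoring `stdin`: `envsubst --variables '{{$USER $HOME}}'`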
comm
The comm utility shall read file1 and file2, which should be ordered in the current collating sequence, and produce three text columns as output: lines only in file1, lines only in file2, and lines in both files. If the lines in both files are not ordered according to the collating sequence of the current locale, the results are unspecified. If the collating sequence of the current locale does not have a total ordering of all characters (see the Base Definitions volume of POSIX.1‐2017, Section 7.3.2, LC_COLLATE) and any lines from the input files collate equally but are not identical, comm should treat them as different lines but may treat them as being the same. If it treats them as different, comm should expect them to be ordered according to a further byte-by-byte comparison using the collating sequence for the POSIX locale and if they are not ordered in this way, the output of comm can identify such lines as being both unique to file1 and unique to file2 instead of being in both files. The comm utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -1 Suppress the output column of lines unique to file1. -2 Suppress the output column of lines unique to file2. -3 Suppress the output column of lines duplicated in file1 and file2.
# comm > Select or reject lines common to two files. Both files must be sorted. More > information: https://www.gnu.org/software/coreutils/comm. * Produce three tab-separated columns: lines only in first file, lines only in second file and common lines: `comm {{file1}} {{file2}}` * Print only lines common to both files: `comm -12 {{file1}} {{file2}}` * Print only lines common to both files, reading one file from `stdin`: `cat {{file1}} | comm -12 - {{file2}}` * Get lines only found in first file, saving the result to a third file: `comm -23 {{file1}} {{file2}} > {{file1_only}}` * Print lines only found in second file, when the files aren't sorted: `comm -13 <(sort {{file1}}) <(sort {{file2}})`
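* Illustrative sketch (not part of the upstream page): count the lines common to two sorted files by suppressing the first two columns: `comm -12 {{file1}} {{file2}} | wc -l`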
gdb
The purpose of a debugger such as GDB is to allow you to see what is going on "inside" another program while it executes -- or what another program was doing at the moment it crashed. GDB can do four main kinds of things (plus other things in support of these) to help you catch bugs in the act: • Start your program, specifying anything that might affect its behavior. • Make your program stop on specified conditions. • Examine what has happened, when your program has stopped. • Change things in your program, so you can experiment with correcting the effects of one bug and go on to learn about another. You can use GDB to debug programs written in C, C++, Fortran and Modula-2. GDB is invoked with the shell command "gdb". Once started, it reads commands from the terminal until you tell it to exit with the GDB command "quit" or "exit". You can get online help from GDB itself by using the command "help". You can run "gdb" with no arguments or options; but the most usual way to start GDB is with one argument or two, specifying an executable program as the argument: gdb program You can also start with both an executable program and a core file specified: gdb program core You can, instead, specify a process ID as a second argument or use option "-p", if you want to debug a running process: gdb program 1234 gdb -p 1234 would attach GDB to process 1234. With option -p you can omit the program filename. Here are some of the most frequently needed GDB commands: break [file:][function|line] Set a breakpoint at function or line (in file). run [arglist] Start your program (with arglist, if specified). bt Backtrace: display the program stack. print expr Display the value of an expression. c Continue running your program (after stopping, e.g. at a breakpoint). next Execute next program line (after stopping); step over any function calls in the line. edit [file:]function look at the program line where it is presently stopped. list [file:]function type the text of the program in the vicinity of where it is presently stopped. step Execute next program line (after stopping); step into any function calls in the line. help [name] Show information about GDB command name, or general information about using GDB. quit exit Exit from GDB. For full details on GDB, see Using GDB: A Guide to the GNU Source-Level Debugger, by Richard M. Stallman and Roland H. Pesch. The same text is available online as the "gdb" entry in the "info" program. Any arguments other than options specify an executable file and core file (or process ID); that is, the first argument encountered with no associated option flag is equivalent to a --se option, and the second, if any, is equivalent to a -c option if it's the name of a file. Many options have both long and abbreviated forms; both are shown here. The long forms are also recognized if you truncate them, so long as enough of the option is present to be unambiguous. The abbreviated forms are shown here with - and long forms are shown with -- to reflect how they are shown in --help. However, GDB recognizes all of the following conventions for most options: "--option=value" "--option value" "-option=value" "-option value" "--o=value" "--o value" "-o=value" "-o value" All the options and command line arguments you give are processed in sequential order. The order makes a difference when the -x option is used. --help -h List all options, with brief explanations. --symbols=file -s file Read symbol table from file. --write Enable writing into executable and core files. 
--exec=file -e file Use file as the executable file to execute when appropriate, and for examining pure data in conjunction with a core dump. --se=file Read symbol table from file and use it as the executable file. --core=file -c file Use file as a core dump to examine. --command=file -x file Execute GDB commands from file. --eval-command=command -ex command Execute given GDB command. --init-eval-command=command -iex Execute GDB command before loading the inferior. --directory=directory -d directory Add directory to the path to search for source files. --nh Do not execute commands from ~/.config/gdb/gdbinit, ~/.gdbinit, ~/.config/gdb/gdbearlyinit, or ~/.gdbearlyinit. --nx -n Do not execute commands from any .gdbinit or .gdbearlyinit initialization files. --quiet --silent -q "Quiet". Do not print the introductory and copyright messages. These messages are also suppressed in batch mode. --batch Run in batch mode. Exit with status 0 after processing all the command files specified with -x (and .gdbinit, if not inhibited). Exit with nonzero status if an error occurs in executing the GDB commands in the command files. Batch mode may be useful for running GDB as a filter, for example to download and run a program on another computer; in order to make this more useful, the message Program exited normally. (which is ordinarily issued whenever a program running under GDB control terminates) is not issued when running in batch mode. --batch-silent Run in batch mode, just like --batch, but totally silent. All GDB output is suppressed (stderr is unaffected). This is much quieter than --silent and would be useless for an interactive session. This is particularly useful when using targets that give Loading section messages, for example. Note that targets that give their output via GDB, as opposed to writing directly to "stdout", will also be made silent. --args prog [arglist] Change interpretation of command line so that arguments following this option are passed as arguments to the inferior. As an example, take the following command: gdb ./a.out -q It would start GDB with -q, not printing the introductory message. On the other hand, using: gdb --args ./a.out -q starts GDB with the introductory message, and passes the option to the inferior. --pid=pid Attach GDB to an already running program, with the PID pid. --tui Open the terminal user interface. --readnow Read all symbols from the given symfile on the first access. --readnever Do not read symbol files. --return-child-result GDB's exit code will be the same as the child's exit code. --configuration Print details about GDB configuration and then exit. --version Print version information and then exit. --cd=directory Run GDB using directory as its working directory, instead of the current directory. --data-directory=directory -D Run GDB using directory as its data directory. The data directory is where GDB searches for its auxiliary files. --fullname -f Emacs sets this option when it runs GDB as a subprocess. It tells GDB to output the full file name and line number in a standard, recognizable fashion each time a stack frame is displayed (which includes each time the program stops). This recognizable format looks like two \032 characters, followed by the file name, line number and character position separated by colons, and a newline. The Emacs-to-GDB interface program uses the two \032 characters as a signal to display the source code for the frame.
-b baudrate Set the line speed (baud rate or bits per second) of any serial interface used by GDB for remote debugging. -l timeout Set timeout, in seconds, for remote debugging. --tty=device Run using device for your program's standard input and output.
# gdb > The GNU Debugger. More information: https://www.gnu.org/software/gdb. * Debug an executable: `gdb {{executable}}` * Attach `gdb` to an already running process: `gdb -p {{procID}}` * Debug with a core file: `gdb -c {{core}} {{executable}}` * Execute given GDB commands upon start: `gdb -ex "{{commands}}" {{executable}}` * Start `gdb` and pass arguments to the executable: `gdb --args {{executable}} {{argument1}} {{argument2}}`
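* Illustrative sketch (not part of the upstream page; the PID is a placeholder): print a backtrace of a running process non-interactively and exit: `gdb --batch -ex "bt" -p {{procID}}`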
git-prune
Note In most cases, users should run git gc, which calls git prune. See the section "NOTES", below. This runs git fsck --unreachable using all the refs available in refs/, optionally with an additional set of objects specified on the command line, and prunes all unpacked objects unreachable from any of these head objects from the object database. In addition, it prunes the unpacked objects that are also found in packs by running git prune-packed. It also removes entries from .git/shallow that are not reachable by any ref. Note that unreachable, packed objects will remain. If this is not desired, see git-repack(1). -n, --dry-run Do not remove anything; just report what it would remove. -v, --verbose Report all removed objects. --progress Show progress. --expire <time> Only expire loose objects older than <time>. -- Do not interpret any more arguments as options. <head>... In addition to objects reachable from any of our references, keep objects reachable from listed <head>s.
# git prune > Git command for pruning all unreachable objects from the object database. > This command is often not used directly but as an internal command that is > used by Git gc. More information: https://git-scm.com/docs/git-prune. * Report what would be removed by Git prune without removing it: `git prune --dry-run` * Prune unreachable objects and display what has been pruned to `stdout`: `git prune --verbose` * Prune unreachable objects while showing progress: `git prune --progress`
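* Illustrative sketch (not part of the upstream page): report, without deleting, loose objects older than two weeks: `git prune --dry-run --verbose --expire={{2.weeks.ago}}`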
oomctl
oomctl may be used to get information about the various contexts read in by the systemd(1) userspace out-of-memory (OOM) killer, systemd-oomd(8). The following options are understood: -h, --help Print a short help text and exit. --version Print a short version string and exit. --no-pager Do not pipe output into a pager.
# oomctl > Analyze the state stored in `systemd-oomd`. More information: > https://www.freedesktop.org/software/systemd/man/oomctl.html. * Show the current state of the cgroups and system contexts stored by `systemd-oomd`: `oomctl dump`
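* Illustrative sketch (not part of the upstream page; assumes the global switch is accepted before the verb, as is usual for systemd tools): dump the contexts without piping through a pager: `oomctl --no-pager dump`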
git-config
You can query/set/replace/unset options with this command. The name is actually the section and the key separated by a dot, and the value will be escaped. Multiple lines can be added to an option by using the --add option. If you want to update or unset an option which can occur on multiple lines, a value-pattern (which is an extended regular expression, unless the --fixed-value option is given) needs to be given. Only the existing values that match the pattern are updated or unset. If you want to handle the lines that do not match the pattern, just prepend a single exclamation mark in front (see also the section called “EXAMPLES”), but note that this only works when the --fixed-value option is not in use. The --type=<type> option instructs git config to ensure that incoming and outgoing values are canonicalize-able under the given <type>. If no --type=<type> is given, no canonicalization will be performed. Callers may unset an existing --type specifier with --no-type. When reading, the values are read from the system, global and repository local configuration files by default, and options --system, --global, --local, --worktree and --file <filename> can be used to tell the command to read from only that location (see the section called “FILES”). When writing, the new value is written to the repository local configuration file by default, and options --system, --global, --worktree, --file <filename> can be used to tell the command to write to that location (you can say --local but that is the default). This command will fail with non-zero status upon error. Some exit codes are: • The section or key is invalid (ret=1), • no section or name was provided (ret=2), • the config file is invalid (ret=3), • the config file cannot be written (ret=4), • you try to unset an option which does not exist (ret=5), • you try to unset/set an option for which multiple lines match (ret=5), or • you try to use an invalid regexp (ret=6). On success, the command returns the exit code 0. A list of all available configuration variables can be obtained using the git help --config command. --replace-all Default behavior is to replace at most one line. This replaces all lines matching the key (and optionally the value-pattern). --add Adds a new line to the option without altering any existing values. This is the same as providing ^$ as the value-pattern in --replace-all. --get Get the value for a given key (optionally filtered by a regex matching the value). Returns error code 1 if the key was not found and the last value if multiple key values were found. --get-all Like get, but returns all values for a multi-valued key. --get-regexp Like --get-all, but interprets the name as a regular expression and writes out the key names. Regular expression matching is currently case-sensitive and done against a canonicalized version of the key in which section and variable names are lowercased, but subsection names are not. --get-urlmatch <name> <URL> When given a two-part name section.key, the value for section.<URL>.key whose <URL> part matches the best to the given URL is returned (if no such key exists, the value for section.key is used as a fallback). When given just the section as name, do so for all the keys in the section and list them. Returns error code 1 if no value is found. --global For writing options: write to global ~/.gitconfig file rather than the repository .git/config, write to $XDG_CONFIG_HOME/git/config file if this file exists and the ~/.gitconfig file doesn’t. 
For reading options: read only from global ~/.gitconfig and from $XDG_CONFIG_HOME/git/config rather than from all available files. See also the section called “FILES”. --system For writing options: write to system-wide $(prefix)/etc/gitconfig rather than the repository .git/config. For reading options: read only from system-wide $(prefix)/etc/gitconfig rather than from all available files. See also the section called “FILES”. --local For writing options: write to the repository .git/config file. This is the default behavior. For reading options: read only from the repository .git/config rather than from all available files. See also the section called “FILES”. --worktree Similar to --local except that $GIT_DIR/config.worktree is read from or written to if extensions.worktreeConfig is enabled. If not it’s the same as --local. Note that $GIT_DIR is equal to $GIT_COMMON_DIR for the main working tree, but is of the form $GIT_DIR/worktrees/<id>/ for other working trees. See git-worktree(1) to learn how to enable extensions.worktreeConfig. -f <config-file>, --file <config-file> For writing options: write to the specified file rather than the repository .git/config. For reading options: read only from the specified file rather than from all available files. See also the section called “FILES”. --blob <blob> Similar to --file but use the given blob instead of a file. E.g. you can use master:.gitmodules to read values from the file .gitmodules in the master branch. See "SPECIFYING REVISIONS" section in gitrevisions(7) for a more complete list of ways to spell blob names. --remove-section Remove the given section from the configuration file. --rename-section Rename the given section to a new name. --unset Remove the line matching the key from config file. --unset-all Remove all lines matching the key from config file. -l, --list List all variables set in config file, along with their values. --fixed-value When used with the value-pattern argument, treat value-pattern as an exact string instead of a regular expression. This will restrict the name/value pairs that are matched to only those where the value is exactly equal to the value-pattern. --type <type> git config will ensure that any input or output is valid under the given type constraint(s), and will canonicalize outgoing values in <type>'s canonical form. Valid <type>'s include: • bool: canonicalize values as either "true" or "false". • int: canonicalize values as simple decimal numbers. An optional suffix of k, m, or g will cause the value to be multiplied by 1024, 1048576, or 1073741824 upon input. • bool-or-int: canonicalize according to either bool or int, as described above. • path: canonicalize by adding a leading ~ to the value of $HOME and ~user to the home directory for the specified user. This specifier has no effect when setting the value (but you can use git config section.variable ~/ from the command line to let your shell do the expansion.) • expiry-date: canonicalize by converting from a fixed or relative date-string to a timestamp. This specifier has no effect when setting the value. • color: When getting a value, canonicalize by converting to an ANSI color escape sequence. When setting a value, a sanity-check is performed to ensure that the given value is canonicalize-able as an ANSI color, but it is written as-is. --bool, --int, --bool-or-int, --path, --expiry-date Historical options for selecting a type specifier. Prefer instead --type (see above). 
--no-type Un-sets the previously set type specifier (if one was previously set). This option requests that git config not canonicalize the retrieved variable. --no-type has no effect without --type=<type> or --<type>. -z, --null For all options that output values and/or keys, always end values with the null character (instead of a newline). Use newline instead as a delimiter between key and value. This allows for secure parsing of the output without getting confused e.g. by values that contain line breaks. --name-only Output only the names of config variables for --list or --get-regexp. --show-origin Augment the output of all queried config options with the origin type (file, standard input, blob, command line) and the actual origin (config file path, ref, or blob id if applicable). --show-scope Similar to --show-origin in that it augments the output of all queried config options with the scope of that value (worktree, local, global, system, command). --get-colorbool <name> [<stdout-is-tty>] Find the color setting for <name> (e.g. color.diff) and output "true" or "false". <stdout-is-tty> should be either "true" or "false", and is taken into account when configuration says "auto". If <stdout-is-tty> is missing, then checks the standard output of the command itself, and exits with status 0 if color is to be used, or exits with status 1 otherwise. When the color setting for name is undefined, the command uses color.ui as fallback. --get-color <name> [<default>] Find the color configured for name (e.g. color.diff.new) and output it as the ANSI color escape sequence to the standard output. The optional default parameter is used instead, if there is no color configured for name. --type=color [--default=<default>] is preferred over --get-color (but note that --get-color will omit the trailing newline printed by --type=color). -e, --edit Opens an editor to modify the specified config file; either --system, --global, or repository (default). --[no-]includes Respect include.* directives in config files when looking up values. Defaults to off when a specific file is given (e.g., using --file, --global, etc) and on when searching all config files. --default <value> When using --get, and the requested variable is not found, behave as if <value> were the value assigned to that variable.
# git config > Manage custom configuration options for Git repositories. These > configurations can be local (for the current repository) or global (for the > current user). More information: https://git-scm.com/docs/git-config. * List only local configuration entries (stored in `.git/config` in the current repository): `git config --list --local` * List only global configuration entries (stored in `~/.gitconfig` by default or in `$XDG_CONFIG_HOME/git/config` if such a file exists): `git config --list --global` * List only system configuration entries (stored in `/etc/gitconfig`), and show their file location: `git config --list --system --show-origin` * Get the value of a given configuration entry: `git config alias.unstage` * Set the global value of a given configuration entry: `git config --global alias.unstage "reset HEAD --"` * Revert a global configuration entry to its default value: `git config --global --unset alias.unstage` * Edit the Git configuration for the current repository in the default editor: `git config --edit` * Edit the global Git configuration in the default editor: `git config --global --edit`
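* Illustrative sketch (not part of the upstream page; the key is an arbitrary example): read a key as a canonical boolean, falling back to a default when it is unset: `git config --get --type=bool --default false {{commit.gpgsign}}`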
git-merge-base
git merge-base finds best common ancestor(s) between two commits to use in a three-way merge. One common ancestor is better than another common ancestor if the latter is an ancestor of the former. A common ancestor that does not have any better common ancestor is a best common ancestor, i.e. a merge base. Note that there can be more than one merge base for a pair of commits. -a, --all Output all merge bases for the commits, instead of just one.
# git merge-base > Find a common ancestor of two commits. More information: https://git- > scm.com/docs/git-merge-base. * Print the best common ancestor of two commits: `git merge-base {{commit_1}} {{commit_2}}` * Output all best common ancestors of two commits: `git merge-base --all {{commit_1}} {{commit_2}}` * Check if a commit is an ancestor of a specific commit: `git merge-base --is-ancestor {{ancestor_commit}} {{commit}}`
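* Illustrative sketch (not part of the upstream page): because `--is-ancestor` reports through its exit status, it fits shell conditionals: `git merge-base --is-ancestor {{ancestor_commit}} {{commit}} && echo "already merged"`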
pwd
The pwd utility shall write to standard output an absolute pathname of the current working directory, which does not contain the filenames dot or dot-dot. The pwd utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -L If the PWD environment variable contains an absolute pathname of the current directory and the pathname does not contain any components that are dot or dot-dot, pwd shall write this pathname to standard output, except that if the PWD environment variable is longer than {PATH_MAX} bytes including the terminating null, it is unspecified whether pwd writes this pathname to standard output or behaves as if the -P option had been specified. Otherwise, the -L option shall behave as the -P option. -P The pathname written to standard output shall not contain any components that refer to files of type symbolic link. If there are multiple pathnames that the pwd utility could write to standard output, one beginning with a single <slash> character and one or more beginning with two <slash> characters, then it shall write the pathname beginning with a single <slash> character. The pathname shall not contain any unnecessary <slash> characters after the leading one or two <slash> characters. If both -L and -P are specified, the last one shall apply. If neither -L nor -P is specified, the pwd utility shall behave as if -L had been specified.
# pwd > Print name of current/working directory. More information: > https://www.gnu.org/software/coreutils/pwd. * Print the current directory: `pwd` * Print the current directory, and resolve all symlinks (i.e. show the "physical" path): `pwd -P`
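* Illustrative sketch (not part of the upstream page; assumes `/tmp/link` is a hypothetical symlink to `/var/data`): after `cd /tmp/link`, `pwd -L` prints `/tmp/link` while `pwd -P` prints `/var/data`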
git-unpack-file
Creates a file holding the contents of the blob specified by sha1. It returns the name of the temporary file in the following format: .merge_file_XXXXX <blob> Must be a blob id
# git unpack-file > Create a temporary file with a blob's contents. More information: > https://git-scm.com/docs/git-unpack-file. * Create a file holding the contents of the blob specified by its ID then print the name of the temporary file: `git unpack-file {{blob_id}}`
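* Illustrative sketch (not part of the upstream page; assumes a tracked file named `README`): resolve a path to a blob ID, then unpack it: `git unpack-file $(git rev-parse HEAD:{{README}})`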
git-fsck
Verifies the connectivity and validity of the objects in the database. <object> An object to treat as the head of an unreachability trace. If no objects are given, git fsck defaults to using the index file, all SHA-1 references in refs namespace, and all reflogs (unless --no-reflogs is given) as heads. --unreachable Print out objects that exist but that aren’t reachable from any of the reference nodes. --[no-]dangling Print objects that exist but that are never directly used (default). --no-dangling can be used to omit this information from the output. --root Report root nodes. --tags Report tags. --cache Consider any object recorded in the index also as a head node for an unreachability trace. --no-reflogs Do not consider commits that are referenced only by an entry in a reflog to be reachable. This option is meant only to search for commits that used to be in a ref, but now aren’t, but are still in that corresponding reflog. --full Check not just objects in GIT_OBJECT_DIRECTORY ($GIT_DIR/objects), but also the ones found in alternate object pools listed in GIT_ALTERNATE_OBJECT_DIRECTORIES or $GIT_DIR/objects/info/alternates, and in packed Git archives found in $GIT_DIR/objects/pack and corresponding pack subdirectories in alternate object pools. This is now default; you can turn it off with --no-full. --connectivity-only Check only the connectivity of reachable objects, making sure that any objects referenced by a reachable tag, commit, or tree are present. This speeds up the operation by avoiding reading blobs entirely (though it does still check that referenced blobs exist). This will detect corruption in commits and trees, but not do any semantic checks (e.g., for format errors). Corruption in blob objects will not be detected at all. Unreachable tags, commits, and trees will also be accessed to find the tips of dangling segments of history. Use --no-dangling if you don’t care about this output and want to speed it up further. --strict Enable more strict checking, namely to catch a file mode recorded with g+w bit set, which was created by older versions of Git. Existing repositories, including the Linux kernel, Git itself, and the sparse repository, have old objects that trigger this check, but it is recommended to check new projects with this flag. --verbose Be chatty. --lost-found Write dangling objects into .git/lost-found/commit/ or .git/lost-found/other/, depending on type. If the object is a blob, the contents are written into the file, rather than its object name. --name-objects When displaying names of reachable objects, in addition to the SHA-1 also display a name that describes how they are reachable, compatible with git-rev-parse(1), e.g. HEAD@{1234567890}~25^2:src/. --[no-]progress Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --no-progress or --verbose is specified. --progress forces progress status even if the standard error stream is not directed to a terminal.
# git fsck > Verify the validity and connectivity of nodes in a Git repository index. > Does not make any modifications. See `git gc` for cleaning up dangling > blobs. More information: https://git-scm.com/docs/git-fsck. * Check the current repository: `git fsck` * List all tags found: `git fsck --tags` * List all root nodes found: `git fsck --root`
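* Illustrative sketch (not part of the upstream page): write dangling objects under `.git/lost-found/` for manual inspection, per the `--lost-found` option described above: `git fsck --lost-found`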
chgrp
The chgrp utility shall set the group ID of the file named by each file operand to the group ID specified by the group operand. For each file operand, or, if the -R option is used, each file encountered while walking the directory trees specified by the file operands, the chgrp utility shall perform actions equivalent to the chown() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: * The file operand shall be used as the path argument. * The user ID of the file shall be used as the owner argument. * The specified group ID shall be used as the group argument. Unless chgrp is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set-group-ID bits of other file types may be cleared. The chgrp utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -h For each file operand that names a file of type symbolic link, chgrp shall attempt to set the group ID of the symbolic link instead of the file referenced by the symbolic link. -H If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line, chgrp shall change the group of the directory referenced by the symbolic link and all files in the file hierarchy below it. -L If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line or encountered during the traversal of a file hierarchy, chgrp shall change the group of the directory referenced by the symbolic link and all files in the file hierarchy below it. -P If the -R option is specified and a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, chgrp shall change the group ID of the symbolic link. The chgrp utility shall not follow the symbolic link to any other part of the file hierarchy. -R Recursively change file group IDs. For each file operand that names a directory, chgrp shall change the group of the directory and all files in the file hierarchy below it. Unless a -H, -L, or -P option is specified, it is unspecified which of these options will be used as the default. Specifying more than one of the mutually-exclusive options -H, -L, and -P shall not be considered an error. The last option specified shall determine the behavior of the utility.
# chgrp > Change group ownership of files and directories. More information: > https://www.gnu.org/software/coreutils/chgrp. * Change the owner group of a file/directory: `chgrp {{group}} {{path/to/file_or_directory}}` * Recursively change the owner group of a directory and its contents: `chgrp -R {{group}} {{path/to/directory}}` * Change the owner group of a symbolic link: `chgrp -h {{group}} {{path/to/symlink}}` * Change the owner group of a file/directory to match a reference file: `chgrp --reference={{path/to/reference_file}} {{path/to/file_or_directory}}`
free
free displays the total amount of free and used physical and swap memory in the system, as well as the buffers and caches used by the kernel. The information is gathered by parsing /proc/meminfo. The displayed columns are: total Total usable memory (MemTotal and SwapTotal in /proc/meminfo). This includes the physical and swap memory minus a few reserved bits and kernel binary code. used Used or unavailable memory (calculated as total - available) free Unused memory (MemFree and SwapFree in /proc/meminfo) shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo) buffers Memory used by kernel buffers (Buffers in /proc/meminfo) cache Memory used by the page cache and slabs (Cached and SReclaimable in /proc/meminfo) buff/cache Sum of buffers and cache available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free) -b, --bytes Display the amount of memory in bytes. -k, --kibi Display the amount of memory in kibibytes. This is the default. -m, --mebi Display the amount of memory in mebibytes. -g, --gibi Display the amount of memory in gibibytes. --tebi Display the amount of memory in tebibytes. --pebi Display the amount of memory in pebibytes. --kilo Display the amount of memory in kilobytes. Implies --si. --mega Display the amount of memory in megabytes. Implies --si. --giga Display the amount of memory in gigabytes. Implies --si. --tera Display the amount of memory in terabytes. Implies --si. --peta Display the amount of memory in petabytes. Implies --si. -h, --human Show all output fields automatically scaled to the shortest three-digit unit, and display the unit in the printout. The following units are used. B = bytes Ki = kibibyte Mi = mebibyte Gi = gibibyte Ti = tebibyte Pi = pebibyte If the unit is missing, and you have exbibytes of RAM or swap, the number is shown in tebibytes and the columns might not be aligned with the header. -w, --wide Switch to the wide mode. The wide mode produces lines longer than 80 characters. In this mode buffers and cache are reported in two separate columns. -c, --count count Display the result count times. Requires the -s option. -l, --lohi Show detailed low and high memory statistics. -L, --line Show output on a single line, often used with the -s option to show memory statistics repeatedly. -s, --seconds delay Continuously display the result delay seconds apart. You may actually specify any floating point number for delay using either . or , for decimal point. usleep(3) is used for microsecond resolution delay times. --si Use kilo, mega, giga, etc. (powers of 1000) instead of kibi, mebi, gibi (powers of 1024). -t, --total Display a line showing the column totals. -v, --committed Display a line showing the memory commit limit and amount of committed/uncommitted memory. The total column on this line will display the memory commit limit. This line is relevant if memory overcommit is disabled. --help Print help. -V, --version Display version information.
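The sampling and formatting options combine. For example, the following illustrative invocation prints five human-readable, wide-format samples one second apart:

       free -h -w -s 1 -c 5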
# free

> Display amount of free and used memory in the system. More information: https://manned.org/free.

* Display system memory:

`free`

* Display memory in bytes/kibibytes/mebibytes/gibibytes:

`free -{{b|k|m|g}}`

* Display memory in human-readable units:

`free -h`

* Refresh the output every 2 seconds:

`free -s {{2}}`
id
If no user operand is provided, the id utility shall write the user and group IDs and the corresponding user and group names of the invoking process to standard output. If the effective and real IDs do not match, both shall be written. If multiple groups are supported by the underlying system (see the description of {NGROUPS_MAX} in the System Interfaces volume of POSIX.1‐2017), the supplementary group affiliations of the invoking process shall also be written. If a user operand is provided and the process has appropriate privileges, the user and group IDs of the selected user shall be written. In this case, effective IDs shall be assumed to be identical to real IDs. If the selected user has more than one allowable group membership listed in the group database, these shall be written in the same manner as the supplementary groups described in the preceding paragraph. The id utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -G Output all different group IDs (effective, real, and supplementary) only, using the format "%u\n". If there is more than one distinct group affiliation, output each such affiliation, using the format " %u", before the <newline> is output. -g Output only the effective group ID, using the format "%u\n". -n Output the name in the format "%s" instead of the numeric ID using the format "%u". -r Output the real ID instead of the effective ID. -u Output only the effective user ID, using the format "%u\n".
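Combining -n with the other options turns numeric IDs into names. A brief sketch (the user name is illustrative):

       # Effective user name of the invoking process:
       id -un
       # All group memberships of a given user, as names:
       id -Gn alice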
# id > Display current user and group identity. More information: > https://www.gnu.org/software/coreutils/id. * Display current user's ID (UID), group ID (GID) and groups to which they belong: `id` * Display the current user identity as a number: `id -u` * Display the current group identity as a number: `id -g` * Display an arbitrary user's ID (UID), group ID (GID) and groups to which they belong: `id {{username}}`
readelf
readelf displays information about one or more ELF format object files. The options control what particular information to display. elffile... are the object files to be examined. 32-bit and 64-bit ELF files are supported, as are archives containing ELF files. This program performs a similar function to objdump but it goes into more detail and it exists independently of the BFD library, so if there is a bug in BFD then readelf will not be affected. The long and short forms of options, shown here as alternatives, are equivalent. At least one option besides -v or -H must be given. -a --all Equivalent to specifying --file-header, --program-headers, --sections, --symbols, --relocs, --dynamic, --notes, --version-info, --arch-specific, --unwind, --section-groups and --histogram. Note - this option does not enable --use-dynamic itself, so if that option is not present on the command line then dynamic symbols and dynamic relocs will not be displayed. -h --file-header Displays the information contained in the ELF header at the start of the file. -l --program-headers --segments Displays the information contained in the file's segment headers, if it has any. --quiet Suppress "no symbols" diagnostic. -S --sections --section-headers Displays the information contained in the file's section headers, if it has any. -g --section-groups Displays the information contained in the file's section groups, if it has any. -t --section-details Displays the detailed section information. Implies -S. -s --symbols --syms Displays the entries in the symbol table section of the file, if it has one. If a symbol has version information associated with it then this is displayed as well. The version string is displayed as a suffix to the symbol name, preceded by an @ character. For example foo@VER_1. If the version is the default version to be used when resolving unversioned references to the symbol then it is displayed as a suffix preceded by two @ characters. For example foo@@VER_2. --dyn-syms Displays the entries in the dynamic symbol table section of the file, if it has one. The output format is the same as the format used by the --syms option. --lto-syms Displays the contents of any LTO symbol tables in the file. --sym-base=[0|8|10|16] Forces the size field of the symbol table to use the given base. Any unrecognized value is treated as 0. --sym-base=0 represents the default and legacy behaviour. This will output sizes as decimal for numbers less than 100000. For sizes 100000 and greater hexadecimal notation will be used with a 0x prefix. --sym-base=8 will give the symbol sizes in octal. --sym-base=10 will always give the symbol sizes in decimal. --sym-base=16 will always give the symbol sizes in hexadecimal with a 0x prefix. -C --demangle[=style] Decode (demangle) low-level symbol names into user-level names. This makes C++ function names readable. Different compilers have different mangling styles. The optional demangling style argument can be used to choose an appropriate demangling style for your compiler. --no-demangle Do not demangle low-level symbol names. This is the default. --recurse-limit --no-recurse-limit --recursion-limit --no-recursion-limit Enables or disables a limit on the amount of recursion performed whilst demangling strings. Since the name mangling formats allow for an infinite level of recursion it is possible to create strings whose decoding will exhaust the amount of stack space available on the host machine, triggering a memory fault. 
The limit tries to prevent this from happening by restricting recursion to 2048 levels of nesting. The default is for this limit to be enabled, but disabling it may be necessary in order to demangle truly complicated names. Note however that if the recursion limit is disabled then stack exhaustion is possible and any bug reports about such an event will be rejected. -U [d|i|l|e|x|h] --unicode=[default|invalid|locale|escape|hex|highlight] Controls the display of non-ASCII characters in identifier names. The default (--unicode=locale or --unicode=default) is to treat them as multibyte characters and display them in the current locale. All other versions of this option treat the bytes as UTF-8 encoded values and attempt to interpret them. If they cannot be interpreted or if the --unicode=invalid option is used then they are displayed as a sequence of hex bytes, enclosed in curly braces. Using the --unicode=escape option will display the characters as Unicode escape sequences (\uxxxx). Using the --unicode=hex option will display the characters as hex byte sequences enclosed between angle brackets. Using the --unicode=highlight option will display the characters as Unicode escape sequences and will also highlight them in red, assuming that colouring is supported by the output device. The colouring is intended to draw attention to the presence of Unicode sequences when they might not be expected. -e --headers Display all the headers in the file. Equivalent to -h -l -S. -n --notes Displays the contents of the NOTE segments and/or sections, if any. -r --relocs Displays the contents of the file's relocation section, if it has one. -u --unwind Displays the contents of the file's unwind section, if it has one. Only the unwind sections for IA64 ELF files, as well as ARM unwind tables (".ARM.exidx" / ".ARM.extab") are currently supported. If support is not yet implemented for your architecture you could try dumping the contents of the .eh_frame section using the --debug-dump=frames or --debug-dump=frames-interp options. -d --dynamic Displays the contents of the file's dynamic section, if it has one. -V --version-info Displays the contents of the version sections in the file, if they exist. -A --arch-specific Displays architecture-specific information in the file, if there is any. -D --use-dynamic When displaying symbols, this option makes readelf use the symbol hash tables in the file's dynamic section, rather than the symbol table sections. When displaying relocations, this option makes readelf display the dynamic relocations rather than the static relocations. -L --lint --enable-checks Displays warning messages about possible problems with the file(s) being examined. If used on its own then all of the contents of the file(s) will be examined. If used with one of the dumping options then the warning messages will only be produced for the things being displayed. -x <number or name> --hex-dump=<number or name> Displays the contents of the indicated section as hexadecimal bytes. A number identifies a particular section by index in the section table; any other string identifies all sections with that name in the object file. -R <number or name> --relocated-dump=<number or name> Displays the contents of the indicated section as hexadecimal bytes. A number identifies a particular section by index in the section table; any other string identifies all sections with that name in the object file. The contents of the section will be relocated before they are displayed. 
-p <number or name> --string-dump=<number or name> Displays the contents of the indicated section as printable strings. A number identifies a particular section by index in the section table; any other string identifies all sections with that name in the object file. -z --decompress Requests that the section(s) being dumped by x, R or p options are decompressed before being displayed. If the section(s) are not compressed then they are displayed as is. -c --archive-index Displays the file symbol index information contained in the header part of binary archives. Performs the same function as the t command to ar, but without using the BFD library. -w[lLiaprmfFsOoRtUuTgAckK] --debug-dump[=rawline,=decodedline,=info,=abbrev,=pubnames,=aranges,=macro,=frames,=frames-interp,=str,=str-offsets,=loc,=Ranges,=pubtypes,=trace_info,=trace_abbrev,=trace_aranges,=gdb_index,=addr,=cu_index,=links,=follow-links] Displays the contents of the DWARF debug sections in the file, if any are present. Compressed debug sections are automatically decompressed (temporarily) before they are displayed. If one or more of the optional letters or words follows the switch then only those type(s) of data will be dumped. The letters and words refer to the following information: "a" "=abbrev" Displays the contents of the .debug_abbrev section. "A" "=addr" Displays the contents of the .debug_addr section. "c" "=cu_index" Displays the contents of the .debug_cu_index and/or .debug_tu_index sections. "f" "=frames" Display the raw contents of a .debug_frame section. "F" "=frames-interp" Display the interpreted contents of a .debug_frame section. "g" "=gdb_index" Displays the contents of the .gdb_index and/or .debug_names sections. "i" "=info" Displays the contents of the .debug_info section. Note: the output from this option can also be restricted by the use of the --dwarf-depth and --dwarf-start options. "k" "=links" Displays the contents of the .gnu_debuglink, .gnu_debugaltlink and .debug_sup sections, if any of them are present. Also displays any links to separate dwarf object files (dwo), if they are specified by the DW_AT_GNU_dwo_name or DW_AT_dwo_name attributes in the .debug_info section. "K" "=follow-links" Display the contents of any selected debug sections that are found in linked, separate debug info file(s). This can result in multiple versions of the same debug section being displayed if it exists in more than one file. In addition, when displaying DWARF attributes, if a form is found that references the separate debug info file, then the referenced contents will also be displayed. Note - in some distributions this option is enabled by default. It can be disabled via the N debug option. The default can be chosen when configuring the binutils via the --enable-follow-debug-links=yes or --enable-follow-debug-links=no options. If these are not used then the default is to enable the following of debug links. Note - if support for the debuginfod protocol was enabled when the binutils were built then this option will also include an attempt to contact any debuginfod servers mentioned in the DEBUGINFOD_URLS environment variable. This could take some time to resolve. This behaviour can be disabled via the =do-not-use-debuginfod debug option. "N" "=no-follow-links" Disables the following of links to separate debug info files. "D" "=use-debuginfod" Enables contacting debuginfod servers if there is a need to follow debug links. This is the default behaviour. 
"E" "=do-not-use-debuginfod" Disables contacting debuginfod servers when there is a need to follow debug links. "l" "=rawline" Displays the contents of the .debug_line section in a raw format. "L" "=decodedline" Displays the interpreted contents of the .debug_line section. "m" "=macro" Displays the contents of the .debug_macro and/or .debug_macinfo sections. "o" "=loc" Displays the contents of the .debug_loc and/or .debug_loclists sections. "O" "=str-offsets" Displays the contents of the .debug_str_offsets section. "p" "=pubnames" Displays the contents of the .debug_pubnames and/or .debug_gnu_pubnames sections. "r" "=aranges" Displays the contents of the .debug_aranges section. "R" "=Ranges" Displays the contents of the .debug_ranges and/or .debug_rnglists sections. "s" "=str" Displays the contents of the .debug_str, .debug_line_str and/or .debug_str_offsets sections. "t" "=pubtype" Displays the contents of the .debug_pubtypes and/or .debug_gnu_pubtypes sections. "T" "=trace_aranges" Displays the contents of the .trace_aranges section. "u" "=trace_abbrev" Displays the contents of the .trace_abbrev section. "U" "=trace_info" Displays the contents of the .trace_info section. Note: displaying the contents of .debug_static_funcs, .debug_static_vars and debug_weaknames sections is not currently supported. --dwarf-depth=n Limit the dump of the ".debug_info" section to n children. This is only useful with --debug-dump=info. The default is to print all DIEs; the special value 0 for n will also have this effect. With a non-zero value for n, DIEs at or deeper than n levels will not be printed. The range for n is zero-based. --dwarf-start=n Print only DIEs beginning with the DIE numbered n. This is only useful with --debug-dump=info. If specified, this option will suppress printing of any header information and all DIEs before the DIE numbered n. Only siblings and children of the specified DIE will be printed. This can be used in conjunction with --dwarf-depth. -P --process-links Display the contents of non-debug sections found in separate debuginfo files that are linked to the main file. This option automatically implies the -wK option, and only sections requested by other command line options will be displayed. --ctf[=section] Display the contents of the specified CTF section. CTF sections themselves contain many subsections, all of which are displayed in order. By default, display the name of the section named .ctf, which is the name emitted by ld. --ctf-parent=member If the CTF section contains ambiguously-defined types, it will consist of an archive of many CTF dictionaries, all inheriting from one dictionary containing unambiguous types. This member is by default named .ctf, like the section containing it, but it is possible to change this name using the "ctf_link_set_memb_name_changer" function at link time. When looking at CTF archives that have been created by a linker that uses the name changer to rename the parent archive member, --ctf-parent can be used to specify the name used for the parent. --ctf-symbols=section --ctf-strings=section Specify the name of another section from which the CTF file can inherit strings and symbols. By default, the ".symtab" and its linked string table are used. If either of --ctf-symbols or --ctf-strings is specified, the other must be specified as well. -I --histogram Display a histogram of bucket list lengths when displaying the contents of the symbol tables. -v --version Display the version number of readelf. 
-W --wide Don't break output lines to fit into 80 columns. By default readelf breaks section header and segment listing lines for 64-bit ELF files, so that they fit into 80 columns. This option causes readelf to print each section header and each segment on a single line, which is far more readable on terminals wider than 80 columns. -T --silent-truncation Normally when readelf is displaying a symbol name, and it has to truncate the name to fit into an 80 column display, it will add a suffix of "[...]" to the name. This command line option disables this behaviour, allowing 5 more characters of the name to be displayed and restoring the old behaviour of readelf (prior to release 2.35). -H --help Display the command-line options understood by readelf. @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively.
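A few representative invocations (the file name is a placeholder):

       # File header, program headers, and section headers together:
       readelf -h -l -S ./a.out
       # Printable strings from a named section:
       readelf -p .comment ./a.out
       # Decoded DWARF line-number information:
       readelf --debug-dump=decodedline ./a.out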
# readelf

> Displays information about ELF files. More information: http://man7.org/linux/man-pages/man1/readelf.1.html.

* Display all information about the ELF file:

`readelf --all {{path/to/binary}}`

* Display all the headers present in the ELF file:

`readelf --headers {{path/to/binary}}`

* Display the entries in the symbol table section of the ELF file, if it has one:

`readelf --symbols {{path/to/binary}}`

* Display the information contained in the ELF header at the start of the file:

`readelf --file-header {{path/to/binary}}`
ld
ld combines a number of object and archive files, relocates their data and ties up symbol references. Usually the last step in compiling a program is to run ld. ld accepts Linker Command Language files written in a superset of AT&T's Link Editor Command Language syntax, to provide explicit and total control over the linking process. This man page does not describe the command language; see the ld entry in "info" for full details on the command language and on other aspects of the GNU linker. This version of ld uses the general purpose BFD libraries to operate on object files. This allows ld to read, combine, and write object files in many different formats---for example, COFF or "a.out". Different formats may be linked together to produce any available kind of object file. Aside from its flexibility, the GNU linker is more helpful than other linkers in providing diagnostic information. Many linkers abandon execution immediately upon encountering an error; whenever possible, ld continues executing, allowing you to identify other errors (or, in some cases, to get an output file in spite of the error). The GNU linker ld is meant to cover a broad range of situations, and to be as compatible as possible with other linkers. As a result, you have many choices to control its behavior. The linker supports a plethora of command-line options, but in actual practice few of them are used in any particular context. For instance, a frequent use of ld is to link standard Unix object files on a standard, supported Unix system. On such a system, to link a file "hello.o": ld -o <output> /lib/crt0.o hello.o -lc This tells ld to produce a file called output as the result of linking the file "/lib/crt0.o" with "hello.o" and the library "libc.a", which will come from the standard search directories. (See the discussion of the -l option below.) Some of the command-line options to ld may be specified at any point in the command line. However, options which refer to files, such as -l or -T, cause the file to be read at the point at which the option appears in the command line, relative to the object files and other file options. Repeating non-file options with a different argument will either have no further effect, or override prior occurrences (those further to the left on the command line) of that option. Options which may be meaningfully specified more than once are noted in the descriptions below. Non-option arguments are object files or archives which are to be linked together. They may follow, precede, or be mixed in with command-line options, except that an object file argument may not be placed between an option and its argument. Usually the linker is invoked with at least one object file, but you can specify other forms of binary input files using -l, -R, and the script command language. If no binary input files at all are specified, the linker does not produce any output, and issues the message No input files. If the linker cannot recognize the format of an object file, it will assume that it is a linker script. A script specified in this way augments the main linker script used for the link (either the default linker script or the one specified by using -T). This feature permits the linker to link against a file which appears to be an object or an archive, but actually merely defines some symbol values, or uses "INPUT" or "GROUP" to load other objects. 
Specifying a script in this way merely augments the main linker script, with the extra commands placed after the main script; use the -T option to replace the default linker script entirely, but note the effect of the "INSERT" command. For options whose names are a single letter, option arguments must either follow the option letter without intervening whitespace, or be given as separate arguments immediately following the option that requires them. For options whose names are multiple letters, either one dash or two can precede the option name; for example, -trace-symbol and --trace-symbol are equivalent. Note---there is one exception to this rule. Multiple letter options that start with a lower case 'o' can only be preceded by two dashes. This is to reduce confusion with the -o option. So for example -omagic sets the output file name to magic whereas --omagic sets the NMAGIC flag on the output. Arguments to multiple-letter options must either be separated from the option name by an equals sign, or be given as separate arguments immediately following the option that requires them. For example, --trace-symbol foo and --trace-symbol=foo are equivalent. Unique abbreviations of the names of multiple-letter options are accepted. Note---if the linker is being invoked indirectly, via a compiler driver (e.g. gcc) then all the linker command-line options should be prefixed by -Wl, (or whatever is appropriate for the particular compiler driver) like this: gcc -Wl,--start-group foo.o bar.o -Wl,--end-group This is important, because otherwise the compiler driver program may silently drop the linker options, resulting in a bad link. Confusion may also arise when passing options that require values through a driver, as the use of a space between option and argument acts as a separator, and causes the driver to pass only the option to the linker and the argument to the compiler. In this case, it is simplest to use the joined forms of both single- and multiple-letter options, such as: gcc foo.o bar.o -Wl,-eENTRY -Wl,-Map=a.map Here is a table of the generic command-line switches accepted by the GNU linker: @file Read command-line options from file. The options read are inserted in place of the original @file option. If file does not exist, or cannot be read, then the option will be treated literally, and not removed. Options in file are separated by whitespace. A whitespace character may be included in an option by surrounding the entire option in either single or double quotes. Any character (including a backslash) may be included by prefixing the character to be included with a backslash. The file may itself contain additional @file options; any such options will be processed recursively. -a keyword This option is supported for HP/UX compatibility. The keyword argument must be one of the strings archive, shared, or default. -aarchive is functionally equivalent to -Bstatic, and the other two keywords are functionally equivalent to -Bdynamic. This option may be used any number of times. --audit AUDITLIB Adds AUDITLIB to the "DT_AUDIT" entry of the dynamic section. AUDITLIB is not checked for existence, nor will it use the DT_SONAME specified in the library. If specified multiple times "DT_AUDIT" will contain a colon separated list of audit interfaces to use. If the linker finds an object with an audit entry while searching for shared libraries, it will add a corresponding "DT_DEPAUDIT" entry in the output file. This option is only meaningful on ELF platforms supporting the rtld-audit interface. 
-b input-format --format=input-format ld may be configured to support more than one kind of object file. If your ld is configured this way, you can use the -b option to specify the binary format for input object files that follow this option on the command line. Even when ld is configured to support alternative object formats, you don't usually need to specify this, as ld should be configured to expect as a default input format the most usual format on each machine. input-format is a text string, the name of a particular format supported by the BFD libraries. (You can list the available binary formats with objdump -i.) You may want to use this option if you are linking files with an unusual binary format. You can also use -b to switch formats explicitly (when linking object files of different formats), by including -b input-format before each group of object files in a particular format. The default format is taken from the environment variable "GNUTARGET". You can also define the input format from a script, using the command "TARGET"; -c MRI-commandfile --mri-script=MRI-commandfile For compatibility with linkers produced by MRI, ld accepts script files written in an alternate, restricted command language, described in the MRI Compatible Script Files section of GNU ld documentation. Introduce MRI script files with the option -c; use the -T option to run linker scripts written in the general-purpose ld scripting language. If MRI-commandfile does not exist, ld looks for it in the directories specified by any -L options. -d -dc -dp These three options are equivalent; multiple forms are supported for compatibility with other linkers. They assign space to common symbols even if a relocatable output file is specified (with -r). The script command "FORCE_COMMON_ALLOCATION" has the same effect. --depaudit AUDITLIB -P AUDITLIB Adds AUDITLIB to the "DT_DEPAUDIT" entry of the dynamic section. AUDITLIB is not checked for existence, nor will it use the DT_SONAME specified in the library. If specified multiple times "DT_DEPAUDIT" will contain a colon separated list of audit interfaces to use. This option is only meaningful on ELF platforms supporting the rtld-audit interface. The -P option is provided for Solaris compatibility. --enable-non-contiguous-regions This option avoids generating an error if an input section does not fit a matching output section. The linker tries to allocate the input section to subsequent matching output sections, and generates an error only if no output section is large enough. This is useful when several non-contiguous memory regions are available and the input section does not require a particular one. The order in which input sections are evaluated does not change, for instance: MEMORY { MEM1 (rwx) : ORIGIN = 0x1000, LENGTH = 0x14 MEM2 (rwx) : ORIGIN = 0x1000, LENGTH = 0x40 MEM3 (rwx) : ORIGIN = 0x2000, LENGTH = 0x40 } SECTIONS { mem1 : { *(.data.*); } > MEM1 mem2 : { *(.data.*); } > MEM2 mem3 : { *(.data.*); } > MEM3 } with input sections: .data.1: size 8 .data.2: size 0x10 .data.3: size 4 results in .data.1 being assigned to mem1, and .data.2 and .data.3 to mem2, even though .data.3 would fit in mem3. This option is incompatible with INSERT statements because it changes the way input sections are mapped to output sections. 
--enable-non-contiguous-regions-warnings This option enables warnings when "--enable-non-contiguous-regions" allows possibly unexpected matches in sections mapping, potentially leading to silently discarding a section instead of failing because it does not fit any output region. -e entry --entry=entry Use entry as the explicit symbol for beginning execution of your program, rather than the default entry point. If there is no symbol named entry, the linker will try to parse entry as a number, and use that as the entry address (the number will be interpreted in base 10; you may use a leading 0x for base 16, or a leading 0 for base 8). --exclude-libs lib,lib,... Specifies a list of archive libraries from which symbols should not be automatically exported. The library names may be delimited by commas or colons. Specifying "--exclude-libs ALL" excludes symbols in all archive libraries from automatic export. This option is available only for the i386 PE targeted port of the linker and for ELF targeted ports. For i386 PE, symbols explicitly listed in a .def file are still exported, regardless of this option. For ELF targeted ports, symbols affected by this option will be treated as hidden. --exclude-modules-for-implib module,module,... Specifies a list of object files or archive members, from which symbols should not be automatically exported, but which should be copied wholesale into the import library being generated during the link. The module names may be delimited by commas or colons, and must match exactly the filenames used by ld to open the files; for archive members, this is simply the member name, but for object files the name listed must include and match precisely any path used to specify the input file on the linker's command-line. This option is available only for the i386 PE targeted port of the linker. Symbols explicitly listed in a .def file are still exported, regardless of this option. -E --export-dynamic --no-export-dynamic When creating a dynamically linked executable, using the -E option or the --export-dynamic option causes the linker to add all symbols to the dynamic symbol table. The dynamic symbol table is the set of symbols which are visible from dynamic objects at run time. If you do not use either of these options (or use the --no-export-dynamic option to restore the default behavior), the dynamic symbol table will normally contain only those symbols which are referenced by some dynamic object mentioned in the link. If you use "dlopen" to load a dynamic object which needs to refer back to the symbols defined by the program, rather than some other dynamic object, then you will probably need to use this option when linking the program itself. You can also use the dynamic list to control what symbols should be added to the dynamic symbol table if the output format supports it. See the description of --dynamic-list. Note that this option is specific to ELF targeted ports. PE targets support a similar function to export all symbols from a DLL or EXE; see the description of --export-all-symbols below. --export-dynamic-symbol=glob When creating a dynamically linked executable, symbols matching glob will be added to the dynamic symbol table. When creating a shared library, references to symbols matching glob will not be bound to the definitions within the shared library. This option is a no-op when creating a shared library and -Bsymbolic or --dynamic-list are not specified. This option is only meaningful on ELF platforms which support shared libraries. 
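For instance, a program that loads plugins with "dlopen" and expects them to resolve symbols defined in the main executable is typically linked through the compiler driver like this (file names are illustrative):

       gcc -o host main.o -Wl,--export-dynamic -ldl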
--export-dynamic-symbol-list=file Specify a --export-dynamic-symbol for each pattern in the file. The format of the file is the same as the version node without scope and node name. See VERSION for more information. -EB Link big-endian objects. This affects the default output format. -EL Link little-endian objects. This affects the default output format. -f name --auxiliary=name When creating an ELF shared object, set the internal DT_AUXILIARY field to the specified name. This tells the dynamic linker that the symbol table of the shared object should be used as an auxiliary filter on the symbol table of the shared object name. If you later link a program against this filter object, then, when you run the program, the dynamic linker will see the DT_AUXILIARY field. If the dynamic linker resolves any symbols from the filter object, it will first check whether there is a definition in the shared object name. If there is one, it will be used instead of the definition in the filter object. The shared object name need not exist. Thus the shared object name may be used to provide an alternative implementation of certain functions, perhaps for debugging or for machine-specific performance. This option may be specified more than once. The DT_AUXILIARY entries will be created in the order in which they appear on the command line. -F name --filter=name When creating an ELF shared object, set the internal DT_FILTER field to the specified name. This tells the dynamic linker that the symbol table of the shared object which is being created should be used as a filter on the symbol table of the shared object name. If you later link a program against this filter object, then, when you run the program, the dynamic linker will see the DT_FILTER field. The dynamic linker will resolve symbols according to the symbol table of the filter object as usual, but it will actually link to the definitions found in the shared object name. Thus the filter object can be used to select a subset of the symbols provided by the object name. Some older linkers used the -F option throughout a compilation toolchain for specifying object-file format for both input and output object files. The GNU linker uses other mechanisms for this purpose: the -b, --format, --oformat options, the "TARGET" command in linker scripts, and the "GNUTARGET" environment variable. The GNU linker will ignore the -F option when not creating an ELF shared object. -fini=name When creating an ELF executable or shared object, call NAME when the executable or shared object is unloaded, by setting DT_FINI to the address of the function. By default, the linker uses "_fini" as the function to call. -g Ignored. Provided for compatibility with other tools. -G value --gpsize=value Set the maximum size of objects to be optimized using the GP register to size. This is only meaningful for object file formats such as MIPS ELF that support putting large and small objects into different sections. This is ignored for other object file formats. -h name -soname=name When creating an ELF shared object, set the internal DT_SONAME field to the specified name. When an executable is linked with a shared object which has a DT_SONAME field, then when the executable is run the dynamic linker will attempt to load the shared object specified by the DT_SONAME field rather than using the file name given to the linker. -i Perform an incremental link (same as option -r). 
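A typical use of -soname, invoked through the compiler driver (library names are illustrative):

       gcc -shared -o libdemo.so.1.0 demo.o -Wl,-soname,libdemo.so.1
       # Confirm the recorded DT_SONAME entry:
       readelf -d libdemo.so.1.0 | grep SONAME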
-init=name When creating an ELF executable or shared object, call NAME when the executable or shared object is loaded, by setting DT_INIT to the address of the function. By default, the linker uses "_init" as the function to call. -l namespec --library=namespec Add the archive or object file specified by namespec to the list of files to link. This option may be used any number of times. If namespec is of the form :filename, ld will search the library path for a file called filename, otherwise it will search the library path for a file called libnamespec.a. On systems which support shared libraries, ld may also search for files other than libnamespec.a. Specifically, on ELF and SunOS systems, ld will search a directory for a library called libnamespec.so before searching for one called libnamespec.a. (By convention, a ".so" extension indicates a shared library.) Note that this behavior does not apply to :filename, which always specifies a file called filename. The linker will search an archive only once, at the location where it is specified on the command line. If the archive defines a symbol which was undefined in some object which appeared before the archive on the command line, the linker will include the appropriate file(s) from the archive. However, an undefined symbol in an object appearing later on the command line will not cause the linker to search the archive again. See the -( option for a way to force the linker to search archives multiple times. You may list the same archive multiple times on the command line. This type of archive searching is standard for Unix linkers. However, if you are using ld on AIX, note that it is different from the behaviour of the AIX linker. -L searchdir --library-path=searchdir Add path searchdir to the list of paths that ld will search for archive libraries and ld control scripts. You may use this option any number of times. The directories are searched in the order in which they are specified on the command line. Directories specified on the command line are searched before the default directories. All -L options apply to all -l options, regardless of the order in which the options appear. -L options do not affect how ld searches for a linker script unless the -T option is specified. If searchdir begins with "=" or $SYSROOT, then this prefix will be replaced by the sysroot prefix, controlled by the --sysroot option, or specified when the linker is configured. The default set of paths searched (without being specified with -L) depends on which emulation mode ld is using, and in some cases also on how it was configured. The paths can also be specified in a link script with the "SEARCH_DIR" command. Directories specified this way are searched at the point in which the linker script appears in the command line. -m emulation Emulate the emulation linker. You can list the available emulations with the --verbose or -V options. If the -m option is not used, the emulation is taken from the "LDEMULATION" environment variable, if that is defined. Otherwise, the default emulation depends upon how the linker was configured. -M --print-map Print a link map to the standard output. A link map provides information about the link, including the following: • Where object files are mapped into memory. • How common symbols are allocated. • All archive members included in the link, with a mention of the symbol which caused the archive member to be brought in. • The values assigned to symbols. 
Note - symbols whose values are computed by an expression which involves a reference to a previous value of the same symbol may not have the correct result displayed in the link map. This is because the linker discards intermediate results and only retains the final value of an expression. Under such circumstances the linker will display the final value enclosed by square brackets. Thus for example a linker script containing: foo = 1 foo = foo * 4 foo = foo + 8 will produce the following output in the link map if the -M option is used: 0x00000001 foo = 0x1 [0x0000000c] foo = (foo * 0x4) [0x0000000c] foo = (foo + 0x8) See Expressions for more information about expressions in linker scripts. • How GNU properties are merged. When the linker merges input .note.gnu.property sections into one output .note.gnu.property section, some properties are removed or updated. These actions are reported in the link map. For example: Removed property 0xc0000002 to merge foo.o (0x1) and bar.o (not found) This indicates that property 0xc0000002 is removed from output when merging properties in foo.o, whose property 0xc0000002 value is 0x1, and bar.o, which doesn't have property 0xc0000002. Updated property 0xc0010001 (0x1) to merge foo.o (0x1) and bar.o (0x1) This indicates that property 0xc0010001 value is updated to 0x1 in output when merging properties in foo.o, whose 0xc0010001 property value is 0x1, and bar.o, whose 0xc0010001 property value is 0x1. --print-map-discarded --no-print-map-discarded Print (or do not print) the list of discarded and garbage collected sections in the link map. Enabled by default. -n --nmagic Turn off page alignment of sections, and disable linking against shared libraries. If the output format supports Unix style magic numbers, mark the output as "NMAGIC". -N --omagic Set the text and data sections to be readable and writable. Also, do not page-align the data segment, and disable linking against shared libraries. If the output format supports Unix style magic numbers, mark the output as "OMAGIC". Note: Although a writable text section is allowed for PE-COFF targets, it does not conform to the format specification published by Microsoft. --no-omagic This option negates most of the effects of the -N option. It sets the text section to be read-only, and forces the data segment to be page-aligned. Note - this option does not enable linking against shared libraries. Use -Bdynamic for this. -o output --output=output Use output as the name for the program produced by ld; if this option is not specified, the name a.out is used by default. The script command "OUTPUT" can also specify the output file name. --dependency-file=depfile Write a dependency file to depfile. This file contains a rule suitable for "make" describing the output file and all the input files that were read to produce it. The output is similar to the compiler's output with -M -MP. Note that there is no option like the compiler's -MM, to exclude "system files" (which is not a well-specified concept in the linker, unlike "system headers" in the compiler). So the output from --dependency-file is always specific to the exact state of the installation where it was produced, and should not be copied into distributed makefiles without careful editing. -O level If level is a numeric value greater than zero ld optimizes the output. This might take significantly longer and therefore probably should only be enabled for the final binary. At the moment this option only affects ELF shared library generation. 
Future releases of the linker may make more use of this option. Also currently there is no difference in the linker's behaviour for different non-zero values of this option. Again this may change with future releases. -plugin name Involve a plugin in the linking process. The name parameter is the absolute filename of the plugin. Usually this parameter is automatically added by the compiler, when using link time optimization, but users can also add their own plugins if they so wish. Note that the location of the compiler originated plugins is different from the place where the ar, nm and ranlib programs search for their plugins. In order for those commands to make use of a compiler based plugin it must first be copied into the ${libdir}/bfd-plugins directory. All gcc based linker plugins are backward compatible, so it is sufficient to just copy in the newest one. --push-state The --push-state option allows one to preserve the current state of the flags which govern the input file handling so that they can all be restored with one corresponding --pop-state option. The options which are covered are: -Bdynamic, -Bstatic, -dn, -dy, -call_shared, -non_shared, -static, -N, -n, --whole-archive, --no-whole-archive, -r, -Ur, --copy-dt-needed-entries, --no-copy-dt-needed-entries, --as-needed, --no-as-needed, and -a. One target for this option is specifications for pkg-config: when used with the --libs option, all possibly needed libraries are listed and then always linked. It is better to return something as follows: -Wl,--push-state,--as-needed -libone -libtwo -Wl,--pop-state --pop-state Undoes the effect of --push-state, restoring the previous values of the flags governing input file handling. -q --emit-relocs Leave relocation sections and contents in fully linked executables. Post link analysis and optimization tools may need this information in order to perform correct modifications of executables. This results in larger executables. This option is currently only supported on ELF platforms. --force-dynamic Force the output file to have dynamic sections. This option is specific to VxWorks targets. -r --relocatable Generate relocatable output---i.e., generate an output file that can in turn serve as input to ld. This is often called partial linking. As a side effect, in environments that support standard Unix magic numbers, this option also sets the output file's magic number to "OMAGIC". If this option is not specified, an absolute file is produced. When linking C++ programs, this option will not resolve references to constructors; to do that, use -Ur. When an input file does not have the same format as the output file, partial linking is only supported if that input file does not contain any relocations. Different output formats can have further restrictions; for example some "a.out"-based formats do not support partial linking with input files in other formats at all. This option does the same thing as -i. -R filename --just-symbols=filename Read symbol names and their addresses from filename, but do not relocate it or include it in the output. This allows your output file to refer symbolically to absolute locations of memory defined in other programs. You may use this option more than once. For compatibility with other ELF linkers, if the -R option is followed by a directory name, rather than a file name, it is treated as the -rpath option. -s --strip-all Omit all symbol information from the output file. 
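A minimal partial-link sketch (object names are illustrative); the combined file can be fed back into a later link:

       ld -r -o combined.o parse.o lex.o util.o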
-S --strip-debug Omit debugger symbol information (but not all symbols) from the output file. --strip-discarded --no-strip-discarded Omit (or do not omit) global symbols defined in discarded sections. Enabled by default. -t --trace Print the names of the input files as ld processes them. If -t is given twice then members within archives are also printed. -t output is useful to generate a list of all the object files and scripts involved in linking, for example, when packaging files for a linker bug report. -T scriptfile --script=scriptfile Use scriptfile as the linker script. This script replaces ld's default linker script (rather than adding to it), so scriptfile must specify everything necessary to describe the output file. If scriptfile does not exist in the current directory, "ld" looks for it in the directories specified by any preceding -L options. Multiple -T options accumulate. -dT scriptfile --default-script=scriptfile Use scriptfile as the default linker script. This option is similar to the --script option except that processing of the script is delayed until after the rest of the command line has been processed. This allows options placed after the --default-script option on the command line to affect the behaviour of the linker script, which can be important when the linker command line cannot be directly controlled by the user. (e.g. because the command line is being constructed by another tool, such as gcc). -u symbol --undefined=symbol Force symbol to be entered in the output file as an undefined symbol. Doing this may, for example, trigger linking of additional modules from standard libraries. -u may be repeated with different option arguments to enter additional undefined symbols. This option is equivalent to the "EXTERN" linker script command. If this option is being used to force additional modules to be pulled into the link, and if it is an error for the symbol to remain undefined, then the option --require-defined should be used instead. --require-defined=symbol Require that symbol is defined in the output file. This option is the same as option --undefined except that if symbol is not defined in the output file then the linker will issue an error and exit. The same effect can be achieved in a linker script by using "EXTERN", "ASSERT" and "DEFINED" together. This option can be used multiple times to require additional symbols. -Ur For anything other than C++ programs, this option is equivalent to -r: it generates relocatable output---i.e., an output file that can in turn serve as input to ld. When linking C++ programs, -Ur does resolve references to constructors, unlike -r. It does not work to use -Ur on files that were themselves linked with -Ur; once the constructor table has been built, it cannot be added to. Use -Ur only for the last partial link, and -r for the others. --orphan-handling=MODE Control how orphan sections are handled. An orphan section is one not specifically mentioned in a linker script. MODE can have any of the following values: "place" Orphan sections are placed into a suitable output section following the strategy described in Orphan Sections. The option --unique also affects how sections are placed. "discard" All orphan sections are discarded, by placing them in the /DISCARD/ section. "warn" The linker will place the orphan section as for "place" and also issue a warning. "error" The linker will exit with an error if any orphan section is found. The default if --orphan-handling is not given is "place". 
--unique[=SECTION] Creates a separate output section for every input section matching SECTION, or if the optional wildcard SECTION argument is missing, for every orphan input section. An orphan section is one not specifically mentioned in a linker script. You may use this option multiple times on the command line; it prevents the normal merging of input sections with the same name, overriding output section assignments in a linker script. -v --version -V Display the version number for ld. The -V option also lists the supported emulations. -x --discard-all Delete all local symbols. -X --discard-locals Delete all temporary local symbols. (These symbols start with system-specific local label prefixes, typically .L for ELF systems or L for traditional a.out systems.) -y symbol --trace-symbol=symbol Print the name of each linked file in which symbol appears. This option may be given any number of times. On many systems it is necessary to prepend an underscore. This option is useful when you have an undefined symbol in your link but don't know where the reference is coming from. -Y path Add path to the default library search path. This option exists for Solaris compatibility. -z keyword The recognized keywords are: call-nop=prefix-addr call-nop=suffix-nop call-nop=prefix-byte call-nop=suffix-byte Specify the 1-byte "NOP" padding when transforming indirect call to a locally defined function, foo, via its GOT slot. call-nop=prefix-addr generates "0x67 call foo". call-nop=suffix-nop generates "call foo 0x90". call-nop=prefix-byte generates "byte call foo". call-nop=suffix-byte generates "call foo byte". Supported for i386 and x86_64. cet-report=none cet-report=warning cet-report=error Specify how to report the missing GNU_PROPERTY_X86_FEATURE_1_IBT and GNU_PROPERTY_X86_FEATURE_1_SHSTK properties in input .note.gnu.property section. cet-report=none, which is the default, will make the linker not report missing properties in input files. cet-report=warning will make the linker issue a warning for missing properties in input files. cet-report=error will make the linker issue an error for missing properties in input files. Note that ibt will turn off the missing GNU_PROPERTY_X86_FEATURE_1_IBT property report and shstk will turn off the missing GNU_PROPERTY_X86_FEATURE_1_SHSTK property report. Supported for Linux/i386 and Linux/x86_64. combreloc nocombreloc Combine multiple dynamic relocation sections and sort to improve dynamic symbol lookup caching. Do not do this if nocombreloc. common nocommon Generate common symbols with STT_COMMON type during a relocatable link. Use STT_OBJECT type if nocommon. common-page-size=value Set the page size most commonly used to value. Memory image layout will be optimized to minimize memory pages if the system is using pages of this size. defs Report unresolved symbol references from regular object files. This is done even if the linker is creating a non-symbolic shared library. This option is the inverse of -z undefs. dynamic-undefined-weak nodynamic-undefined-weak Make undefined weak symbols dynamic when building a dynamic object, if they are referenced from a regular object file and not forced local by symbol visibility or versioning. Do not make them dynamic if nodynamic-undefined-weak. If neither option is given, a target may default to either option being in force, or make some other selection of undefined weak symbols dynamic. Not all targets support these options. execstack Marks the object as requiring executable stack. 
global This option is only meaningful when building a shared object. It makes the symbols defined by this shared object available for symbol resolution of subsequently loaded libraries. globalaudit This option is only meaningful when building a dynamic executable. This option marks the executable as requiring global auditing by setting the "DF_1_GLOBAUDIT" bit in the "DT_FLAGS_1" dynamic tag. Global auditing requires that any auditing library defined via the --depaudit or -P command-line options be run for all dynamic objects loaded by the application. ibtplt Generate Intel Indirect Branch Tracking (IBT) enabled PLT entries. Supported for Linux/i386 and Linux/x86_64. ibt Generate GNU_PROPERTY_X86_FEATURE_1_IBT in .note.gnu.property section to indicate compatibility with IBT. This also implies ibtplt. Supported for Linux/i386 and Linux/x86_64. indirect-extern-access noindirect-extern-access Generate GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS in .note.gnu.property section to indicate that object file requires canonical function pointers and cannot be used with copy relocation. This option also implies noextern-protected-data and nocopyreloc. Supported for i386 and x86-64. noindirect-extern-access removes GNU_PROPERTY_1_NEEDED_INDIRECT_EXTERN_ACCESS from .note.gnu.property section. initfirst This option is only meaningful when building a shared object. It marks the object so that its runtime initialization will occur before the runtime initialization of any other objects brought into the process at the same time. Similarly the runtime finalization of the object will occur after the runtime finalization of any other objects. interpose Specify that the dynamic loader should modify its symbol search order so that symbols in this shared library interpose all other shared libraries not so marked. unique nounique When generating a shared library or other dynamically loadable ELF object mark it as one that should (by default) only ever be loaded once, and only in the main namespace (when using "dlmopen"). This is primarily used to mark fundamental libraries such as libc, libpthread et al which do not usually function correctly unless they are the sole instances of themselves. This behaviour can be overridden by the "dlmopen" caller and does not apply to certain loading mechanisms (such as audit libraries). lam-u48 Generate GNU_PROPERTY_X86_FEATURE_1_LAM_U48 in .note.gnu.property section to indicate compatibility with Intel LAM_U48. Supported for Linux/x86_64. lam-u57 Generate GNU_PROPERTY_X86_FEATURE_1_LAM_U57 in .note.gnu.property section to indicate compatibility with Intel LAM_U57. Supported for Linux/x86_64. lam-u48-report=none lam-u48-report=warning lam-u48-report=error Specify how to report the missing GNU_PROPERTY_X86_FEATURE_1_LAM_U48 property in input .note.gnu.property section. lam-u48-report=none, which is the default, will make the linker not report missing properties in input files. lam-u48-report=warning will make the linker issue a warning for missing properties in input files. lam-u48-report=error will make the linker issue an error for missing properties in input files. Supported for Linux/x86_64. lam-u57-report=none lam-u57-report=warning lam-u57-report=error Specify how to report the missing GNU_PROPERTY_X86_FEATURE_1_LAM_U57 property in input .note.gnu.property section. lam-u57-report=none, which is the default, will make the linker not report missing properties in input files. lam-u57-report=warning will make the linker issue a warning for missing properties in input files. 
lam-u57-report=error will make the linker issue an error for missing properties in input files. Supported for Linux/x86_64. lam-report=none lam-report=warning lam-report=error Specify how to report the missing GNU_PROPERTY_X86_FEATURE_1_LAM_U48 and GNU_PROPERTY_X86_FEATURE_1_LAM_U57 properties in input .note.gnu.property section. lam-report=none, which is the default, will make the linker not report missing properties in input files. lam-report=warning will make the linker issue a warning for missing properties in input files. lam-report=error will make the linker issue an error for missing properties in input files. Supported for Linux/x86_64. lazy When generating an executable or shared library, mark it to tell the dynamic linker to defer function call resolution to the point when the function is called (lazy binding), rather than at load time. Lazy binding is the default. loadfltr Specify that the object's filters be processed immediately at runtime. max-page-size=value Set the maximum memory page size supported to value. muldefs Allow multiple definitions. nocopyreloc Disable linker generated .dynbss variables used in place of variables defined in shared libraries. May result in dynamic text relocations. nodefaultlib Specify that the dynamic loader search for dependencies of this object should ignore any default library search paths. nodelete Specify that the object shouldn't be unloaded at runtime. nodlopen Specify that the object is not available to "dlopen". nodump Specify that the object can not be dumped by "dldump". noexecstack Marks the object as not requiring executable stack. noextern-protected-data Don't treat protected data symbols as external when building a shared library. This option overrides the linker backend default. It can be used to work around incorrect relocations against protected data symbols generated by the compiler. Updates on protected data symbols by another module aren't visible to the resulting shared library. Supported for i386 and x86-64. noreloc-overflow Disable the relocation overflow check. This can be used if there will be no dynamic relocation overflow at run-time. Supported for x86_64. now When generating an executable or shared library, mark it to tell the dynamic linker to resolve all symbols when the program is started, or when the shared library is loaded by dlopen, instead of deferring function call resolution to the point when the function is first called. origin Specify that the object requires $ORIGIN handling in paths. pack-relative-relocs nopack-relative-relocs Generate compact relative relocation in position-independent executable and shared library. It adds "DT_RELR", "DT_RELRSZ" and "DT_RELRENT" entries to the dynamic section. It is ignored when building position-dependent executable and relocatable output. nopack-relative-relocs is the default, which disables compact relative relocation. When linked against the GNU C Library, a GLIBC_ABI_DT_RELR symbol version dependency on the shared C Library is added to the output. Supported for i386 and x86-64. relro norelro Create an ELF "PT_GNU_RELRO" segment header in the object. This specifies a memory segment that should be made read-only after relocation, if supported. Specifying common-page-size smaller than the system page size will render this protection ineffective. Don't create an ELF "PT_GNU_RELRO" segment if norelro. report-relative-reloc Report dynamic relative relocations generated by the linker. Supported for Linux/i386 and Linux/x86_64.
separate-code noseparate-code Create separate code "PT_LOAD" segment header in the object. This specifies a memory segment that should contain only instructions and must be in wholly disjoint pages from any other data. Don't create separate code "PT_LOAD" segment if noseparate-code is used. shstk Generate GNU_PROPERTY_X86_FEATURE_1_SHSTK in .note.gnu.property section to indicate compatibility with Intel Shadow Stack. Supported for Linux/i386 and Linux/x86_64. stack-size=value Specify a stack size for an ELF "PT_GNU_STACK" segment. Specifying zero will override any default non-zero sized "PT_GNU_STACK" segment creation. start-stop-gc nostart-stop-gc When --gc-sections is in effect, a reference from a retained section to "__start_SECNAME" or "__stop_SECNAME" causes all input sections named "SECNAME" to also be retained, if "SECNAME" is representable as a C identifier and either "__start_SECNAME" or "__stop_SECNAME" is synthesized by the linker. -z start-stop-gc disables this effect, allowing sections to be garbage collected as if the special synthesized symbols were not defined. -z start-stop-gc has no effect on a definition of "__start_SECNAME" or "__stop_SECNAME" in an object file or linker script. Such a definition will prevent the linker providing a synthesized "__start_SECNAME" or "__stop_SECNAME" respectively, and therefore the special treatment by garbage collection for those references. start-stop-visibility=value Specify the ELF symbol visibility for synthesized "__start_SECNAME" and "__stop_SECNAME" symbols. value must be exactly default, internal, hidden, or protected. If no -z start-stop-visibility option is given, protected is used for compatibility with historical practice. However, it's highly recommended to use -z start-stop-visibility=hidden in new programs and shared libraries so that these symbols are not exported between shared objects, which is not usually what's intended. text notext textoff Report an error if DT_TEXTREL is set, i.e., if the position-independent or shared object has dynamic relocations in read-only sections. Don't report an error if notext or textoff. undefs Do not report unresolved symbol references from regular object files, either when creating an executable, or when creating a shared library. This option is the inverse of -z defs. unique-symbol nounique-symbol Avoid duplicated local symbol names in the symbol string table. Append ".number" to duplicated local symbol names if unique-symbol is used. nounique-symbol is the default. x86-64-baseline x86-64-v2 x86-64-v3 x86-64-v4 Specify the x86-64 ISA level needed in .note.gnu.property section. x86-64-baseline generates "GNU_PROPERTY_X86_ISA_1_BASELINE". x86-64-v2 generates "GNU_PROPERTY_X86_ISA_1_V2". x86-64-v3 generates "GNU_PROPERTY_X86_ISA_1_V3". x86-64-v4 generates "GNU_PROPERTY_X86_ISA_1_V4". Supported for Linux/i386 and Linux/x86_64. Other keywords are ignored for Solaris compatibility.
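The start-stop keywords above revolve around the "__start_SECNAME" and "__stop_SECNAME" symbols that the linker synthesizes for sections whose names are valid C identifiers. A minimal C sketch (the section name "mytab" and all identifiers are hypothetical):

        #include <stdio.h>

        /* Synthesized by the linker because "mytab" is a valid
           C identifier.  */
        extern int __start_mytab[];
        extern int __stop_mytab[];

        /* Put two entries into the custom section; "used" stops the
           compiler from discarding them.  */
        static int a __attribute__((section("mytab"), used)) = 1;
        static int b __attribute__((section("mytab"), used)) = 2;

        int
        main (void)
        {
          for (int *p = __start_mytab; p < __stop_mytab; p++)
            printf ("%d\n", *p);
          return 0;
        }

Under --gc-sections, the reference to "__start_mytab" is what keeps the "mytab" input sections alive, unless -z start-stop-gc is given.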
-( archives -) --start-group archives --end-group The archives should be a list of archive files. They may be either explicit file names, or -l options. The specified archives are searched repeatedly until no new undefined references are created. Normally, an archive is searched only once in the order that it is specified on the command line. If a symbol in that archive is needed to resolve an undefined symbol referred to by an object in an archive that appears later on the command line, the linker would not be able to resolve that reference. By grouping the archives, they will all be searched repeatedly until all possible references are resolved. Using this option has a significant performance cost. It is best to use it only when there are unavoidable circular references between two or more archives. --accept-unknown-input-arch --no-accept-unknown-input-arch Tells the linker to accept input files whose architecture cannot be recognised. The assumption is that the user knows what they are doing and deliberately wants to link in these unknown input files. This was the default behaviour of the linker before release 2.14. The default behaviour from release 2.14 onwards is to reject such input files, and so the --accept-unknown-input-arch option has been added to restore the old behaviour. --as-needed --no-as-needed This option affects ELF DT_NEEDED tags for dynamic libraries mentioned on the command line after the --as-needed option. Normally the linker will add a DT_NEEDED tag for each dynamic library mentioned on the command line, regardless of whether the library is actually needed or not. --as-needed causes a DT_NEEDED tag to only be emitted for a library that at that point in the link satisfies a non-weak undefined symbol reference from a regular object file or, if the library is not found in the DT_NEEDED lists of other needed libraries, a non-weak undefined symbol reference from another needed dynamic library. Object files or libraries appearing on the command line after the library in question do not affect whether the library is seen as needed. This is similar to the rules for extraction of object files from archives. --no-as-needed restores the default behaviour. Note: On Linux based systems the --as-needed option also has an effect on the behaviour of the --rpath and --rpath-link options. See the description of --rpath-link for more details. --add-needed --no-add-needed These two options have been deprecated because of the similarity of their names to the --as-needed and --no-as-needed options. They have been replaced by --copy-dt-needed-entries and --no-copy-dt-needed-entries. -assert keyword This option is ignored for SunOS compatibility. -Bdynamic -dy -call_shared Link against dynamic libraries. This is only meaningful on platforms for which shared libraries are supported. This option is normally the default on such platforms. The different variants of this option are for compatibility with various systems. You may use this option multiple times on the command line: it affects library searching for -l options which follow it. -Bgroup Set the "DF_1_GROUP" flag in the "DT_FLAGS_1" entry in the dynamic section. This causes the runtime linker to perform lookups in this object and its dependencies only inside the group. --unresolved-symbols=report-all is implied. This option is only meaningful on ELF platforms which support shared libraries. -Bstatic -dn -non_shared -static Do not link against shared libraries. This is only meaningful on platforms for which shared libraries are supported. The different variants of this option are for compatibility with various systems. You may use this option multiple times on the command line: it affects library searching for -l options which follow it. This option also implies --unresolved-symbols=report-all. This option can be used with -shared. Doing so means that a shared library is being created but that all of the library's external references must be resolved by pulling in entries from static libraries.
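Because -Bstatic and -Bdynamic are positional, they can be mixed to link some libraries statically and others dynamically; a sketch with hypothetical library names, driven through gcc:

        gcc main.o -Wl,-Bstatic -lfoo -Wl,-Bdynamic -lbar -o prog

Here libfoo is taken only from a static archive while libbar may still be linked as a shared library; ending the command in the -Bstatic state would also affect the system libraries that gcc appends to the link line.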
-Bsymbolic When creating a shared library, bind references to global symbols to the definition within the shared library, if any. Normally, it is possible for a program linked against a shared library to override the definition within the shared library. This option is only meaningful on ELF platforms which support shared libraries. -Bsymbolic-functions When creating a shared library, bind references to global function symbols to the definition within the shared library, if any. This option is only meaningful on ELF platforms which support shared libraries. -Bno-symbolic This option can cancel previously specified -Bsymbolic and -Bsymbolic-functions. --dynamic-list=dynamic-list-file Specify the name of a dynamic list file to the linker. This is typically used when creating shared libraries to specify a list of global symbols whose references shouldn't be bound to the definition within the shared library, or creating dynamically linked executables to specify a list of symbols which should be added to the symbol table in the executable. This option is only meaningful on ELF platforms which support shared libraries. The format of the dynamic list is the same as the version node without scope and node name. See VERSION for more information. --dynamic-list-data Include all global data symbols in the dynamic list. --dynamic-list-cpp-new Provide the builtin dynamic list for C++ operator new and delete. It is mainly useful for building shared libstdc++. --dynamic-list-cpp-typeinfo Provide the builtin dynamic list for C++ runtime type identification. --check-sections --no-check-sections Asks the linker not to check section addresses after they have been assigned to see if there are any overlaps. Normally the linker will perform this check, and if it finds any overlaps it will produce suitable error messages. The linker does know about, and does make allowances for, sections in overlays. The default behaviour can be restored by using the command-line switch --check-sections. Section overlap is not usually checked for relocatable links. You can force checking in that case by using the --check-sections option. --copy-dt-needed-entries --no-copy-dt-needed-entries This option affects the treatment of dynamic libraries referred to by DT_NEEDED tags inside ELF dynamic libraries mentioned on the command line. Normally the linker won't add a DT_NEEDED tag to the output binary for each library mentioned in a DT_NEEDED tag in an input dynamic library. With --copy-dt-needed-entries specified on the command line however any dynamic libraries that follow it will have their DT_NEEDED entries added. The default behaviour can be restored with --no-copy-dt-needed-entries. This option also has an effect on the resolution of symbols in dynamic libraries. With --copy-dt-needed-entries dynamic libraries mentioned on the command line will be recursively searched, following their DT_NEEDED tags to other libraries, in order to resolve symbols required by the output binary. With the default setting however the searching of dynamic libraries that follow it will stop with the dynamic library itself. No DT_NEEDED links will be traversed to resolve symbols. --cref Output a cross reference table. If a linker map file is being generated, the cross reference table is printed to the map file. Otherwise, it is printed on the standard output. The format of the table is intentionally simple, so that it may be easily processed by a script if necessary. The symbols are printed out, sorted by name.
For each symbol, a list of file names is given. If the symbol is defined, the first file listed is the location of the definition. If the symbol is defined as a common value then any files where this happens appear next. Finally any files that reference the symbol are listed. --ctf-variables --no-ctf-variables The CTF debuginfo format supports a section which encodes the names and types of variables found in the program which do not appear in any symbol table. These variables clearly cannot be looked up by address by conventional debuggers, so the space used for their types and names is usually wasted: the types are usually small but the names are often not. --ctf-variables causes the generation of such a section. The default behaviour can be restored with --no-ctf-variables. --ctf-share-types=method Adjust the method used to share types between translation units in CTF. share-unconflicted Put all types that do not have ambiguous definitions into the shared dictionary, where debuggers can easily access them, even if they only occur in one translation unit. This is the default. share-duplicated Put only types that occur in multiple translation units into the shared dictionary: types with only one definition go into per-translation-unit dictionaries. Types with ambiguous definitions in multiple translation units always go into per-translation-unit dictionaries. This tends to make the CTF larger, but may reduce the amount of CTF in the shared dictionary. For very large projects this may speed up opening the CTF and save memory in the CTF consumer at runtime. --no-define-common This option inhibits the assignment of addresses to common symbols. The script command "INHIBIT_COMMON_ALLOCATION" has the same effect. The --no-define-common option allows decoupling the decision to assign addresses to Common symbols from the choice of the output file type; otherwise a non-Relocatable output type forces assigning addresses to Common symbols. Using --no-define-common allows Common symbols that are referenced from a shared library to be assigned addresses only in the main program. This eliminates the unused duplicate space in the shared library, and also prevents any possible confusion over resolving to the wrong duplicate when there are many dynamic modules with specialized search paths for runtime symbol resolution. --force-group-allocation This option causes the linker to place section group members like normal input sections, and to delete the section groups. This is the default behaviour for a final link but this option can be used to change the behaviour of a relocatable link (-r). The script command "FORCE_GROUP_ALLOCATION" has the same effect. --defsym=symbol=expression Create a global symbol in the output file, containing the absolute address given by expression. You may use this option as many times as necessary to define multiple symbols in the command line. A limited form of arithmetic is supported for the expression in this context: you may give a hexadecimal constant or the name of an existing symbol, or use "+" and "-" to add or subtract hexadecimal constants or symbols. If you need more elaborate expressions, consider using the linker command language from a script. Note: there should be no white space between symbol, the equals sign ("="), and expression. 
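For example, the following defines one absolute symbol and a second symbol derived from it (symbol names and addresses are hypothetical):

        ld -o prog prog.o --defsym=_begin=0x2000 --defsym=_end=_begin+0x400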
The linker processes --defsym arguments and -T arguments in order: placing --defsym before -T defines the symbol before the linker script from -T is processed, while placing --defsym after -T defines the symbol after the linker script has been processed. This difference has consequences for expressions within the linker script that use the --defsym symbols; which order is correct depends on what you are trying to achieve. --demangle[=style] --no-demangle These options control whether to demangle symbol names in error messages and other output. When the linker is told to demangle, it tries to present symbol names in a readable fashion: it strips leading underscores if they are used by the object file format, and converts C++ mangled symbol names into user readable names. Different compilers have different mangling styles. The optional demangling style argument can be used to choose an appropriate demangling style for your compiler. The linker will demangle by default unless the environment variable COLLECT_NO_DEMANGLE is set. These options may be used to override the default. -Ifile --dynamic-linker=file Set the name of the dynamic linker. This is only meaningful when generating dynamically linked ELF executables. The default dynamic linker is normally correct; don't use this unless you know what you are doing. --no-dynamic-linker When producing an executable file, omit the request for a dynamic linker to be used at load-time. This is only meaningful for ELF executables that contain dynamic relocations, and usually requires entry point code that is capable of processing these relocations. --embedded-relocs This option is similar to the --emit-relocs option except that the relocs are stored in a target-specific section. This option is only supported by the BFIN, CR16 and M68K targets. --disable-multiple-abs-defs Do not allow multiple definitions with symbols included in filename invoked by -R or --just-symbols. --fatal-warnings --no-fatal-warnings Treat all warnings as errors. The default behaviour can be restored with the option --no-fatal-warnings. -w --no-warnings Do not display any warning or error messages. This overrides --fatal-warnings if it has been enabled. This option can be used when it is known that the output binary will not work, but there is still a need to create it. --force-exe-suffix Make sure that an output file has a .exe suffix. If a successfully built fully linked output file does not have a ".exe" or ".dll" suffix, this option forces the linker to copy the output file to one of the same name with a ".exe" suffix. This option is useful when using unmodified Unix makefiles on a Microsoft Windows host, since some versions of Windows won't run an image unless it ends in a ".exe" suffix. --gc-sections --no-gc-sections Enable garbage collection of unused input sections. It is ignored on targets that do not support this option. The default behaviour (of not performing this garbage collection) can be restored by specifying --no-gc-sections on the command line. Note that garbage collection for COFF and PE format targets is supported, but the implementation is currently considered to be experimental. --gc-sections decides which input sections are used by examining symbols and relocations. The section containing the entry symbol and all sections containing symbols undefined on the command-line will be kept, as will sections containing symbols referenced by dynamic objects. Note that when building shared libraries, the linker must assume that any visible symbol is referenced. Once this initial set of sections has been determined, the linker recursively marks as used any section referenced by their relocations. See --entry, --undefined, and --gc-keep-exported. This option can be set when doing a partial link (enabled with option -r). In this case the root of symbols kept must be explicitly specified either by one of the options --entry, --undefined, or --gc-keep-exported or by an "ENTRY" command in the linker script. As a GNU extension, ELF input sections marked with the "SHF_GNU_RETAIN" flag will not be garbage collected.
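Garbage collection is most effective when the compiler has placed each function and datum in its own section; a typical pairing, with hypothetical file names:

        gcc -ffunction-sections -fdata-sections -c main.c util.c
        gcc -Wl,--gc-sections main.o util.o -o prog

Unreferenced function and data sections from the object files are then omitted from prog.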
--print-gc-sections --no-print-gc-sections List all sections removed by garbage collection. The listing is printed on stderr. This option is only effective if garbage collection has been enabled via the --gc-sections option. The default behaviour (of not listing the sections that are removed) can be restored by specifying --no-print-gc-sections on the command line. --gc-keep-exported When --gc-sections is enabled, this option prevents garbage collection of unused input sections that contain global symbols having default or protected visibility. This option is intended to be used for executables where unreferenced sections would otherwise be garbage collected regardless of the external visibility of contained symbols. Note that this option has no effect when linking shared objects since it is already the default behaviour. This option is only supported for ELF format targets. --print-output-format Print the name of the default output format (perhaps influenced by other command-line options). This is the string that would appear in an "OUTPUT_FORMAT" linker script command. --print-memory-usage Print the used size, total size and percentage used of memory regions created with the MEMORY command. This is useful on embedded targets to have a quick view of the amount of free memory. The format of the output has one headline and one line per region. It is both human readable and easily parsable by tools. Here is an example of an output:

        Memory region         Used Size  Region Size  %age Used
                 ROM:            256 KB         1 MB     25.00%
                 RAM:              32 B         2 GB      0.00%

--help Print a summary of the command-line options on the standard output and exit. --target-help Print a summary of all target-specific options on the standard output and exit. -Map=mapfile Print a link map to the file mapfile. See the description of the -M option, above. If mapfile is just the character "-" then the map will be written to stdout. Specifying a directory as mapfile causes the linker map to be written as a file inside the directory. Normally the name of the file inside the directory is computed as the basename of the output file with ".map" appended. If however the special character "%" is used then this will be replaced by the full path of the output file. Additionally if there are any characters after the % symbol then ".map" will no longer be appended.
        -o foo.exe -Map=bar                  [Creates ./bar]
        -o ../dir/foo.exe -Map=bar           [Creates ./bar]
        -o foo.exe -Map=../dir               [Creates ../dir/foo.exe.map]
        -o ../dir2/foo.exe -Map=../dir       [Creates ../dir/foo.exe.map]
        -o foo.exe -Map=%                    [Creates ./foo.exe.map]
        -o ../dir/foo.exe -Map=%             [Creates ../dir/foo.exe.map]
        -o foo.exe -Map=%.bar                [Creates ./foo.exe.bar]
        -o ../dir/foo.exe -Map=%.bar         [Creates ../dir/foo.exe.bar]
        -o ../dir2/foo.exe -Map=../dir/%     [Creates ../dir/../dir2/foo.exe.map]
        -o ../dir2/foo.exe -Map=../dir/%.bar [Creates ../dir/../dir2/foo.exe.bar]

It is an error to specify more than one "%" character. If the map file already exists then it will be overwritten by this operation. --no-keep-memory ld normally optimizes for speed over memory usage by caching the symbol tables of input files in memory. This option tells ld to instead optimize for memory usage, by rereading the symbol tables as necessary. This may be required if ld runs out of memory space while linking a large executable. --no-undefined -z defs Report unresolved symbol references from regular object files. This is done even if the linker is creating a non-symbolic shared library. The switch --[no-]allow-shlib-undefined controls the behaviour for reporting unresolved references found in shared libraries being linked in. The effects of this option can be reverted by using "-z undefs". (See the example after this group of options.) --allow-multiple-definition -z muldefs Normally when a symbol is defined multiple times, the linker will report a fatal error. These options allow multiple definitions and the first definition will be used. --allow-shlib-undefined --no-allow-shlib-undefined Allows or disallows undefined symbols in shared libraries. This switch is similar to --no-undefined except that it determines the behaviour when the undefined symbols are in a shared library rather than a regular object file. It does not affect how undefined symbols in regular object files are handled. The default behaviour is to report errors for any undefined symbols referenced in shared libraries if the linker is being used to create an executable, but to allow them if the linker is being used to create a shared library. The reasons for allowing undefined symbol references in shared libraries specified at link time are that: • A shared library specified at link time may not be the same as the one that is available at load time, so the symbol might actually be resolvable at load time. • There are some operating systems, e.g. BeOS and HPPA, where undefined symbols in shared libraries are normal. The BeOS kernel for example patches shared libraries at load time to select whichever function is most appropriate for the current architecture. This is used, for example, to dynamically select an appropriate memset function. --error-handling-script=scriptname If this option is provided then the linker will invoke scriptname whenever an error is encountered. Currently however only two kinds of error are supported: missing symbols and missing libraries. Two arguments will be passed to the script: the keyword "undefined-symbol" or "missing-lib" and the name of the undefined symbol or missing library. The intention is that the script will provide suggestions to the user as to where the symbol or library might be found. After the script has finished then the normal linker error message will be displayed. The availability of this option is controlled by a configure-time switch, so it may not be present in specific implementations.
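As an illustration of --no-undefined/-z defs (file names hypothetical), requiring a shared library to be fully resolved at link time:

        gcc -shared -fPIC -o libfoo.so foo.o -Wl,--no-undefined

If foo.o references a symbol that no other linked-in object or library defines, the link now fails instead of deferring the error to load time.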
--no-undefined-version Normally when a symbol has an undefined version, the linker will ignore it. This option disallows symbols with an undefined version; a fatal error will be issued instead. --default-symver Create and use a default symbol version (the soname) for unversioned exported symbols. --default-imported-symver Create and use a default symbol version (the soname) for unversioned imported symbols. --no-warn-mismatch Normally ld will give an error if you try to link together input files that are mismatched for some reason, perhaps because they have been compiled for different processors or for different endiannesses. This option tells ld that it should silently permit such possible errors. This option should only be used with care, in cases when you have taken some special action that ensures that the linker errors are inappropriate. --no-warn-search-mismatch Normally ld will give a warning if it finds an incompatible library during a library search. This option silences the warning. --no-whole-archive Turn off the effect of the --whole-archive option for subsequent archive files. --noinhibit-exec Retain the executable output file whenever it is still usable. Normally, the linker will not produce an output file if it encounters errors during the link process; it exits without writing an output file when it issues any error whatsoever. -nostdlib Only search library directories explicitly specified on the command line. Library directories specified in linker scripts (including linker scripts specified on the command line) are ignored. --oformat=output-format ld may be configured to support more than one kind of object file. If your ld is configured this way, you can use the --oformat option to specify the binary format for the output object file. Even when ld is configured to support alternative object formats, you don't usually need to specify this, as ld should be configured to produce as a default output format the most usual format on each machine. output-format is a text string, the name of a particular format supported by the BFD libraries. (You can list the available binary formats with objdump -i.) The script command "OUTPUT_FORMAT" can also specify the output format, but this option overrides it. --out-implib file Create an import library in file corresponding to the executable the linker is generating (e.g. a DLL or ELF program). This import library (which should be called "*.dll.a" or "*.a" for DLLs) may be used to link clients against the generated executable; this behaviour makes it possible to skip a separate import library creation step (e.g. "dlltool" for DLLs). This option is only available for the i386 PE and ELF targeted ports of the linker. -pie --pic-executable Create a position independent executable. This is currently only supported on ELF platforms. Position independent executables are similar to shared libraries in that they are relocated by the dynamic linker to the virtual address the OS chooses for them (which can vary between invocations). Like normal dynamically linked executables they can be executed and symbols defined in the executable cannot be overridden by shared libraries. -no-pie Create a position dependent executable. This is the default. -qmagic This option is ignored for Linux compatibility. -Qy This option is ignored for SVR4 compatibility. --relax --no-relax An option with machine dependent effects. This option is only supported on a few targets.
On some platforms the --relax option performs target specific, global optimizations that become possible when the linker resolves addressing in the program, such as relaxing address modes, synthesizing new instructions, selecting shorter versions of current instructions, and combining constant values. On some platforms these link time global optimizations may make symbolic debugging of the resulting executable impossible. This is known to be the case for the Matsushita MN10200 and MN10300 family of processors. On platforms where the feature is supported, the option --no-relax will disable it. On platforms where the feature is not supported, both --relax and --no-relax are accepted, but ignored. --retain-symbols-file=filename Retain only the symbols listed in the file filename, discarding all others. filename is simply a flat file, with one symbol name per line. This option is especially useful in environments (such as VxWorks) where a large global symbol table is accumulated gradually, to conserve run-time memory. --retain-symbols-file does not discard undefined symbols, or symbols needed for relocations. You may only specify --retain-symbols-file once in the command line. It overrides -s and -S. -rpath=dir Add a directory to the runtime library search path. This is used when linking an ELF executable with shared objects. All -rpath arguments are concatenated and passed to the runtime linker, which uses them to locate shared objects at runtime. The -rpath option is also used when locating shared objects which are needed by shared objects explicitly included in the link; see the description of the -rpath-link option. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot option. If -rpath is not used when linking an ELF executable, the contents of the environment variable "LD_RUN_PATH" will be used if it is defined. The -rpath option may also be used on SunOS. By default, on SunOS, the linker will form a runtime search path out of all the -L options it is given. If a -rpath option is used, the runtime search path will be formed exclusively using the -rpath options, ignoring the -L options. This can be useful when using gcc, which adds many -L options which may be on NFS mounted file systems. For compatibility with other ELF linkers, if the -R option is followed by a directory name, rather than a file name, it is treated as the -rpath option. -rpath-link=dir When using ELF or SunOS, one shared library may require another. This happens when an "ld -shared" link includes a shared library as one of the input files. When the linker encounters such a dependency when doing a non-shared, non-relocatable link, it will automatically try to locate the required shared library and include it in the link, if it is not included explicitly. In such a case, the -rpath-link option specifies the first set of directories to search. The -rpath-link option may specify a sequence of directory names either by specifying a list of names separated by colons, or by appearing multiple times. The tokens $ORIGIN and $LIB can appear in these search directories. They will be replaced by the full path to the directory containing the program or shared object in the case of $ORIGIN and either lib - for 32-bit binaries - or lib64 - for 64-bit binaries - in the case of $LIB. The alternative forms of these tokens - ${ORIGIN} and ${LIB} - can also be used. The token $PLATFORM is not supported. This option should be used with caution as it overrides the search path that may have been hard compiled into a shared library. In such a case it is possible to unintentionally use a different search path than the runtime linker would. The linker uses the following search paths to locate required shared libraries:
1. Any directories specified by -rpath-link options.
2. Any directories specified by -rpath options. The difference between -rpath and -rpath-link is that directories specified by -rpath options are included in the executable and used at runtime, whereas the -rpath-link option is only effective at link time. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot option.
3. On an ELF system, for native linkers, if the -rpath and -rpath-link options were not used, search the contents of the environment variable "LD_RUN_PATH".
4. On SunOS, if the -rpath option was not used, search any directories specified using -L options.
5. For a native linker, search the contents of the environment variable "LD_LIBRARY_PATH".
6. For a native ELF linker, the directories in "DT_RUNPATH" or "DT_RPATH" of a shared library are searched for shared libraries needed by it. The "DT_RPATH" entries are ignored if "DT_RUNPATH" entries exist.
7. For a linker for a Linux system, if the file /etc/ld.so.conf exists, the list of directories found in that file. Note: the path to this file is prefixed with the "sysroot" value, if that is defined, and then any "prefix" string if the linker was configured with the --prefix=<path> option.
8. For a native linker on a FreeBSD system, any directories specified by the "_PATH_ELF_HINTS" macro defined in the elf-hints.h header file.
9. Any directories specified by a "SEARCH_DIR" command in a linker script given on the command line, including scripts specified by -T (but not -dT).
10. The default directories, normally /lib and /usr/lib.
11. Any directories specified by a plugin LDPT_SET_EXTRA_LIBRARY_PATH.
12. Any directories specified by a "SEARCH_DIR" command in a default linker script.
Note however that on Linux based systems there is an additional caveat: if the --as-needed option is active, and a shared library is located which would normally satisfy the search, but that library does not have a DT_NEEDED tag for libc.so, and a shared library later in the set of search directories also satisfies the search and does have a DT_NEEDED tag for libc.so, then the second library will be selected instead of the first. If the required shared library is not found, the linker will issue a warning and continue with the link.
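A common use of -rpath is to embed a relocatable run-time search path next to the installed executable; a sketch with hypothetical paths (the quotes keep the shell from expanding $ORIGIN):

        gcc main.o -L../lib -lfoo -Wl,-rpath,'$ORIGIN/../lib' -o prog

At run time the dynamic linker substitutes the directory containing prog for $ORIGIN.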
-shared -Bshareable Create a shared library. This is currently only supported on ELF, XCOFF and SunOS platforms. On SunOS, the linker will automatically create a shared library if the -e option is not used and there are undefined symbols in the link. --sort-common --sort-common=ascending --sort-common=descending This option tells ld to sort the common symbols by alignment in ascending or descending order when it places them in the appropriate output sections. The symbol alignments considered are sixteen-byte or larger, eight-byte, four-byte, two-byte, and one-byte. This is to prevent gaps between symbols due to alignment constraints. If no sorting order is specified, then descending order is assumed. --sort-section=name This option will apply "SORT_BY_NAME" to all wildcard section patterns in the linker script.
--sort-section=alignment This option will apply "SORT_BY_ALIGNMENT" to all wildcard section patterns in the linker script. --spare-dynamic-tags=count This option specifies the number of empty slots to leave in the .dynamic section of ELF shared objects. Empty slots may be needed by post processing tools, such as the prelinker. The default is 5. --split-by-file[=size] Similar to --split-by-reloc but creates a new output section for each input file when size is reached. size defaults to a size of 1 if not given. --split-by-reloc[=count] Tries to create extra sections in the output file so that no single output section in the file contains more than count relocations. This is useful when generating huge relocatable files for downloading into certain real time kernels with the COFF object file format, since COFF cannot represent more than 65535 relocations in a single section. Note that this will fail to work with object file formats which do not support arbitrary sections. The linker will not split up individual input sections for redistribution, so if a single input section contains more than count relocations one output section will contain that many relocations. count defaults to a value of 32768. --stats Compute and display statistics about the operation of the linker, such as execution time and memory usage. --sysroot=directory Use directory as the location of the sysroot, overriding the configure-time default. This option is only supported by linkers that were configured using --with-sysroot. --task-link This is used by COFF/PE based targets to create a task-linked object file where all of the global symbols have been converted to statics. --traditional-format For some targets, the output of ld is different in some ways from the output of some existing linker. This switch requests ld to use the traditional format instead. For example, on SunOS, ld combines duplicate entries in the symbol string table. This can reduce the size of an output file with full debugging information by over 30 percent. Unfortunately, the SunOS "dbx" program can not read the resulting program ("gdb" has no trouble). The --traditional-format switch tells ld to not combine duplicate entries. --section-start=sectionname=org Locate a section in the output file at the absolute address given by org. You may use this option as many times as necessary to locate multiple sections in the command line. org must be a single hexadecimal integer; for compatibility with other linkers, you may omit the leading 0x usually associated with hexadecimal values. Note: there should be no white space between sectionname, the equals sign ("="), and org. -Tbss=org -Tdata=org -Ttext=org Same as --section-start, with ".bss", ".data" or ".text" as the sectionname. -Ttext-segment=org When creating an ELF executable, it will set the address of the first byte of the text segment. -Trodata-segment=org When creating an ELF executable or shared object for a target where the read-only data is in its own segment separate from the executable text, it will set the address of the first byte of the read-only data segment. -Tldata-segment=org When creating an ELF executable or shared object for x86-64 medium memory model, it will set the address of the first byte of the ldata segment. --unresolved-symbols=method Determine how to handle unresolved symbols. There are four possible values for method: ignore-all Do not report any unresolved symbols. report-all Report all unresolved symbols. This is the default.
ignore-in-object-files Report unresolved symbols that are contained in shared libraries, but ignore them if they come from regular object files. ignore-in-shared-libs Report unresolved symbols that come from regular object files, but ignore them if they come from shared libraries. This can be useful when creating a dynamic binary and it is known that all the shared libraries that it should be referencing are included on the linker's command line. The behaviour for shared libraries on their own can also be controlled by the --[no-]allow-shlib-undefined option. Normally the linker will generate an error message for each reported unresolved symbol but the option --warn-unresolved-symbols can change this to a warning. --dll-verbose --verbose[=NUMBER] Display the version number for ld and list the linker emulations supported. Display which input files can and cannot be opened. Display the linker script being used by the linker. If the optional NUMBER argument > 1, plugin symbol status will also be displayed. --version-script=version-scriptfile Specify the name of a version script to the linker. This is typically used when creating shared libraries to specify additional information about the version hierarchy for the library being created. This option is only fully supported on ELF platforms which support shared libraries; see VERSION. It is partially supported on PE platforms, which can use version scripts to filter symbol visibility in auto-export mode: any symbols marked local in the version script will not be exported. --warn-common Warn when a common symbol is combined with another common symbol or with a symbol definition. Unix linkers allow this somewhat sloppy practice, but linkers on some other operating systems do not. This option allows you to find potential problems from combining global symbols. Unfortunately, some C libraries use this practice, so you may get some warnings about symbols in the libraries as well as in your programs. There are three kinds of global symbols, illustrated here by C examples: int i = 1; A definition, which goes in the initialized data section of the output file. extern int i; An undefined reference, which does not allocate space. There must be either a definition or a common symbol for the variable somewhere. int i; A common symbol. If there are only (one or more) common symbols for a variable, it goes in the uninitialized data area of the output file. The linker merges multiple common symbols for the same variable into a single symbol. If they are of different sizes, it picks the largest size. The linker turns a common symbol into a declaration, if there is a definition of the same variable. The --warn-common option can produce five kinds of warnings. Each warning consists of a pair of lines: the first describes the symbol just encountered, and the second describes the previous symbol encountered with the same name. One or both of the two symbols will be a common symbol. 1. Turning a common symbol into a reference, because there is already a definition for the symbol. <file>(<section>): warning: common of `<symbol>' overridden by definition <file>(<section>): warning: defined here 2. Turning a common symbol into a reference, because a later definition for the symbol is encountered. This is the same as the previous case, except that the symbols are encountered in a different order. <file>(<section>): warning: definition of `<symbol>' overriding common <file>(<section>): warning: common is here 3. 
Merging a common symbol with a previous same-sized common symbol. <file>(<section>): warning: multiple common of `<symbol>' <file>(<section>): warning: previous common is here 4. Merging a common symbol with a previous larger common symbol. <file>(<section>): warning: common of `<symbol>' overridden by larger common <file>(<section>): warning: larger common is here 5. Merging a common symbol with a previous smaller common symbol. This is the same as the previous case, except that the symbols are encountered in a different order. <file>(<section>): warning: common of `<symbol>' overriding smaller common <file>(<section>): warning: smaller common is here --warn-constructors Warn if any global constructors are used. This is only useful for a few object file formats. For formats like COFF or ELF, the linker can not detect the use of global constructors. --warn-execstack --no-warn-execstack On ELF platforms this option controls how the linker generates warning messages when it creates an output file with an executable stack. By default the linker will not warn if the -z execstack command line option has been used, but this behaviour can be overridden by the --warn-execstack option. On the other hand the linker will normally warn if the stack is made executable because one or more of the input files need an executable stack and neither of the -z execstack or -z noexecstack command line options have been specified. This warning can be disabled via the --no-warn-execstack option. Note: ELF format input files specify that they need an executable stack by having a .note.GNU-stack section with the executable bit set in its section flags. They can specify that they do not need an executable stack by having that section, but without the executable flag bit set. If an input file does not have a .note.GNU-stack section present then the default behaviour is target specific. For some targets, the absence of such a section implies that an executable stack is required. This is often a problem for hand crafted assembler files; see the example below.
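For that assembler-file case, the usual remedy is to mark the assembly source itself as not needing an executable stack by giving it an empty .note.GNU-stack section (on x86 targets the GAS directive is .section .note.GNU-stack,"",@progbits), or, failing that, to override the marking for the whole link; a sketch with hypothetical file names:

        gcc -c asmpart.s main.c
        gcc main.o asmpart.o -Wl,-z,noexecstack -o prog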
--warn-multiple-gp Warn if multiple global pointer values are required in the output file. This is only meaningful for certain processors, such as the Alpha. Specifically, some processors put large-valued constants in a special section. A special register (the global pointer) points into the middle of this section, so that constants can be loaded efficiently via a base-register relative addressing mode. Since the offset in base-register relative mode is fixed and relatively small (e.g., 16 bits), this limits the maximum size of the constant pool. Thus, in large programs, it is often necessary to use multiple global pointer values in order to be able to address all possible constants. This option causes a warning to be issued whenever this case occurs. --warn-once Only warn once for each undefined symbol, rather than once per module which refers to it. --warn-rwx-segments --no-warn-rwx-segments Warn if the linker creates a loadable, non-zero sized segment that has all three of the read, write and execute permission flags set. Such a segment represents a potential security vulnerability. In addition warnings will be generated if a thread local storage segment is created with the execute permission flag set, regardless of whether or not it has the read and/or write flags set. These warnings are enabled by default. They can be disabled via the --no-warn-rwx-segments option and re-enabled via the --warn-rwx-segments option. --warn-section-align Warn if the address of an output section is changed because of alignment. Typically, the alignment will be set by an input section. The address will only be changed if it is not explicitly specified; that is, if the "SECTIONS" command does not specify a start address for the section. --warn-textrel Warn if the linker adds DT_TEXTREL to a position-independent executable or shared object. --warn-alternate-em Warn if an object has alternate ELF machine code. --warn-unresolved-symbols If the linker is going to report an unresolved symbol (see the option --unresolved-symbols) it will normally generate an error. This option makes it generate a warning instead. --error-unresolved-symbols This restores the linker's default behaviour of generating errors when it is reporting unresolved symbols. --whole-archive For each archive mentioned on the command line after the --whole-archive option, include every object file in the archive in the link, rather than searching the archive for the required object files. This is normally used to turn an archive file into a shared library, forcing every object to be included in the resulting shared library. This option may be used more than once. Two notes when using this option from gcc: First, gcc doesn't know about this option, so you have to use -Wl,-whole-archive. Second, don't forget to use -Wl,-no-whole-archive after your list of archives, because gcc will add its own list of archives to your link and you may not want this flag to affect those as well. --wrap=symbol Use a wrapper function for symbol. Any undefined reference to symbol will be resolved to "__wrap_symbol". Any undefined reference to "__real_symbol" will be resolved to symbol. This can be used to provide a wrapper for a system function. The wrapper function should be called "__wrap_symbol". If it wishes to call the system function, it should call "__real_symbol". Here is a trivial example:

        void *
        __wrap_malloc (size_t c)
        {
          printf ("malloc called with %zu\n", c);
          return __real_malloc (c);
        }

If you link other code with this file using --wrap malloc, then all calls to "malloc" will call the function "__wrap_malloc" instead. The call to "__real_malloc" in "__wrap_malloc" will call the real "malloc" function. (A complete link command is shown after this group of options.) You may wish to provide a "__real_malloc" function as well, so that links without the --wrap option will succeed. If you do this, you should not put the definition of "__real_malloc" in the same file as "__wrap_malloc"; if you do, the assembler may resolve the call before the linker has a chance to wrap it to "malloc". Only undefined references are replaced by the linker. So, translation unit internal references to symbol are not resolved to "__wrap_symbol". In the next example, the call to "f" in "g" is not resolved to "__wrap_f".

        int
        f (void)
        {
          return 123;
        }

        int
        g (void)
        {
          return f();
        }

--eh-frame-hdr --no-eh-frame-hdr Request (--eh-frame-hdr) or suppress (--no-eh-frame-hdr) the creation of ".eh_frame_hdr" section and ELF "PT_GNU_EH_FRAME" segment header. --no-ld-generated-unwind-info Request creation of ".eh_frame" unwind info for linker generated code sections like PLT. This option is on by default if linker generated unwind info is supported. This option also controls the generation of ".sframe" unwind info for linker generated code sections like PLT.
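Returning to the --wrap example: assuming the wrapper above has been compiled into wrap.o (a hypothetical file name), the corresponding gcc-driven link command would be:

        gcc main.o wrap.o -Wl,--wrap,malloc -o prog

Every undefined reference to "malloc" in main.o is then resolved to "__wrap_malloc".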
--enable-new-dtags --disable-new-dtags This linker can create the new dynamic tags in ELF, but the older ELF systems may not understand them. If you specify --enable-new-dtags, the new dynamic tags will be created as needed and older dynamic tags will be omitted. If you specify --disable-new-dtags, no new dynamic tags will be created. By default, the new dynamic tags are not created. Note that those options are only available for ELF systems. --hash-size=number Set the default size of the linker's hash tables to a prime number close to number. Increasing this value can reduce the length of time it takes the linker to perform its tasks, at the expense of increasing the linker's memory requirements. Similarly reducing this value can reduce the memory requirements at the expense of speed. --hash-style=style Set the type of linker's hash table(s). style can be either "sysv" for classic ELF ".hash" section, "gnu" for new style GNU ".gnu.hash" section or "both" for both the classic ELF ".hash" and new style GNU ".gnu.hash" hash tables. The default depends upon how the linker was configured, but for most Linux based systems it will be "both". --compress-debug-sections=none --compress-debug-sections=zlib --compress-debug-sections=zlib-gnu --compress-debug-sections=zlib-gabi --compress-debug-sections=zstd On ELF platforms, these options control how DWARF debug sections are compressed. --compress-debug-sections=none doesn't compress DWARF debug sections. --compress-debug-sections=zlib-gnu compresses DWARF debug sections using zlib and renames them to begin with .zdebug instead of .debug. --compress-debug-sections=zlib-gabi also compresses DWARF debug sections with zlib, but rather than renaming them it sets the SHF_COMPRESSED flag in the sections' headers. The --compress-debug-sections=zlib option is an alias for --compress-debug-sections=zlib-gabi. --compress-debug-sections=zstd compresses DWARF debug sections using zstd. Note that this option overrides any compression in input debug sections, so if a binary is linked with --compress-debug-sections=none for example, then any compressed debug sections in input files will be uncompressed before they are copied into the output binary. The default compression behaviour varies depending upon the target involved and the configure options used to build the toolchain. The default can be determined by examining the output from the linker's --help option. --reduce-memory-overheads This option reduces memory requirements at ld runtime, at the expense of linking speed. This was introduced to select the old O(n^2) algorithm for link map file generation, rather than the new O(n) algorithm which uses about 40% more memory for symbol storage. Another effect of the switch is to set the default hash table size to 1021, which again saves memory at the cost of lengthening the linker's run time. This is not done however if the --hash-size switch has been used. The --reduce-memory-overheads switch may also be used to enable other tradeoffs in future versions of the linker. --max-cache-size=size ld normally caches the relocation information and symbol tables of input files in memory with no limit on the size of this cache. This option sets the maximum cache size to size. --build-id --build-id=style Request the creation of a ".note.gnu.build-id" ELF note section or a ".buildid" COFF section. The contents of the note are unique bits identifying this linked file. style can be "uuid" to use 128 random bits, "sha1" to use a 160-bit SHA1 hash on the normative parts of the output contents, "md5" to use a 128-bit MD5 hash on the normative parts of the output contents, or "0xhexstring" to use a chosen bit string specified as an even number of hexadecimal digits ("-" and ":" characters between digit pairs are ignored). If style is omitted, "sha1" is used. The "md5" and "sha1" styles produce an identifier that is always the same in an identical output file, but will be unique among all nonidentical output files. It is not intended to be compared as a checksum for the file's contents. A linked file may be changed later by other tools, but the build ID bit string identifying the original linked file does not change. Passing "none" for style disables the setting from any "--build-id" options earlier on the command line.
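For example, to stamp an executable with a reproducible identifier and then inspect it (file names hypothetical):

        gcc main.o -o prog -Wl,--build-id=sha1
        readelf -n prog

readelf -n prints the note sections, including the GNU build ID bit string.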
--package-metadata=JSON Request the creation of a ".note.package" ELF note section. The contents of the note are in JSON format, as per the package metadata specification. For more information see: https://systemd.io/ELF_PACKAGE_METADATA/ If the JSON argument is missing/empty then this will disable the creation of the metadata note, if one had been enabled by an earlier occurrence of the --package-metadata option. If the linker has been built with libjansson, then the JSON string will be validated. The i386 PE linker supports the -shared option, which causes the output to be a dynamically linked library (DLL) instead of a normal executable. You should name the output "*.dll" when you use this option. In addition, the linker fully supports the standard "*.def" files, which may be specified on the linker command line like an object file (in fact, it should precede archives it exports symbols from, to ensure that they get linked in, just like a normal object file); see the example after this group of options. In addition to the options common to all targets, the i386 PE linker supports additional command-line options that are specific to the i386 PE target. Options that take values may be separated from their values by either a space or an equals sign. --add-stdcall-alias If given, symbols with a stdcall suffix (@nn) will be exported as-is and also with the suffix stripped. [This option is specific to the i386 PE targeted port of the linker] --base-file file Use file as the name of a file in which to save the base addresses of all the relocations needed for generating DLLs with dlltool. [This is an i386 PE specific option] --dll Create a DLL instead of a regular executable. You may also use -shared or specify a "LIBRARY" in a given ".def" file. [This option is specific to the i386 PE targeted port of the linker] --enable-long-section-names --disable-long-section-names The PE variants of the COFF object format add an extension that permits the use of section names longer than eight characters, the normal limit for COFF. By default, these names are only allowed in object files, as fully-linked executable images do not carry the COFF string table required to support the longer names. As a GNU extension, it is possible to allow their use in executable images as well, or to (probably pointlessly!) disallow it in object files, by using these two options. Executable images generated with these long section names are slightly non-standard, carrying as they do a string table, and may generate confusing output when examined with non-GNU PE-aware tools, such as file viewers and dumpers. However, GDB relies on the use of PE long section names to find Dwarf-2 debug information sections in an executable image at runtime, and so if neither option is specified on the command-line, ld will enable long section names, overriding the default and technically correct behaviour, when it finds the presence of debug information while linking an executable image and not stripping symbols. [This option is valid for all PE targeted ports of the linker]
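Putting the PE pieces together: a DLL and its import library might be produced in one step like this (all file names hypothetical):

        gcc -shared -o mylib.dll mylib.o mylib.def -Wl,--out-implib,libmylib.dll.a

Clients can then be linked against libmylib.dll.a directly, with no separate dlltool step.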
However, GDB relies on the use of PE long section names to find Dwarf-2 debug information sections in an executable image at runtime, and so if neither option is specified on the command-line, ld will enable long section names, overriding the default and technically correct behaviour, when it finds the presence of debug information while linking an executable image and not stripping symbols. [This option is valid for all PE targeted ports of the linker] --enable-stdcall-fixup --disable-stdcall-fixup If the linker finds a symbol that it cannot resolve, it will attempt to do "fuzzy linking" by looking for another defined symbol that differs only in the format of the symbol name (cdecl vs stdcall) and will resolve that symbol by linking to the match. For example, the undefined symbol "_foo" might be linked to the function "_foo@12", or the undefined symbol "_bar@16" might be linked to the function "_bar". When the linker does this, it prints a warning, since it normally should have failed to link, but sometimes import libraries generated from third-party dlls may need this feature to be usable. If you specify --enable-stdcall-fixup, this feature is fully enabled and warnings are not printed. If you specify --disable-stdcall-fixup, this feature is disabled and such mismatches are considered to be errors. [This option is specific to the i386 PE targeted port of the linker] --leading-underscore --no-leading-underscore For most targets the default symbol prefix is an underscore, as defined in the target's description. With these options it is possible to disable or enable the default underscore symbol prefix. --export-all-symbols If given, all global symbols in the objects used to build a DLL will be exported by the DLL. Note that this is the default if there otherwise wouldn't be any exported symbols. When symbols are explicitly exported via DEF files or implicitly exported via function attributes, the default is to not export anything else unless this option is given. Note that the symbols "DllMain@12", "DllEntryPoint@0", "DllMainCRTStartup@12", and "impure_ptr" will not be automatically exported. Also, symbols imported from other DLLs will not be re-exported, nor will symbols specifying the DLL's internal layout such as those beginning with "_head_" or ending with "_iname". In addition, no symbols from "libgcc", "libstdc++", "libmingw32", or "crtX.o" will be exported. Symbols whose names begin with "__rtti_" or "__builtin_" will not be exported, to help with C++ DLLs. Finally, there is an extensive list of cygwin-private symbols that are not exported (obviously, this applies only when building DLLs for cygwin targets). These cygwin-excludes are: "_cygwin_dll_entry@12", "_cygwin_crt0_common@8", "_cygwin_noncygwin_dll_entry@12", "_fmode", "_impure_ptr", "cygwin_attach_dll", "cygwin_premain0", "cygwin_premain1", "cygwin_premain2", "cygwin_premain3", and "environ". [This option is specific to the i386 PE targeted port of the linker] --exclude-symbols symbol,symbol,... Specifies a list of symbols which should not be automatically exported. The symbol names may be delimited by commas or colons. [This option is specific to the i386 PE targeted port of the linker] --exclude-all-symbols Specifies that no symbols should be automatically exported. [This option is specific to the i386 PE targeted port of the linker] --file-alignment Specify the file alignment. Sections in the file will always begin at file offsets which are multiples of this number. This defaults to 512. 
[This option is specific to the i386 PE targeted port of the linker] --heap reserve --heap reserve,commit Specify the number of bytes of memory to reserve (and optionally commit) to be used as heap for this program. The default is 1MB reserved, 4K committed. [This option is specific to the i386 PE targeted port of the linker] --image-base value Use value as the base address of your program or dll. This is the lowest memory location that will be used when your program or dll is loaded. To reduce the need to relocate and improve performance of your dlls, each should have a unique base address and not overlap any other dlls. The default is 0x400000 for executables, and 0x10000000 for dlls. [This option is specific to the i386 PE targeted port of the linker] --kill-at If given, the stdcall suffixes (@nn) will be stripped from symbols before they are exported. [This option is specific to the i386 PE targeted port of the linker] --large-address-aware If given, the appropriate bit in the "Characteristics" field of the COFF header is set to indicate that this executable supports virtual addresses greater than 2 gigabytes. This should be used in conjunction with the /3GB or /USERVA=value megabytes switch in the "[operating systems]" section of the BOOT.INI. Otherwise, this bit has no effect. [This option is specific to PE targeted ports of the linker] --disable-large-address-aware Reverts the effect of a previous --large-address-aware option. This is useful if --large-address-aware is always set by the compiler driver (e.g. Cygwin gcc) and the executable does not support virtual addresses greater than 2 gigabytes. [This option is specific to PE targeted ports of the linker] --major-image-version value Sets the major number of the "image version". Defaults to 1. [This option is specific to the i386 PE targeted port of the linker] --major-os-version value Sets the major number of the "os version". Defaults to 4. [This option is specific to the i386 PE targeted port of the linker] --major-subsystem-version value Sets the major number of the "subsystem version". Defaults to 4. [This option is specific to the i386 PE targeted port of the linker] --minor-image-version value Sets the minor number of the "image version". Defaults to 0. [This option is specific to the i386 PE targeted port of the linker] --minor-os-version value Sets the minor number of the "os version". Defaults to 0. [This option is specific to the i386 PE targeted port of the linker] --minor-subsystem-version value Sets the minor number of the "subsystem version". Defaults to 0. [This option is specific to the i386 PE targeted port of the linker] --output-def file The linker will create the file file which will contain a DEF file corresponding to the DLL the linker is generating. This DEF file (which should be called "*.def") may be used to create an import library with "dlltool" or may be used as a reference to automatically or implicitly exported symbols. [This option is specific to the i386 PE targeted port of the linker] --enable-auto-image-base --enable-auto-image-base=value Automatically choose the image base for DLLs, optionally starting with base value, unless one is specified using the "--image-base" argument. By using a hash generated from the dllname to create unique image bases for each DLL, in-memory collisions and relocations which can delay program execution are avoided. [This option is specific to the i386 PE targeted port of the linker] --disable-auto-image-base Do not automatically generate a unique image base. 
If there is no user-specified image base ("--image-base") then use the platform default. [This option is specific to the i386 PE targeted port of the linker] --dll-search-prefix string When linking dynamically to a dll without an import library, search for "<string><basename>.dll" in preference to "lib<basename>.dll". This behaviour allows easy distinction between DLLs built for the various "subplatforms": native, cygwin, uwin, pw, etc. For instance, cygwin DLLs typically use "--dll-search-prefix=cyg". [This option is specific to the i386 PE targeted port of the linker] --enable-auto-import Do sophisticated linking of "_symbol" to "__imp__symbol" for DATA imports from DLLs, thus making it possible to bypass the dllimport mechanism on the user side and to reference unmangled symbol names. [This option is specific to the i386 PE targeted port of the linker] The following remarks pertain to the original implementation of the feature and are obsolete nowadays for Cygwin and MinGW targets. Note: Use of the 'auto-import' extension will cause the text section of the image file to be made writable. This does not conform to the PE-COFF format specification published by Microsoft. Note - use of the 'auto-import' extension will also cause read only data which would normally be placed into the .rdata section to be placed into the .data section instead. This is in order to work around a problem with consts that is described here: http://www.cygwin.com/ml/cygwin/2004-09/msg01101.html Using 'auto-import' generally will 'just work' -- but sometimes you may see this message: "variable '<var>' can't be auto-imported. Please read the documentation for ld's "--enable-auto-import" for details." This message occurs when some (sub)expression accesses an address ultimately given by the sum of two constants (Win32 import tables only allow one). Instances where this may occur include accesses to member fields of struct variables imported from a DLL, as well as using a constant index into an array variable imported from a DLL. Any multiword variable (arrays, structs, long long, etc.) may trigger this error condition. However, regardless of the exact data type of the offending exported variable, ld will always detect it, issue the warning, and exit. There are several ways to address this difficulty, regardless of the data type of the exported variable: One way is to use the --enable-runtime-pseudo-reloc switch. This leaves the task of adjusting references in your client code to the runtime environment, so this method works only when the runtime environment supports this feature. A second solution is to force one of the 'constants' to be a variable -- that is, unknown and un-optimizable at compile time. For arrays, there are two possibilities: a) make the indexee (the array's address) a variable, or b) make the 'constant' index a variable. Thus: extern type extern_array[]; extern_array[1] --> { volatile type *t=extern_array; t[1] } or extern type extern_array[]; extern_array[1] --> { volatile int t=1; extern_array[t] } For structs (and most other multiword data types) the only option is to make the struct itself (or the long long, or the ...) variable: extern struct s extern_struct; extern_struct.field --> { volatile struct s *t=&extern_struct; t->field } or extern long long extern_ll; extern_ll --> { volatile long long * local_ll=&extern_ll; *local_ll } A third method of dealing with this difficulty is to abandon 'auto-import' for the offending symbol and mark it with "__declspec(dllimport)". 
However, in practice that requires using compile-time #defines to indicate whether you are building a DLL, building client code that will link to the DLL, or merely building/linking to a static library. In making the choice between the various methods of resolving the 'direct address with constant offset' problem, you should consider typical real-world usage: Original: --foo.h extern int arr[]; --foo.c #include <stdio.h> #include "foo.h" int main(int argc, char **argv){ printf("%d\n",arr[1]); return 0; } Solution 1: --foo.h extern int arr[]; --foo.c #include <stdio.h> #include "foo.h" int main(int argc, char **argv){ /* This workaround is for win32 and cygwin; do not "optimize" */ volatile int *parr = arr; printf("%d\n",parr[1]); return 0; } Solution 2: --foo.h /* Note: auto-export is assumed (no __declspec(dllexport)) */ #if (defined(_WIN32) || defined(__CYGWIN__)) && \ !(defined(FOO_BUILD_DLL) || defined(FOO_STATIC)) #define FOO_IMPORT __declspec(dllimport) #else #define FOO_IMPORT #endif extern FOO_IMPORT int arr[]; --foo.c #include <stdio.h> #include "foo.h" int main(int argc, char **argv){ printf("%d\n",arr[1]); return 0; } A fourth way to avoid this problem is to re-code your library to use a functional interface rather than a data interface for the offending variables (e.g. set_foo() and get_foo() accessor functions). --disable-auto-import Do not attempt to do sophisticated linking of "_symbol" to "__imp__symbol" for DATA imports from DLLs. [This option is specific to the i386 PE targeted port of the linker] --enable-runtime-pseudo-reloc If your code contains expressions described in the --enable-auto-import section, that is, DATA imports from a DLL with a non-zero offset, this switch will create a vector of 'runtime pseudo relocations' which can be used by the runtime environment to adjust references to such data in your client code. [This option is specific to the i386 PE targeted port of the linker] --disable-runtime-pseudo-reloc Do not create pseudo relocations for non-zero offset DATA imports from DLLs. [This option is specific to the i386 PE targeted port of the linker] --enable-extra-pe-debug Show additional debug info related to auto-import symbol thunking. [This option is specific to the i386 PE targeted port of the linker] --section-alignment Sets the section alignment. Sections in memory will always begin at addresses which are a multiple of this number. Defaults to 0x1000. [This option is specific to the i386 PE targeted port of the linker] --stack reserve --stack reserve,commit Specify the number of bytes of memory to reserve (and optionally commit) to be used as stack for this program. The default is 2MB reserved, 4K committed. [This option is specific to the i386 PE targeted port of the linker] --subsystem which --subsystem which:major --subsystem which:major.minor Specifies the subsystem under which your program will execute. The legal values for which are "native", "windows", "console", "posix", and "xbox". You may optionally set the subsystem version also. Numeric values are also accepted for which. [This option is specific to the i386 PE targeted port of the linker] The following options set flags in the "DllCharacteristics" field of the PE file header: [These options are specific to PE targeted ports of the linker] --high-entropy-va --disable-high-entropy-va The image is compatible with 64-bit address space layout randomization (ASLR). This option is enabled by default for 64-bit PE images. This option also implies --dynamicbase and --enable-reloc-section. 
--dynamicbase --disable-dynamicbase The image base address may be relocated using address space layout randomization (ASLR). This feature was introduced with MS Windows Vista for i386 PE targets. This option is enabled by default but can be disabled via the --disable-dynamicbase option. This option also implies --enable-reloc-section. --forceinteg --disable-forceinteg Code integrity checks are enforced. This option is disabled by default. --nxcompat --disable-nxcompat The image is compatible with Data Execution Prevention (DEP). This feature was introduced with MS Windows XP SP2 for i386 PE targets. The option is enabled by default. --no-isolation --disable-no-isolation Although the image understands isolation, do not isolate the image. This option is disabled by default. --no-seh --disable-no-seh The image does not use SEH. No SE handler may be called from this image. This option is disabled by default. --no-bind --disable-no-bind Do not bind this image. This option is disabled by default. --wdmdriver --disable-wdmdriver The driver uses the MS Windows Driver Model. This option is disabled by default. --tsaware --disable-tsaware The image is Terminal Server aware. This option is disabled by default. --insert-timestamp --no-insert-timestamp Insert a real timestamp into the image. This is the default behaviour as it matches legacy code and it means that the image will work with other, proprietary tools. The problem with this default is that it will result in slightly different images being produced each time the same sources are linked. The option --no-insert-timestamp can be used to insert a zero value for the timestamp, thus ensuring that binaries produced from identical sources will compare identically. --enable-reloc-section --disable-reloc-section Create the base relocation table, which is necessary if the image is loaded at a different image base than specified in the PE header. This option is enabled by default. The C6X uClinux target uses a binary format called DSBT to support shared libraries. Each shared library in the system needs to have a unique index; all executables use an index of 0. --dsbt-size size This option sets the number of entries in the DSBT of the current executable or shared library to size. The default is to create a table with 64 entries. --dsbt-index index This option sets the DSBT index of the current executable or shared library to index. The default is 0, which is appropriate for generating executables. If a shared library is generated with a DSBT index of 0, the "R_C6000_DSBT_INDEX" relocs are copied into the output file. The --no-merge-exidx-entries switch disables the merging of adjacent exidx entries in frame unwind info. --branch-stub This option enables linker branch relaxation by inserting branch stub sections when needed to extend the range of branches. This option is usually not required since C-SKY supports branch and call instructions that can access the full memory range and branch relaxation is normally handled by the compiler or assembler. --stub-group-size=N This option allows finer control of linker branch stub creation. It sets the maximum size of a group of input sections that can be handled by one stub section. A negative value of N locates stub sections after their branches, while a positive value allows stub sections to appear either before or after the branches. Values of 1 or -1 indicate that the linker should choose suitable defaults. 
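Stepping back to the PE "DllCharacteristics" flags listed above, a hardened, reproducible link might look like the following minimal sketch (the file names are hypothetical): ld -o app.exe app.o --dynamicbase --nxcompat --no-insert-timestamp This enables ASLR and DEP compatibility and writes a zero timestamp, so that identical sources produce byte-identical binaries. 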
The 68HC11 and 68HC12 linkers support specific options to control the memory bank switching mapping and trampoline code generation. --no-trampoline This option disables the generation of trampolines. By default a trampoline is generated for each far function which is called using a "jsr" instruction (this happens when a pointer to a far function is taken). --bank-window name This option indicates to the linker the name of the memory region in the MEMORY specification that describes the memory bank window. The definition of such a region is then used by the linker to compute paging and addresses within the memory window. The following options are supported to control handling of GOT generation when linking for 68K targets. --got=type This option tells the linker which GOT generation scheme to use. type should be one of single, negative, multigot or target. For more information refer to the Info entry for ld. The following options are supported to control microMIPS instruction generation and branch relocation checks for ISA mode transitions when linking for MIPS targets. --insn32 --no-insn32 These options control the choice of microMIPS instructions used in code generated by the linker, such as that in the PLT or lazy binding stubs, or in relaxation. If --insn32 is used, then the linker only uses 32-bit instruction encodings. By default or if --no-insn32 is used, all instruction encodings are used, including 16-bit ones where possible. --ignore-branch-isa --no-ignore-branch-isa These options control branch relocation checks for invalid ISA mode transitions. If --ignore-branch-isa is used, then the linker accepts any branch relocations and any ISA mode transition required is lost in relocation calculation, except for some cases of "BAL" instructions which meet relaxation conditions and are converted to equivalent "JALX" instructions as the associated relocation is calculated. By default or if --no-ignore-branch-isa is used a check is made causing the loss of an ISA mode transition to produce an error. --compact-branches --no-compact-branches These options control the generation of compact instructions by the linker in the PLT entries for MIPS R6. For the pdp11-aout target, three variants of the output format can be produced as selected by the following options. The default variant for pdp11-aout is the --omagic option, whereas for other targets --nmagic is the default. The --imagic option is defined only for the pdp11-aout target, while the others are described here as they apply to the pdp11-aout target. -N --omagic Mark the output as "OMAGIC" (0407) in the a.out header to indicate that the text segment is not to be write-protected and shared. Since the text and data sections are both readable and writable, the data section is allocated contiguously, immediately after the text segment. This is the oldest format for PDP11 executable programs and is the default for ld on PDP11 Unix systems from the beginning through 2.11BSD. -n --nmagic Mark the output as "NMAGIC" (0410) in the a.out header to indicate that when the output file is executed, the text portion will be read-only and shareable among all processes executing the same file. This involves moving the data areas up to the first possible 8K byte page boundary following the end of the text. This option creates a pure executable format. 
-z --imagic Mark the output as "IMAGIC" (0411) in the a.out header to indicate that when the output file is executed, the program text and data areas will be loaded into separate address spaces using the split instruction and data space feature of the memory management unit in larger models of the PDP11. This doubles the address space available to the program. The text segment is again pure, write-protected, and shareable. The only difference in the output format between this option and the others, besides the magic number, is that both the text and data sections start at location 0. The -z option selected this format in 2.11BSD. This option creates a separate executable format. --no-omagic Equivalent to --nmagic for pdp11-aout.
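To make the choice of pdp11-aout output variant concrete, a minimal sketch (object and output file names hypothetical): ld -N -o a.out main.o (OMAGIC: writable, contiguous text and data); ld -n -o a.out main.o (NMAGIC: read-only, shareable text); ld -z -o a.out main.o (IMAGIC: separate instruction and data spaces). 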
# ld > Link object files together. More information: > https://sourceware.org/binutils/docs-2.38/ld.html. * Link a specific object file with no dependencies into an executable: `ld {{path/to/file.o}} --output {{path/to/output_executable}}` * Link two object files together: `ld {{path/to/file1.o}} {{path/to/file2.o}} --output {{path/to/output_executable}}` * Dynamically link an x86_64 program to glibc (file paths change depending on the system): `ld --output {{path/to/output_executable}} --dynamic-linker /lib/ld-linux-x86-64.so.2 /lib/crt1.o /lib/crti.o -lc {{path/to/file.o}} /lib/crtn.o`
git-commit
Create a new commit containing the current contents of the index and the given log message describing the changes. The new commit is a direct child of HEAD, usually the tip of the current branch, and the branch is updated to point to it (unless no branch is associated with the working tree, in which case HEAD is "detached" as described in git-checkout(1)). The content to be committed can be specified in several ways: 1. by using git-add(1) to incrementally "add" changes to the index before using the commit command (Note: even modified files must be "added"); 2. by using git-rm(1) to remove files from the working tree and the index, again before using the commit command; 3. by listing files as arguments to the commit command (without --interactive or --patch switch), in which case the commit will ignore changes staged in the index, and instead record the current content of the listed files (which must already be known to Git); 4. by using the -a switch with the commit command to automatically "add" changes from all known files (i.e. all files that are already listed in the index) and to automatically "rm" files in the index that have been removed from the working tree, and then perform the actual commit; 5. by using the --interactive or --patch switches with the commit command to decide one by one which files or hunks should be part of the commit in addition to contents in the index, before finalizing the operation. See the “Interactive Mode” section of git-add(1) to learn how to operate these modes. The --dry-run option can be used to obtain a summary of what is included by any of the above for the next commit by giving the same set of parameters (options and paths). If you make a commit and then find a mistake immediately after that, you can recover from it with git reset. -a, --all Tell the command to automatically stage files that have been modified and deleted, but new files you have not told Git about are not affected. -p, --patch Use the interactive patch selection interface to choose which changes to commit. See git-add(1) for details. -C <commit>, --reuse-message=<commit> Take an existing commit object, and reuse the log message and the authorship information (including the timestamp) when creating the commit. -c <commit>, --reedit-message=<commit> Like -C, but with -c the editor is invoked, so that the user can further edit the commit message. --fixup=[(amend|reword):]<commit> Create a new commit which "fixes up" <commit> when applied with git rebase --autosquash. Plain --fixup=<commit> creates a "fixup!" commit which changes the content of <commit> but leaves its log message untouched. --fixup=amend:<commit> is similar but creates an "amend!" commit which also replaces the log message of <commit> with the log message of the "amend!" commit. --fixup=reword:<commit> creates an "amend!" commit which replaces the log message of <commit> with its own log message but makes no changes to the content of <commit>. The commit created by plain --fixup=<commit> has a subject composed of "fixup!" followed by the subject line from <commit>, and is recognized specially by git rebase --autosquash. The -m option may be used to supplement the log message of the created commit, but the additional commentary will be thrown away once the "fixup!" commit is squashed into <commit> by git rebase --autosquash. The commit created by --fixup=amend:<commit> is similar but its subject is instead prefixed with "amend!". The log message of <commit> is copied into the log message of the "amend!" 
commit and opened in an editor so it can be refined. When git rebase --autosquash squashes the "amend!" commit into <commit>, the log message of <commit> is replaced by the refined log message from the "amend!" commit. It is an error for the "amend!" commit’s log message to be empty unless --allow-empty-message is specified. --fixup=reword:<commit> is shorthand for --fixup=amend:<commit> --only. It creates an "amend!" commit with only a log message (ignoring any changes staged in the index). When squashed by git rebase --autosquash, it replaces the log message of <commit> without making any other changes. Neither "fixup!" nor "amend!" commits change authorship of <commit> when applied by git rebase --autosquash. See git-rebase(1) for details. --squash=<commit> Construct a commit message for use with rebase --autosquash. The commit message subject line is taken from the specified commit with a prefix of "squash! ". Can be used with additional commit message options (-m/-c/-C/-F). See git-rebase(1) for details. --reset-author When used with -C/-c/--amend options, or when committing after a conflicting cherry-pick, declare that the authorship of the resulting commit now belongs to the committer. This also renews the author timestamp. --short When doing a dry-run, give the output in the short-format. See git-status(1) for details. Implies --dry-run. --branch Show the branch and tracking info even in short-format. --porcelain When doing a dry-run, give the output in a porcelain-ready format. See git-status(1) for details. Implies --dry-run. --long When doing a dry-run, give the output in the long-format. Implies --dry-run. -z, --null When showing short or porcelain status output, print the filename verbatim and terminate the entries with NUL, instead of LF. If no format is given, implies the --porcelain output format. Without the -z option, filenames with "unusual" characters are quoted as explained for the configuration variable core.quotePath (see git-config(1)). -F <file>, --file=<file> Take the commit message from the given file. Use - to read the message from the standard input. --author=<author> Override the commit author. Specify an explicit author using the standard A U Thor <[email protected]> format. Otherwise <author> is assumed to be a pattern and is used to search for an existing commit by that author (i.e. rev-list --all -i --author=<author>); the commit author is then copied from the first such commit found. --date=<date> Override the author date used in the commit. -m <msg>, --message=<msg> Use the given <msg> as the commit message. If multiple -m options are given, their values are concatenated as separate paragraphs. The -m option is mutually exclusive with -c, -C, and -F. -t <file>, --template=<file> When editing the commit message, start the editor with the contents in the given file. The commit.template configuration variable is often used to give this option implicitly to the command. This mechanism can be used by projects that want to guide participants with some hints on what to write in the message in what order. If the user exits the editor without editing the message, the commit is aborted. This has no effect when a message is given by other means, e.g. with the -m or -F options. -s, --signoff, --no-signoff Add a Signed-off-by trailer by the committer at the end of the commit log message. The meaning of a signoff depends on the project to which you’re committing. 
For example, it may certify that the committer has the rights to submit the work under the project’s license or agrees to some contributor representation, such as a Developer Certificate of Origin. (See http://developercertificate.org for the one used by the Linux kernel and Git projects.) Consult the documentation or leadership of the project to which you’re contributing to understand how the signoffs are used in that project. The --no-signoff option can be used to countermand an earlier --signoff option on the command line. --trailer <token>[(=|:)<value>] Specify a (<token>, <value>) pair that should be applied as a trailer. (e.g. git commit --trailer "Signed-off-by:C O Mitter <[email protected]>" --trailer "Helped-by:C O Mitter <[email protected]>" will add the "Signed-off-by" trailer and the "Helped-by" trailer to the commit message.) The trailer.* configuration variables (see git-interpret-trailers(1)) can be used to define if a duplicated trailer is omitted, where in the run of trailers each trailer would appear, and other details. -n, --[no-]verify By default, the pre-commit and commit-msg hooks are run. When any of --no-verify or -n is given, these are bypassed. See also githooks(5). --allow-empty Usually recording a commit that has the exact same tree as its sole parent commit is a mistake, and the command prevents you from making such a commit. This option bypasses the safety, and is primarily for use by foreign SCM interface scripts. --allow-empty-message Like --allow-empty, this option is primarily for use by foreign SCM interface scripts. It allows you to create a commit with an empty commit message without using plumbing commands like git-commit-tree(1). --cleanup=<mode> This option determines how the supplied commit message should be cleaned up before committing. The <mode> can be strip, whitespace, verbatim, scissors or default. strip Strip leading and trailing empty lines, trailing whitespace, commentary and collapse consecutive empty lines. whitespace Same as strip except #commentary is not removed. verbatim Do not change the message at all. scissors Same as whitespace except that everything from (and including) the line found below is truncated, if the message is to be edited. "#" can be customized with core.commentChar. # ------------------------ >8 ------------------------ default Same as strip if the message is to be edited. Otherwise whitespace. The default can be changed by the commit.cleanup configuration variable (see git-config(1)). -e, --edit The message taken from file with -F, command line with -m, and from commit object with -C are usually used as the commit log message unmodified. This option lets you further edit the message taken from these sources. --no-edit Use the selected commit message without launching an editor. For example, git commit --amend --no-edit amends a commit without changing its commit message. --amend Replace the tip of the current branch by creating a new commit. The recorded tree is prepared as usual (including the effect of the -i and -o options and explicit pathspec), and the message from the original commit is used as the starting point, instead of an empty message, when no other message is specified from the command line via options such as -m, -F, -c, etc. The new commit has the same parents and author as the current one (the --reset-author option can countermand this). It is a rough equivalent for: $ git reset --soft HEAD^ $ ... do something else to come up with the right tree ... 
$ git commit -c ORIG_HEAD but can be used to amend a merge commit. You should understand the implications of rewriting history if you amend a commit that has already been published. (See the "RECOVERING FROM UPSTREAM REBASE" section in git-rebase(1).) --no-post-rewrite Bypass the post-rewrite hook. -i, --include Before making a commit out of staged contents so far, stage the contents of paths given on the command line as well. This is usually not what you want unless you are concluding a conflicted merge. -o, --only Make a commit by taking the updated working tree contents of the paths specified on the command line, disregarding any contents that have been staged for other paths. This is the default mode of operation of git commit if any paths are given on the command line, in which case this option can be omitted. If this option is specified together with --amend, then no paths need to be specified, which can be used to amend the last commit without committing changes that have already been staged. If used together with --allow-empty, paths are also not required, and an empty commit will be created. --pathspec-from-file=<file> Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used. Pathspec elements are separated by LF or CR/LF. Pathspec elements can be quoted as explained for the configuration variable core.quotePath (see git-config(1)). See also --pathspec-file-nul and global --literal-pathspecs. --pathspec-file-nul Only meaningful with --pathspec-from-file. Pathspec elements are separated with a NUL character and all other characters are taken literally (including newlines and quotes). -u[<mode>], --untracked-files[=<mode>] Show untracked files. The mode parameter is optional (defaults to all), and is used to specify the handling of untracked files; when -u is not used, the default is normal, i.e. show untracked files and directories. The possible options are: • no - Show no untracked files • normal - Shows untracked files and directories • all - Also shows individual files in untracked directories. The default can be changed using the status.showUntrackedFiles configuration variable documented in git-config(1). -v, --verbose Show the unified diff between the HEAD commit and what would be committed at the bottom of the commit message template to help the user describe the commit by reminding the user what changes the commit has. Note that this diff output doesn’t have its lines prefixed with #. This diff will not be a part of the commit message. See the commit.verbose configuration variable in git-config(1). If specified twice, show in addition the unified diff between what would be committed and the worktree files, i.e. the unstaged changes to tracked files. -q, --quiet Suppress commit summary message. --dry-run Do not create a commit, but show a list of paths that are to be committed, paths with local changes that will be left uncommitted and paths that are untracked. --status Include the output of git-status(1) in the commit message template when using an editor to prepare the commit message. Defaults to on, but can be used to override configuration variable commit.status. --no-status Do not include the output of git-status(1) in the commit message template when using an editor to prepare the default commit message. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. 
--no-gpg-sign is useful to countermand both the commit.gpgSign configuration variable, and an earlier --gpg-sign. -- Do not interpret any more arguments as options. <pathspec>... When pathspec is given on the command line, commit the contents of the files that match the pathspec without recording the changes already added to the index. The contents of these files are also staged for the next commit on top of what has been staged before. For more details, see the pathspec entry in gitglossary(7).
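As a minimal sketch of the --fixup/autosquash workflow described above (the commit references HEAD~2 and HEAD~4 are only examples): $ git commit --fixup=HEAD~2 followed by $ git rebase -i --autosquash HEAD~4. The first command records a "fixup!" commit targeting HEAD~2; the rebase then squashes it into that commit. Similarly, the behaviour of repeated -m options can be seen with git commit -m "subject" -m "body", which records the two strings as separate paragraphs of a single message.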
# git commit > Commit files to the repository. More information: https://git-scm.com/docs/git-commit. * Commit staged files to the repository with a message: `git commit --message "{{message}}"` * Commit staged files with a message read from a file: `git commit --file {{path/to/commit_message_file}}` * Auto stage all modified and deleted files and commit with a message: `git commit --all --message "{{message}}"` * Commit staged files and sign them with the specified GPG key (or the one defined in the config file if no argument is specified): `git commit --gpg-sign={{key_id}} --message "{{message}}"` * Update the last commit by adding the currently staged changes, changing the commit's hash: `git commit --amend` * Commit only specific (already staged) files: `git commit {{path/to/file1}} {{path/to/file2}}` * Create a commit, even if there are no staged files: `git commit --message "{{message}}" --allow-empty`
xargs
The xargs utility shall construct a command line consisting of the utility and argument operands specified followed by as many arguments read in sequence from standard input as fit in length and number constraints specified by the options. The xargs utility shall then invoke the constructed command line and wait for its completion. This sequence shall be repeated until one of the following occurs: * An end-of-file condition is detected on standard input. * An argument consisting of just the logical end-of-file string (see the -E eofstr option) is found on standard input after double-quote processing, <apostrophe> processing, and <backslash>-escape processing (see next paragraph). All arguments up to but not including the argument consisting of just the logical end-of-file string shall be used as arguments in constructed command lines. * An invocation of a constructed command line returns an exit status of 255. The application shall ensure that arguments in the standard input are separated by unquoted <blank> characters, unescaped <blank> characters, or <newline> characters. A string of zero or more non-double-quote ('"') characters and non-<newline> characters can be quoted by enclosing them in double-quotes. A string of zero or more non-<apostrophe> ('\'') characters and non-<newline> characters can be quoted by enclosing them in <apostrophe> characters. Any unquoted character can be escaped by preceding it with a <backslash>. The utility named by utility shall be executed one or more times until the end-of-file is reached or the logical end-of-file string is found. The results are unspecified if the utility named by utility attempts to read from its standard input. The generated command line length shall be the sum of the size in bytes of the utility name and each argument treated as strings, including a null byte terminator for each of these strings. The xargs utility shall limit the command line length such that when the command line is invoked, the combined argument and environment lists (see the exec family of functions in the System Interfaces volume of POSIX.1‐2017) shall not exceed {ARG_MAX}-2048 bytes. Within this constraint, if neither the -n nor the -s option is specified, the default command line length shall be at least {LINE_MAX}. The xargs utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported: -E eofstr Use eofstr as the logical end-of-file string. If -E is not specified, it is unspecified whether the logical end-of-file string is the <underscore> character ('_') or the end-of-file string capability is disabled. When eofstr is the null string, the logical end-of-file string capability shall be disabled and <underscore> characters shall be taken literally. -I replstr Insert mode: utility is executed for each logical line from standard input. Arguments in the standard input shall be separated only by unescaped <newline> characters, not by <blank> characters. Any unquoted unescaped <blank> characters at the beginning of each line shall be ignored. The resulting argument shall be inserted in arguments in place of each occurrence of replstr. At least five arguments in arguments can each contain one or more instances of replstr. Each of these constructed arguments cannot grow larger than an implementation-defined limit greater than or equal to 255 bytes. Option -x shall be forced on. 
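A small illustration of insert mode: the command below runs echo once per input line, substituting each line for the replstr {} (the input strings are arbitrary examples): printf '%s\n' one two | xargs -I {} echo "item: {}" This writes "item: one" and then "item: two". 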
-L number The utility shall be executed for each non-empty number lines of arguments from standard input. The last invocation of utility shall be with fewer lines of arguments if fewer than number remain. A line is considered to end with the first <newline> unless the last character of the line is an unescaped <blank>; a trailing unescaped <blank> signals continuation to the next non-empty line, inclusive. -n number Invoke utility using as many standard input arguments as possible, up to number (a positive decimal integer) arguments maximum. Fewer arguments shall be used if: * The command line length accumulated exceeds the size specified by the -s option (or {LINE_MAX} if there is no -s option). * The last iteration has fewer than number, but not zero, operands remaining. -p Prompt mode: the user is asked whether to execute utility at each invocation. Trace mode (-t) is turned on to write the command instance to be executed, followed by a prompt to standard error. An affirmative response read from /dev/tty shall execute the command; otherwise, that particular invocation of utility shall be skipped. -s size Invoke utility using as many standard input arguments as possible yielding a command line length less than size (a positive decimal integer) bytes. Fewer arguments shall be used if: * The total number of arguments exceeds that specified by the -n option. * The total number of lines exceeds that specified by the -L option. * End-of-file is encountered on standard input before size bytes are accumulated. Values of size up to at least {LINE_MAX} bytes shall be supported, provided that the constraints specified in the DESCRIPTION are met. It shall not be considered an error if a value larger than that supported by the implementation or exceeding the constraints specified in the DESCRIPTION is given; xargs shall use the largest value it supports within the constraints. -t Enable trace mode. Each generated command line shall be written to standard error just prior to invocation. -x Terminate if a constructed command line will not fit in the implied or specified size (see the -s option above).
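The interaction of -n and -t can be observed with a quick sketch: printf '%s\n' 1 2 3 4 5 | xargs -n 2 -t echo This invokes echo three times (with two, two, and one arguments), writing each generated command line to standard error just before running it.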
# xargs > Execute a command with piped arguments coming from another command, a file, > etc. The input is treated as a single block of text and split into separate > pieces on spaces, tabs, newlines and end-of-file. More information: > https://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html. * Run a command using the input data as arguments: `{{arguments_source}} | xargs {{command}}` * Run multiple chained commands on the input data: `{{arguments_source}} | xargs sh -c "{{command1}} && {{command2}} | {{command3}}"` * Delete all files with a `.backup` extension (`-print0` uses a null character to split file names, and `-0` uses it as delimiter): `find . -name {{'*.backup'}} -print0 | xargs -0 rm -v` * Execute the command once for each input line, replacing any occurrences of the placeholder (here marked as `_`) with the input line: `{{arguments_source}} | xargs -I _ {{command}} _ {{optional_extra_arguments}}` * Parallel runs of up to `max-procs` processes at a time; the default is 1. If `max-procs` is 0, xargs will run as many processes as possible at a time: `{{arguments_source}} | xargs -P {{max-procs}} {{command}}`
stty
Print or change terminal characteristics. Mandatory arguments to long options are mandatory for short options too. -a, --all print all current settings in human-readable form -g, --save print all current settings in a stty-readable form -F, --file=DEVICE open and use the specified DEVICE instead of stdin --help display this help and exit --version output version information and exit Optional - before SETTING indicates negation. An * marks non-POSIX settings. The underlying system defines which settings are available. Special characters: * discard CHAR CHAR will toggle discarding of output eof CHAR CHAR will send an end of file (terminate the input) eol CHAR CHAR will end the line * eol2 CHAR alternate CHAR for ending the line erase CHAR CHAR will erase the last character typed intr CHAR CHAR will send an interrupt signal kill CHAR CHAR will erase the current line * lnext CHAR CHAR will enter the next character quoted quit CHAR CHAR will send a quit signal * rprnt CHAR CHAR will redraw the current line start CHAR CHAR will restart the output after stopping it stop CHAR CHAR will stop the output susp CHAR CHAR will send a terminal stop signal * swtch CHAR CHAR will switch to a different shell layer * werase CHAR CHAR will erase the last word typed Special settings: N set the input and output speeds to N bauds * cols N tell the kernel that the terminal has N columns * columns N same as cols N * [-]drain wait for transmission before applying settings (on by default) ispeed N set the input speed to N * line N use line discipline N min N with -icanon, set N characters minimum for a completed read ospeed N set the output speed to N * rows N tell the kernel that the terminal has N rows * size print the number of rows and columns according to the kernel speed print the terminal speed time N with -icanon, set read timeout of N tenths of a second Control settings: [-]clocal disable modem control signals [-]cread allow input to be received * [-]crtscts enable RTS/CTS handshaking csN set character size to N bits, N in [5..8] [-]cstopb use two stop bits per character (one with '-') [-]hup send a hangup signal when the last process closes the tty [-]hupcl same as [-]hup [-]parenb generate parity bit in output and expect parity bit in input [-]parodd set odd parity (or even parity with '-') * [-]cmspar use "stick" (mark/space) parity Input settings: [-]brkint breaks cause an interrupt signal [-]icrnl translate carriage return to newline [-]ignbrk ignore break characters [-]igncr ignore carriage return [-]ignpar ignore characters with parity errors * [-]imaxbel beep and do not flush a full input buffer on a character [-]inlcr translate newline to carriage return [-]inpck enable input parity checking [-]istrip clear high (8th) bit of input characters * [-]iutf8 assume input characters are UTF-8 encoded * [-]iuclc translate uppercase characters to lowercase * [-]ixany let any character restart output, not only start character [-]ixoff enable sending of start/stop characters [-]ixon enable XON/XOFF flow control [-]parmrk mark parity errors (with a 255-0-character sequence) [-]tandem same as [-]ixoff Output settings: * bsN backspace delay style, N in [0..1] * crN carriage return delay style, N in [0..3] * ffN form feed delay style, N in [0..1] * nlN newline delay style, N in [0..1] * [-]ocrnl translate carriage return to newline * [-]ofdel use delete characters for fill instead of NUL characters * [-]ofill use fill (padding) characters instead of timing for delays * [-]olcuc translate lowercase characters to 
uppercase * [-]onlcr translate newline to carriage return-newline * [-]onlret newline performs a carriage return * [-]onocr do not print carriage returns in the first column [-]opost postprocess output * tabN horizontal tab delay style, N in [0..3] * tabs same as tab0 * -tabs same as tab3 * vtN vertical tab delay style, N in [0..1] Local settings: [-]crterase echo erase characters as backspace-space-backspace * crtkill kill all line by obeying the echoprt and echoe settings * -crtkill kill all line by obeying the echoctl and echok settings * [-]ctlecho echo control characters in hat notation ('^c') [-]echo echo input characters * [-]echoctl same as [-]ctlecho [-]echoe same as [-]crterase [-]echok echo a newline after a kill character * [-]echoke same as [-]crtkill [-]echonl echo newline even if not echoing other characters * [-]echoprt echo erased characters backward, between '\' and '/' * [-]extproc enable "LINEMODE"; useful with high latency links * [-]flusho discard output [-]icanon enable special characters: erase, kill, werase, rprnt [-]iexten enable non-POSIX special characters [-]isig enable interrupt, quit, and suspend special characters [-]noflsh disable flushing after interrupt and quit special characters * [-]prterase same as [-]echoprt * [-]tostop stop background jobs that try to write to the terminal * [-]xcase with icanon, escape with '\' for uppercase characters Combination settings: * [-]LCASE same as [-]lcase cbreak same as -icanon -cbreak same as icanon cooked same as brkint ignpar istrip icrnl ixon opost isig icanon, eof and eol characters to their default values -cooked same as raw crt same as echoe echoctl echoke dec same as echoe echoctl echoke -ixany intr ^c erase 0177 kill ^u * [-]decctlq same as [-]ixany ek erase and kill characters to their default values evenp same as parenb -parodd cs7 -evenp same as -parenb cs8 * [-]lcase same as xcase iuclc olcuc litout same as -parenb -istrip -opost cs8 -litout same as parenb istrip opost cs7 nl same as -icrnl -onlcr -nl same as icrnl -inlcr -igncr onlcr -ocrnl -onlret oddp same as parenb parodd cs7 -oddp same as -parenb cs8 [-]parity same as [-]evenp pass8 same as -parenb -istrip cs8 -pass8 same as parenb istrip cs7 raw same as -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -icanon -opost -isig -iuclc -ixany -imaxbel -xcase min 1 time 0 -raw same as cooked sane same as cread -ignbrk brkint -inlcr -igncr icrnl icanon iexten echo echoe echok -echonl -noflsh -ixoff -iutf8 -iuclc -ixany imaxbel -xcase -olcuc -ocrnl opost -ofill onlcr -onocr -onlret nl0 cr0 tab0 bs0 vt0 ff0 isig -tostop -ofdel -echoprt echoctl echoke -extproc -flusho, all special characters to their default values Handle the tty line connected to standard input. Without arguments, prints baud rate, line discipline, and deviations from stty sane. In settings, CHAR is taken literally, or coded as in ^c, 0x37, 0177 or 127; special values ^- or undef used to disable special characters.
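A common idiom built on the -g output is to save the current settings, change them temporarily, and restore them afterwards, for example while reading a password (a sketch for POSIX shells; the variable names are illustrative): saved=$(stty -g); stty -echo; read -r password; stty "$saved"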
# stty > Set options for a terminal device interface. More information: > https://www.gnu.org/software/coreutils/stty. * Display all settings for the current terminal: `stty --all` * Set the number of rows or columns: `stty {{rows|cols}} {{count}}` * Get the actual transfer speed of a device: `stty --file {{path/to/device_file}} speed` * Reset all modes to reasonable values for the current terminal: `stty sane`
git-ls-files
This merges the file listing in the index with the actual working directory list, and shows different combinations of the two. One or more of the options below may be used to determine the files shown, and each file may be printed multiple times if there are multiple entries in the index or multiple statuses are applicable for the relevant file selection options. -c, --cached Show all files cached in Git’s index, i.e. all tracked files. (This is the default if no -c/-s/-d/-o/-u/-k/-m/--resolve-undo options are specified.) -d, --deleted Show files with an unstaged deletion -m, --modified Show files with an unstaged modification (note that an unstaged deletion also counts as an unstaged modification) -o, --others Show other (i.e. untracked) files in the output -i, --ignored Show only ignored files in the output. Must be used with either an explicit -c or -o. When showing files in the index (i.e. when used with -c), print only those files matching an exclude pattern. When showing "other" files (i.e. when used with -o), show only those matched by an exclude pattern. Standard ignore rules are not automatically activated, therefore at least one of the --exclude* options is required. -s, --stage Show staged contents' mode bits, object name and stage number in the output. --directory If a whole directory is classified as "other", show just its name (with a trailing slash) and not its whole contents. Has no effect without -o/--others. --no-empty-directory Do not list empty directories. Has no effect without --directory. -u, --unmerged Show information about unmerged files in the output, but do not show any other tracked files (forces --stage, overrides --cached). -k, --killed Show untracked files on the filesystem that need to be removed due to file/directory conflicts for tracked files to be able to be written to the filesystem. --resolve-undo Show files having resolve-undo information in the index together with their resolve-undo information. (resolve-undo information is what is used to implement "git checkout -m $PATH", i.e. to recreate merge conflicts that were accidentally resolved) -z \0 line termination on output and do not quote filenames. See OUTPUT below for more information. --deduplicate When only filenames are shown, suppress duplicates that may come from having multiple stages during a merge, or giving --deleted and --modified option at the same time. When any of the -t, --unmerged, or --stage option is in use, this option has no effect. -x <pattern>, --exclude=<pattern> Skip untracked files matching pattern. Note that pattern is a shell wildcard pattern. See EXCLUDE PATTERNS below for more information. -X <file>, --exclude-from=<file> Read exclude patterns from <file>; 1 per line. --exclude-per-directory=<file> Read additional exclude patterns that apply only to the directory and its subdirectories in <file>. Deprecated; use --exclude-standard instead. --exclude-standard Add the standard Git exclusions: .git/info/exclude, .gitignore in each directory, and the user’s global exclusion file. --error-unmatch If any <file> does not appear in the index, treat this as an error (return 1). --with-tree=<tree-ish> When using --error-unmatch to expand the user supplied <file> (i.e. path pattern) arguments to paths, pretend that paths which were removed in the index since the named <tree-ish> are still present. Using this option with -s or -u options does not make any sense. -t Show status tags together with filenames. 
Note that for scripting purposes, git-status(1) --porcelain and git-diff-files(1) --name-status are almost always superior alternatives, and users should look at git-status(1) --short or git-diff(1) --name-status for more user-friendly alternatives. This option provides a reason for showing each filename, in the form of a status tag (which is followed by a space and then the filename). The status tags are all single characters from the following list: H tracked file that is not either unmerged or skip-worktree S tracked file that is skip-worktree M tracked file that is unmerged R tracked file with unstaged removal/deletion C tracked file with unstaged modification/change K untracked paths which are part of file/directory conflicts which prevent checking out tracked files ? untracked file U file with resolve-undo information -v Similar to -t, but use lowercase letters for files that are marked as assume unchanged (see git-update-index(1)). -f Similar to -t, but use lowercase letters for files that are marked as fsmonitor valid (see git-update-index(1)). --full-name When run from a subdirectory, the command usually outputs paths relative to the current directory. This option forces paths to be output relative to the project top directory. --recurse-submodules Recursively calls ls-files on each active submodule in the repository. Currently there is only support for the --cached and --stage modes. --abbrev[=<n>] Instead of showing the full 40-byte hexadecimal object lines, show the shortest prefix that is at least <n> hexdigits long that uniquely refers to the object. A non-default number of digits can be specified with --abbrev=<n>. --debug After each line that describes a file, add more data about its cache entry. This is intended to show as much information as possible for manual inspection; the exact format may change at any time. --eol Show <eolinfo> and <eolattr> of files. <eolinfo> is the file content identification used by Git when the "text" attribute is "auto" (or not set and core.autocrlf is not false). <eolinfo> is either "-text", "none", "lf", "crlf", "mixed" or "". "" means the file is not a regular file, it is not in the index or not accessible in the working tree. <eolattr> is the attribute that is used when checking out or committing, it is either "", "-text", "text", "text=auto", "text eol=lf", "text eol=crlf". Since Git 2.10 "text=auto eol=lf" and "text=auto eol=crlf" are supported. Both the <eolinfo> in the index ("i/<eolinfo>") and in the working tree ("w/<eolinfo>") are shown for regular files, followed by the ("attr/<eolattr>"). --sparse If the index is sparse, show the sparse directories without expanding to the contained files. Sparse directories will be shown with a trailing slash, such as "x/" for a sparse directory "x". --format=<format> A string that interpolates %(fieldname) from the result being shown. It also interpolates %% to %, and %xx where xx are hex digits interpolates to the character with hex code xx; for example %00 interpolates to \0 (NUL), %09 to \t (TAB) and %0a to \n (LF). --format cannot be combined with -s, -o, -k, -t, --resolve-undo and --eol. -- Do not interpret any more arguments as options. <file> Files to show. If no files are given, all files which match the other specified criteria are shown.
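Two small sketches of how these options combine (the paths are hypothetical): --error-unmatch turns ls-files into a tracking test, git ls-files --error-unmatch path/to/file && echo tracked || echo untracked and, because -z terminates entries with NUL, the output pairs safely with xargs -0, for example to stage every file with an unstaged modification: git ls-files -z --modified | xargs -0 git add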
# git ls-files > Show information about files in the index and the working tree. More > information: https://git-scm.com/docs/git-ls-files. * Show deleted files: `git ls-files --deleted` * Show modified and deleted files: `git ls-files --modified` * Show untracked files, including ignored ones: `git ls-files --others` * Show untracked files, not ignored: `git ls-files --others --exclude-standard`
shred
Overwrite the specified FILE(s) repeatedly, in order to make it harder for even very expensive hardware probing to recover the data. If FILE is -, shred standard output. Mandatory arguments to long options are mandatory for short options too. -f, --force change permissions to allow writing if necessary -n, --iterations=N overwrite N times instead of the default (3) --random-source=FILE get random bytes from FILE -s, --size=N shred this many bytes (suffixes like K, M, G accepted) -u deallocate and remove file after overwriting --remove[=HOW] like -u but give control on HOW to delete; See below -v, --verbose show progress -x, --exact do not round file sizes up to the next full block; this is the default for non-regular files -z, --zero add a final overwrite with zeros to hide shredding --help display this help and exit --version output version information and exit Delete FILE(s) if --remove (-u) is specified. The default is not to remove the files because it is common to operate on device files like /dev/hda, and those files usually should not be removed. The optional HOW parameter indicates how to remove a directory entry: 'unlink' => use a standard unlink call. 'wipe' => also first obfuscate bytes in the name. 'wipesync' => also sync each obfuscated byte to the device. The default mode is 'wipesync', but note it can be expensive. CAUTION: shred assumes the file system and hardware overwrite data in place. Although this is common, many platforms operate otherwise. Also, backups and mirrors may contain unremovable copies that will let a shredded file be recovered later. See the GNU coreutils manual for details.
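A minimal sketch combining the options above (the filename is illustrative): five random passes, a final pass of zeros, verbose progress, and deallocation and removal of the file afterwards:

    shred -v -n 5 -z -u secret-keys.bin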
# shred > Overwrite files to securely delete data. More information: > https://www.gnu.org/software/coreutils/shred. * Overwrite a file: `shred {{path/to/file}}` * Overwrite a file, leaving zeroes instead of random data: `shred --zero {{path/to/file}}` * Overwrite a file 25 times: `shred -n25 {{path/to/file}}` * Overwrite a file and remove it: `shred --remove {{path/to/file}}`
tac
Write each FILE to standard output, last line first. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --before attach the separator before instead of after -r, --regex interpret the separator as a regular expression -s, --separator=STRING use STRING as the separator instead of newline --help display this help and exit --version output version information and exit
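For example, with a comma as the separator (attached after each record by default), the records are emitted in reverse order:

    printf 'a,b,c,' | tac -s ','

prints c,b,a, to standard output.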
# tac > Display and concatenate files with lines in reversed order. See also: `cat`. > More information: https://www.gnu.org/software/coreutils/tac. * Concatenate specific files in reversed order: `tac {{path/to/file1 path/to/file2 ...}}` * Display `stdin` in reversed order: `{{cat path/to/file}} | tac` * Use a specific [s]eparator: `tac -s {{separator}} {{path/to/file1 path/to/file2 ...}}` * Use a specific [r]egex as a [s]eparator: `tac -r -s {{separator}} {{path/to/file1 path/to/file2 ...}}` * Attach the separator [b]efore each line instead of after: `tac -b {{path/to/file1 path/to/file2 ...}}`
write
The write utility shall read lines from the standard input and write them to the terminal of the specified user. When first invoked, it shall write the message: Message from sender-login-id (sending-terminal) [date]... to user_name. When it has successfully completed the connection, the sender's terminal shall be alerted twice to indicate that what the sender is typing is being written to the recipient's terminal. If the recipient wants to reply, this can be accomplished by typing: write sender-login-id [sending-terminal] upon receipt of the initial message. Whenever a line of input as delimited by an NL, EOF, or EOL special character (see the Base Definitions volume of POSIX.1‐2017, Chapter 11, General Terminal Interface) is accumulated while in canonical input mode, the accumulated data shall be written on the other user's terminal. Characters shall be processed as follows: * Typing <alert> shall write the <alert> character to the recipient's terminal. * Typing the erase and kill characters shall affect the sender's terminal in the manner described by the termios interface in the Base Definitions volume of POSIX.1‐2017, Chapter 11, General Terminal Interface. * Typing the interrupt or end-of-file characters shall cause write to write an appropriate message ("EOT\n" in the POSIX locale) to the recipient's terminal and exit. * Typing characters from LC_CTYPE classifications print or space shall cause those characters to be sent to the recipient's terminal. * When and only when the stty iexten local mode is enabled, the existence and processing of additional special control characters and multi-byte or single-byte functions is implementation-defined. * Typing other non-printable characters shall cause implementation-defined sequences of printable characters to be written to the recipient's terminal. To write to a user who is logged in more than once, the terminal argument can be used to indicate which terminal to write to; otherwise, the recipient's terminal is selected in an implementation-defined manner and an informational message is written to the sender's standard output, indicating which terminal was chosen. Permission to be a recipient of a write message can be denied or granted by use of the mesg utility. However, a user's privilege may further constrain the domain of accessibility of other users' terminals. The write utility shall fail when the user lacks appropriate privileges to perform the requested action.
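For example (the user and terminal names are illustrative), a sender might first check where the recipient is logged in with who, then pipe a single line to that terminal:

    who
    echo "Lunch at noon?" | write alice tty3

Because write reads standard input, the pipe ends the conversation after one line; invoked interactively, input continues until the end-of-file or interrupt character is typed.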
# write > Write a message on the terminal of a specified logged-in user (Ctrl-C to > stop writing messages). Use the `who` command to find the terminal IDs > of all active users on the system. See also `mesg`. More information: > https://manned.org/write. * Send a message to a given user on a given terminal ID: `write {{username}} {{terminal_id}}` * Send a message to "testuser" on terminal `/dev/tty5`: `write {{testuser}} {{tty5}}` * Send a message to "johndoe" on pseudo terminal `/dev/pts/5`: `write {{johndoe}} {{pts/5}}`
git-ls-remote
Displays references available in a remote repository along with the associated commit IDs. -h, --heads, -t, --tags Limit to only refs/heads and refs/tags, respectively. These options are not mutually exclusive; when given both, references stored in refs/heads and refs/tags are displayed. Note that git ls-remote -h used without anything else on the command line gives help, consistent with other git subcommands. --refs Do not show peeled tags or pseudorefs like HEAD in the output. -q, --quiet Do not print remote URL to stderr. --upload-pack=<exec> Specify the full path of git-upload-pack on the remote host. This allows listing references from repositories accessed via SSH and where the SSH daemon does not use the PATH configured by the user. --exit-code Exit with status "2" when no matching refs are found in the remote repository. Usually the command exits with status "0" to indicate it successfully talked with the remote repository, whether it found any matching refs. --get-url Expand the URL of the given remote repository taking into account any "url.<base>.insteadOf" config setting (See git-config(1)) and exit without talking to the remote. --symref In addition to the object pointed by it, show the underlying ref pointed by it when showing a symbolic ref. Currently, upload-pack only shows the symref HEAD, so it will be the only one shown by ls-remote. --sort=<key> Sort based on the key given. Prefix - to sort in descending order of the value. Supports "version:refname" or "v:refname" (tag names are treated as versions). The "version:refname" sort order can also be affected by the "versionsort.suffix" configuration variable. See git-for-each-ref(1) for more sort options, but be aware keys like committerdate that require access to the objects themselves will not work for refs whose objects have not yet been fetched from the remote, and will give a missing object error. -o <option>, --server-option=<option> Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character. When multiple --server-option=<option> are given, they are all sent to the other side in the order listed on the command line. <repository> The "remote" repository to query. This parameter can be either a URL or the name of a remote (see the GIT URLS and REMOTES sections of git-fetch(1)). <patterns>... When unspecified, all references, after filtering done with --heads and --tags, are shown. When <patterns>... are specified, only references matching one or more of the given patterns are displayed. Each pattern is interpreted as a glob (see glob in gitglossary(7)) which is matched against the "tail" of a ref, starting either from the start of the ref (so a full name like refs/heads/foo matches) or from a slash separator (so bar matches refs/heads/bar but not refs/heads/foobar).
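As a sketch (the remote name and patterns are illustrative), the following lists branch heads matching a pattern, then uses --exit-code to test whether a particular tag exists on the remote:

    git ls-remote --heads origin 'release/*'
    git ls-remote --exit-code --tags origin v2.1.0 || echo 'tag not found'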
# git ls-remote > Git command for listing references in a remote repository based on name or > URL. If no name or URL is given, the configured upstream branch is used, > or the remote origin if the former is not configured. More information: > https://git-scm.com/docs/git-ls-remote. * Show all references in the default remote repository: `git ls-remote` * Show only heads references in the default remote repository: `git ls-remote --heads` * Show only tags references in the default remote repository: `git ls-remote --tags` * Show all references from a remote repository based on name or URL: `git ls-remote {{repository_url}}` * Show references from a remote repository filtered by a pattern: `git ls-remote {{repository_name}} "{{pattern}}"`
git-merge
Incorporates changes from the named commits (since the time their histories diverged from the current branch) into the current branch. This command is used by git pull to incorporate changes from another repository and can be used by hand to merge changes from one branch into another. Assume the following history exists and the current branch is "master":

          A---B---C topic
         /
    D---E---F---G master

Then "git merge topic" will replay the changes made on the topic branch since it diverged from master (i.e., E) until its current commit (C) on top of master, and record the result in a new commit along with the names of the two parent commits and a log message from the user describing the changes. Before the operation, ORIG_HEAD is set to the tip of the current branch (C).

          A---B---C topic
         /         \
    D---E---F---G---H master

The second syntax ("git merge --abort") can only be run after the merge has resulted in conflicts. git merge --abort will abort the merge process and try to reconstruct the pre-merge state. However, if there were uncommitted changes when the merge started (and especially if those changes were further modified after the merge was started), git merge --abort will in some cases be unable to reconstruct the original (pre-merge) changes. Therefore: Warning: Running git merge with non-trivial uncommitted changes is discouraged: while possible, it may leave you in a state that is hard to back out of in the case of a conflict. The third syntax ("git merge --continue") can only be run after the merge has resulted in conflicts. --commit, --no-commit Perform the merge and commit the result. This option can be used to override --no-commit. With --no-commit perform the merge and stop just before creating a merge commit, to give the user a chance to inspect and further tweak the merge result before committing. Note that fast-forward updates do not create a merge commit and therefore there is no way to stop those merges with --no-commit. Thus, if you want to ensure your branch is not changed or updated by the merge command, use --no-ff with --no-commit. --edit, -e, --no-edit Invoke an editor before committing a successful mechanical merge to further edit the auto-generated merge message, so that the user can explain and justify the merge. The --no-edit option can be used to accept the auto-generated message (this is generally discouraged). The --edit (or -e) option is still useful if you are giving a draft message with the -m option from the command line and want to edit it in the editor. Older scripts may depend on the historical behaviour of not allowing the user to edit the merge log message. They will see an editor opened when they run git merge. To make it easier to adjust such scripts to the updated behaviour, the environment variable GIT_MERGE_AUTOEDIT can be set to no at the beginning of them. --cleanup=<mode> This option determines how the merge message will be cleaned up before committing. See git-commit(1) for more details. In addition, if the <mode> is given a value of scissors, scissors will be appended to MERGE_MSG before being passed on to the commit machinery in the case of a merge conflict. --ff, --no-ff, --ff-only Specifies how a merge is handled when the merged-in history is already a descendant of the current history. --ff is the default unless merging an annotated (and possibly signed) tag that is not stored in its natural place in the refs/tags/ hierarchy, in which case --no-ff is assumed.
With --ff, when possible resolve the merge as a fast-forward (only update the branch pointer to match the merged branch; do not create a merge commit). When not possible (when the merged-in history is not a descendant of the current history), create a merge commit. With --no-ff, create a merge commit in all cases, even when the merge could instead be resolved as a fast-forward. With --ff-only, resolve the merge as a fast-forward when possible. When not possible, refuse to merge and exit with a non-zero status. -S[<keyid>], --gpg-sign[=<keyid>], --no-gpg-sign GPG-sign the resulting merge commit. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space. --no-gpg-sign is useful to countermand both commit.gpgSign configuration variable, and earlier --gpg-sign. --log[=<n>], --no-log In addition to branch names, populate the log message with one-line descriptions from at most <n> actual commits that are being merged. See also git-fmt-merge-msg(1). With --no-log do not list one-line descriptions from the actual commits being merged. --signoff, --no-signoff Add a Signed-off-by trailer by the committer at the end of the commit log message. The meaning of a signoff depends on the project to which you’re committing. For example, it may certify that the committer has the rights to submit the work under the project’s license or agrees to some contributor representation, such as a Developer Certificate of Origin. (See http://developercertificate.org for the one used by the Linux kernel and Git projects.) Consult the documentation or leadership of the project to which you’re contributing to understand how the signoffs are used in that project. The --no-signoff option can be used to countermand an earlier --signoff option on the command line. --stat, -n, --no-stat Show a diffstat at the end of the merge. The diffstat is also controlled by the configuration option merge.stat. With -n or --no-stat do not show a diffstat at the end of the merge. --squash, --no-squash Produce the working tree and index state as if a real merge happened (except for the merge information), but do not actually make a commit, move the HEAD, or record $GIT_DIR/MERGE_HEAD (to cause the next git commit command to create a merge commit). This allows you to create a single commit on top of the current branch whose effect is the same as merging another branch (or more in case of an octopus). With --no-squash perform the merge and commit the result. This option can be used to override --squash. With --squash, --commit is not allowed, and will fail. --[no-]verify By default, the pre-merge and commit-msg hooks are run. When --no-verify is given, these are bypassed. See also githooks(5). -s <strategy>, --strategy=<strategy> Use the given merge strategy; can be supplied more than once to specify them in the order they should be tried. If there is no -s option, a built-in list of strategies is used instead (ort when merging a single head, octopus otherwise). -X <option>, --strategy-option=<option> Pass merge strategy specific option through to the merge strategy. --verify-signatures, --no-verify-signatures Verify that the tip commit of the side branch being merged is signed with a valid key, i.e. a key that has a valid uid: in the default trust model, this means the signing key has been signed by a trusted key. If the tip commit of the side branch is not signed with a valid key, the merge is aborted. 
--summary, --no-summary Synonyms to --stat and --no-stat; these are deprecated and will be removed in the future. -q, --quiet Operate quietly. Implies --no-progress. -v, --verbose Be verbose. --progress, --no-progress Turn progress on/off explicitly. If neither is specified, progress is shown if standard error is connected to a terminal. Note that not all merge strategies may support progress reporting. --autostash, --no-autostash Automatically create a temporary stash entry before the operation begins, record it in the special ref MERGE_AUTOSTASH and apply it after the operation ends. This means that you can run the operation on a dirty worktree. However, use with care: the final stash application after a successful merge might result in non-trivial conflicts. --allow-unrelated-histories By default, git merge command refuses to merge histories that do not share a common ancestor. This option can be used to override this safety when merging histories of two projects that started their lives independently. As that is a very rare occasion, no configuration variable to enable this by default exists and will not be added. -m <msg> Set the commit message to be used for the merge commit (in case one is created). If --log is specified, a shortlog of the commits being merged will be appended to the specified message. The git fmt-merge-msg command can be used to give a good default for automated git merge invocations. The automated message can include the branch description. --into-name <branch> Prepare the default merge message as if merging to the branch <branch>, instead of the name of the real branch to which the merge is made. -F <file>, --file=<file> Read the commit message to be used for the merge commit (in case one is created). If --log is specified, a shortlog of the commits being merged will be appended to the specified message. --rerere-autoupdate, --no-rerere-autoupdate After the rerere mechanism reuses a recorded resolution on the current conflict to update the files in the working tree, allow it to also update the index with the result of resolution. --no-rerere-autoupdate is a good way to double-check what rerere did and catch potential mismerges, before committing the result to the index with a separate git add. --overwrite-ignore, --no-overwrite-ignore Silently overwrite ignored files from the merge result. This is the default behavior. Use --no-overwrite-ignore to abort. --abort Abort the current conflict resolution process, and try to reconstruct the pre-merge state. If an autostash entry is present, apply it to the worktree. If there were uncommitted worktree changes present when the merge started, git merge --abort will in some cases be unable to reconstruct these changes. It is therefore recommended to always commit or stash your changes before running git merge. git merge --abort is equivalent to git reset --merge when MERGE_HEAD is present unless MERGE_AUTOSTASH is also present in which case git merge --abort applies the stash entry to the worktree whereas git reset --merge will save the stashed changes in the stash list. --quit Forget about the current merge in progress. Leave the index and the working tree as-is. If MERGE_AUTOSTASH is present, the stash entry will be saved to the stash list. --continue After a git merge stops due to conflicts you can conclude the merge by running git merge --continue (see "HOW TO RESOLVE CONFLICTS" section below). <commit>... Commits, usually other branch heads, to merge into our branch. 
Specifying more than one commit will create a merge with more than two parents (affectionately called an Octopus merge). If no commit is given from the command line, merge the remote-tracking branches that the current branch is configured to use as its upstream. See also the configuration section of this manual page. When FETCH_HEAD (and no other commit) is specified, the branches recorded in the .git/FETCH_HEAD file by the previous invocation of git fetch for merging are merged to the current branch.
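A minimal conflict-handling sketch, assuming a local branch named topic: start the merge, then either conclude it after resolving and staging the conflicts, or back out entirely:

    git merge --no-ff --log topic
    git merge --continue
    git merge --abort

The last two commands are alternatives: --continue concludes the stopped merge once the conflicts are resolved and staged, while --abort restores the pre-merge state.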
# git merge > Merge branches. More information: https://git-scm.com/docs/git-merge. * Merge a branch into your current branch: `git merge {{branch_name}}` * Edit the merge message: `git merge --edit {{branch_name}}` * Merge a branch and create a merge commit: `git merge --no-ff {{branch_name}}` * Abort a merge in case of conflicts: `git merge --abort` * Merge using a specific strategy: `git merge --strategy {{strategy}} --strategy-option {{strategy_option}} {{branch_name}}`
chown
The chown utility shall set the user ID of the file named by each file operand to the user ID specified by the owner operand. For each file operand, or, if the -R option is used, each file encountered while walking the directory trees specified by the file operands, the chown utility shall perform actions equivalent to the chown() function defined in the System Interfaces volume of POSIX.1‐2017, called with the following arguments: 1. The file operand shall be used as the path argument. 2. The user ID indicated by the owner portion of the first operand shall be used as the owner argument. 3. If the group portion of the first operand is given, the group ID indicated by it shall be used as the group argument; otherwise, the group ownership shall not be changed. Unless chown is invoked by a process with appropriate privileges, the set-user-ID and set-group-ID bits of a regular file shall be cleared upon successful completion; the set-user-ID and set- group-ID bits of other file types may be cleared. The chown utility shall conform to the Base Definitions volume of POSIX.1‐2017, Section 12.2, Utility Syntax Guidelines. The following options shall be supported by the implementation: -h For each file operand that names a file of type symbolic link, chown shall attempt to set the user ID of the symbolic link. If a group ID was specified, for each file operand that names a file of type symbolic link, chown shall attempt to set the group ID of the symbolic link. -H If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line, chown shall change the user ID (and group ID, if specified) of the directory referenced by the symbolic link and all files in the file hierarchy below it. -L If the -R option is specified and a symbolic link referencing a file of type directory is specified on the command line or encountered during the traversal of a file hierarchy, chown shall change the user ID (and group ID, if specified) of the directory referenced by the symbolic link and all files in the file hierarchy below it. -P If the -R option is specified and a symbolic link is specified on the command line or encountered during the traversal of a file hierarchy, chown shall change the owner ID (and group ID, if specified) of the symbolic link. The chown utility shall not follow the symbolic link to any other part of the file hierarchy. -R Recursively change file user and group IDs. For each file operand that names a directory, chown shall change the user ID (and group ID, if specified) of the directory and all files in the file hierarchy below it. Unless a -H, -L, or -P option is specified, it is unspecified which of these options will be used as the default. Specifying more than one of the mutually-exclusive options -H, -L, and -P shall not be considered an error. The last option specified shall determine the behavior of the utility.
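For instance (the user, group and path are illustrative), the following recursively reassigns ownership while treating every symbolic link encountered as the link itself rather than following it:

    chown -R -P webuser:www /srv/site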
# chown > Change user and group ownership of files and directories. More information: > https://www.gnu.org/software/coreutils/chown. * Change the owner user of a file/directory: `chown {{user}} {{path/to/file_or_directory}}` * Change the owner user and group of a file/directory: `chown {{user}}:{{group}} {{path/to/file_or_directory}}` * Recursively change the owner of a directory and its contents: `chown -R {{user}} {{path/to/directory}}` * Change the owner of a symbolic link: `chown -h {{user}} {{path/to/symlink}}` * Change the owner of a file/directory to match a reference file: `chown --reference={{path/to/reference_file}} {{path/to/file_or_directory}}`
sshfs
SSHFS allows you to mount a remote filesystem using SSH (more precisely, the SFTP subsystem). Most SSH servers support and enable this SFTP access by default, so SSHFS is very simple to use - there's nothing to do on the server-side. By default, file permissions are ignored by SSHFS. Any user that can access the filesystem will be able to perform any operation that the remote server permits - based on the credentials that were used to connect to the server. If this is undesired, local permission checking can be enabled with -o default_permissions. By default, only the mounting user will be able to access the filesystem. Access for other users can be enabled by passing -o allow_other. In this case you most likely also want to use -o default_permissions. It is recommended to run SSHFS as a regular user (not as root). For this to work the mountpoint must be owned by the user. If username is omitted SSHFS will use the local username. If the directory is omitted, SSHFS will mount the (remote) home directory. If you need to enter a password sshfs will ask for it (actually it just runs ssh which asks for the password if needed). -o opt,[opt...] mount options, see below for details. A variety of SSH options can be given here as well, see the manual pages for sftp(1) and ssh_config(5). -h, --help print help and exit. -V, --version print version information and exit. -d, --debug print debugging information. -p PORT equivalent to '-o port=PORT' -f do not daemonize, stay in foreground. -s Single threaded operation. -C equivalent to '-o compression=yes' -F ssh_configfile specifies alternative ssh configuration file -1 equivalent to '-o ssh_protocol=1' -o reconnect automatically reconnect to server if connection is interrupted. Attempts to access files that were opened before the reconnection will give errors and need to be re-opened. -o delay_connect Don't immediately connect to server, wait until mountpoint is first accessed. -o sshfs_sync synchronous writes. This will slow things down, but may be useful in some situations. -o no_readahead Only read exactly the data that was requested, instead of speculatively reading more to anticipate the next read request. -o sync_readdir synchronous readdir. This will slow things down, but may be useful in some situations. -o workaround=LIST Enable the specified workaround. See the Caveats section below for some additional information. Possible values are: rename Emulate overwriting an existing file by deleting and renaming. renamexdev Make rename fail with EXDEV instead of the default EPERM to allow moving files across remote filesystems. truncate Work around servers that don't support truncate by copying the whole file, truncating it locally, and sending it back. fstat Work around broken servers that don't support fstat() by using stat instead. buflimit Work around OpenSSH "buffer fillup" bug. createmode Work around broken servers that produce an error when passing a non-zero mode to create, by always passing a mode of 0. -o idmap=TYPE How to map remote UID/GIDs to local values. Possible values are: none no translation of the ID space (default). user map the UID/GID of the remote user to UID/GID of the mounting user. file translate UIDs/GIDs based upon the contents of --uidfile and --gidfile.
-o uidfile=FILE file containing username:uid mappings for -o idmap=file -o gidfile=FILE file containing groupname:gid mappings for -o idmap=file -o nomap=TYPE with idmap=file, how to handle missing mappings: ignore don't do any re-mapping error return an error (default) -o ssh_command=CMD execute CMD instead of 'ssh' -o ssh_protocol=N ssh protocol to use (default: 2) -o sftp_server=SERV path to sftp server or subsystem (default: sftp) -o directport=PORT directly connect to PORT bypassing ssh -o passive communicate over stdin and stdout bypassing network. Useful for mounting local filesystem on the remote side. An example using dpipe command would be dpipe /usr/lib/openssh/sftp-server = ssh RemoteHostname sshfs :/directory/to/be/shared ~/mnt/src -o passive -o disable_hardlink With this option set, attempts to call link(2) will fail with error code ENOSYS. -o transform_symlinks transform absolute symlinks on remote side to relative symlinks. This means that if e.g. on the server side /foo/bar/com is a symlink to /foo/blub, SSHFS will transform the link target to ../blub on the client side. -o follow_symlinks follow symlinks on the server, i.e. present them as regular files on the client. If a symlink is dangling (i.e., the target does not exist) the behavior depends on the remote server - the entry may appear as a symlink on the client, or it may appear as a regular file that cannot be accessed. -o no_check_root don't check for existence of 'dir' on server -o password_stdin read password from stdin (only for pam_mount!) -o dir_cache=BOOL Enables (yes) or disables (no) the SSHFS directory cache. The directory cache holds the names of directory entries. Enabling it allows readdir(3) system calls to be processed without network access. -o dcache_max_size=N sets the maximum size of the directory cache. -o dcache_timeout=N sets timeout for directory cache in seconds. -o dcache_{stat,link,dir}_timeout=N sets separate timeout for {attributes, symlinks, names} in the directory cache. -o dcache_clean_interval=N sets the interval for automatic cleaning of the directory cache. -o dcache_min_clean_interval=N sets the interval for forced cleaning of the directory cache when full. -o direct_io This option disables the use of page cache (file content cache) in the kernel for this filesystem. This has several effects: 1. Each read() or write() system call will initiate one or more read or write operations, data will not be cached in the kernel. 2. The return value of the read() and write() system calls will correspond to the return values of the read and write operations. This is useful for example if the file size is not known in advance (before reading it). e.g. /proc filesystem -o max_conns=N sets the maximum number of simultaneous SSH connections to use. Each connection is established with a separate SSH process. The primary purpose of this feature is to improve the responsiveness of the file system during large file transfers. When using more than one connection, the password_stdin and passive options can not be used, and the buflimit workaround is not supported. In addition, SSHFS accepts several options common to all FUSE file systems. These are described in the mount.fuse manpage (look for "general", "libfuse specific", and "high-level API" options).
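A sketch combining several of the options above (the hostname and paths are illustrative): mount with automatic reconnection, remote-to-local UID/GID mapping and symlink following, then unmount when done:

    sshfs -o reconnect,idmap=user,follow_symlinks alice@example.com:/var/www ~/mnt/www
    umount ~/mnt/www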
# sshfs > Filesystem client based on SSH. More information: > https://github.com/libfuse/sshfs. * Mount remote directory: `sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}}` * Unmount remote directory: `umount {{mountpoint}}` * Mount remote directory from server with specific port: `sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}} -p {{2222}}` * Use compression: `sshfs {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}} -C` * Follow symbolic links: `sshfs -o follow_symlinks {{username}}@{{remote_host}}:{{remote_directory}} {{mountpoint}}`
sleep
Pause for NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. NUMBER need not be an integer. Given two or more arguments, pause for the amount of time specified by the sum of their values. --help display this help and exit --version output version information and exit
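For example, because multiple arguments are summed, sleep 1m 30s pauses for 90 seconds, and sleep 0.5 pauses for half a second (suffixes and fractional values are GNU extensions; POSIX only requires an integer number of seconds).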
# sleep > Delay for a specified amount of time. More information: > https://pubs.opengroup.org/onlinepubs/9699919799/utilities/sleep.html. * Delay in seconds: `sleep {{seconds}}` * Execute a specific command after a 20-second delay: `sleep 20 && {{command}}`
manpath
If $MANPATH is set, manpath will simply display its contents and issue a warning. If not, manpath will determine a suitable manual page hierarchy search path and display the results. The colon-delimited path is determined using information gained from the man-db configuration file (/usr/local/etc/man_db.conf) and the user's environment. -q, --quiet Do not issue warnings. -d, --debug Print debugging information. -c, --catpath Produce a catpath as opposed to a manpath. Once the manpath is determined, each path element is converted to its relative catpath. -g, --global Produce a manpath consisting of all paths named as "global" within the man-db configuration file. -m system[,...], --systems=system[,...] If this system has access to other operating systems' manual hierarchies, this option can be used to include them in the output of manpath. To include NewOS's manual page hierarchies use the option -m NewOS. The system specified can be a combination of comma delimited operating system names. To include the native operating system's manual page hierarchies, the system name man must be included in the argument string. This option will override the $SYSTEM environment variable. -C file, --config-file=file Use this user configuration file rather than the default of ~/.manpath. -?, --help Print a help message and exit. --usage Print a short usage message and exit. -V, --version Display version information.
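For instance, to display the combined search path for the native hierarchies plus a hypothetical NewOS system, suppressing warnings:

    manpath -q -m NewOS,man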
# manpath > Determine the search path for manual pages. More information: > https://manned.org/manpath. * Display the search path used to find man pages: `manpath` * Show the entire global manpath: `manpath --global`
mv
Rename SOURCE to DEST, or move SOURCE(s) to DIRECTORY. Mandatory arguments to long options are mandatory for short options too. --backup[=CONTROL] make a backup of each existing destination file -b like --backup but does not accept an argument --debug explain how a file is copied. Implies -v -f, --force do not prompt before overwriting -i, --interactive prompt before overwrite -n, --no-clobber do not overwrite an existing file If you specify more than one of -i, -f, -n, only the final one takes effect. --no-copy do not copy if renaming fails --strip-trailing-slashes remove any trailing slashes from each SOURCE argument -S, --suffix=SUFFIX override the usual backup suffix -t, --target-directory=DIRECTORY move all SOURCE arguments into DIRECTORY -T, --no-target-directory treat DEST as a normal file --update[=UPDATE] control which existing files are updated; UPDATE={all,none,older(default)}. See below -u equivalent to --update[=older] -v, --verbose explain what is being done -Z, --context set SELinux security context of destination file to default type --help display this help and exit --version output version information and exit UPDATE controls which existing files in the destination are replaced. 'all' is the default operation when an --update option is not specified, and results in all existing files in the destination being replaced. 'none' is similar to the --no-clobber option, in that no files in the destination are replaced, but also skipped files do not induce a failure. 'older' is the default operation when --update is specified, and results in files being replaced if they're older than the corresponding source file. The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX. The version control method may be selected via the --backup option or through the VERSION_CONTROL environment variable. Here are the values: none, off never make backups (even if --backup is given) numbered, t make numbered backups existing, nil numbered if numbered backups exist, simple otherwise simple, never always make simple backups
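A short sketch of the backup machinery (the filenames are illustrative): with numbered backups, an existing destination file is preserved rather than lost:

    mv --backup=numbered notes.txt archive/

If archive/notes.txt already exists, it is first renamed to archive/notes.txt.~1~ before the move takes place.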
# mv > Move or rename files and directories. More information: > https://www.gnu.org/software/coreutils/mv. * Rename a file or directory when the target is not an existing directory: `mv {{path/to/source}} {{path/to/target}}` * Move a file or directory into an existing directory: `mv {{path/to/source}} {{path/to/existing_directory}}` * Move multiple files into an existing directory, keeping the filenames unchanged: `mv {{path/to/source1 path/to/source2 ...}} {{path/to/existing_directory}}` * Do not prompt for confirmation before overwriting existing files: `mv -f {{path/to/source}} {{path/to/target}}` * Prompt for confirmation before overwriting existing files, regardless of file permissions: `mv -i {{path/to/source}} {{path/to/target}}` * Do not overwrite existing files at the target: `mv -n {{path/to/source}} {{path/to/target}}` * Move files in verbose mode, showing files after they are moved: `mv -v {{path/to/source}} {{path/to/target}}`
whereis
whereis locates the binary, source and manual files for the specified command names. The supplied names are first stripped of leading pathname components. Prefixes of s. resulting from use of source code control are also dealt with. whereis then attempts to locate the desired program in the standard Linux places, and in the places specified by $PATH and $MANPATH. The search restrictions (options -b, -m and -s) are cumulative and apply to the subsequent name patterns on the command line. Any new search restriction resets the search mask. For example, whereis -bm ls tr -m gcc searches for "ls" and "tr" binaries and man pages, and for "gcc" man pages only. The options -B, -M and -S reset search paths for the subsequent name patterns. For example, whereis -m ls -M /usr/share/man/man1 -f cal searches for "ls" man pages in all default paths, but for "cal" in the /usr/share/man/man1 directory only. -b Search for binaries. -m Search for manuals. -s Search for sources. -u Only show the command names that have unusual entries. A command is said to be unusual if it does not have just one entry of each explicitly requested type. Thus 'whereis -m -u *' asks for those files in the current directory which have no documentation file, or more than one. -B list Limit the places where whereis searches for binaries, by a whitespace-separated list of directories. -M list Limit the places where whereis searches for manuals and documentation in Info format, by a whitespace-separated list of directories. -S list Limit the places where whereis searches for sources, by a whitespace-separated list of directories. -f Terminates the directory list and signals the start of filenames. It must be used when any of the -B, -M, or -S options is used. -l Output the list of effective lookup paths that whereis is using. When none of -B, -M, or -S is specified, the option will output the hard-coded paths that the command was able to find on the system. -g Interpret the next names as glob(7) patterns. whereis always compares only filenames (aka basename) and never complete path. Using directory names in the pattern has no effect. Don’t forget that the shell interprets the pattern when specified on the command line without quotes. It’s necessary to use quotes for the name, for example: whereis -g 'find*' -h, --help Display help text and exit. -V, --version Print version and exit.
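For example, quoting the pattern so the shell does not expand it first:

    whereis -g 'python*'

lists binaries, sources and man pages for every command whose basename begins with python.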
# whereis > Locate the binary, source, and manual page files for a command. More > information: https://manned.org/whereis. * Locate binary, source and man pages for ssh: `whereis {{ssh}}` * Locate binary and man pages for ls: `whereis -bm {{ls}}` * Locate source of gcc and man pages for Git: `whereis -s {{gcc}} -m {{git}}` * Locate binaries for gcc in `/usr/bin/` only: `whereis -b -B {{/usr/bin/}} -f {{gcc}}` * Locate unusual binaries (those that have more or less than one binary on the system): `whereis -u *` * Locate binaries that have unusual manual entries (binaries that have more or less than one manual installed): `whereis -u -m *`
git-daemon
A really simple TCP Git daemon that normally listens on port "DEFAULT_GIT_PORT" aka 9418. It waits for a connection asking for a service, and will serve that service if it is enabled. It verifies that the directory has the magic file "git-daemon-export-ok", and it will refuse to export any Git directory that hasn’t explicitly been marked for export this way (unless the --export-all parameter is specified). If you pass some directory paths as git daemon arguments, the offers are limited to repositories within those directories. By default, only upload-pack service is enabled, which serves git fetch-pack and git ls-remote clients, which are invoked from git fetch, git pull, and git clone. This is ideally suited for read-only updates, i.e., pulling from Git repositories. An upload-archive also exists to serve git archive. --strict-paths Match paths exactly (i.e. don’t allow "/foo/repo" when the real path is "/foo/repo.git" or "/foo/repo/.git") and don’t do user-relative paths. git daemon will refuse to start when this option is enabled and no directory arguments are provided. --base-path=<path> Remap all the path requests as relative to the given path. This is sort of "Git root" - if you run git daemon with --base-path=/srv/git on example.com, then if you later try to pull git://example.com/hello.git, git daemon will interpret the path as /srv/git/hello.git. --base-path-relaxed If --base-path is enabled and repo lookup fails, with this option git daemon will attempt to lookup without prefixing the base path. This is useful for switching to --base-path usage, while still allowing the old paths. --interpolated-path=<pathtemplate> To support virtual hosting, an interpolated path template can be used to dynamically construct alternate paths. The template supports %H for the target hostname as supplied by the client but converted to all lowercase, %CH for the canonical hostname, %IP for the server’s IP address, %P for the port number, and %D for the absolute path of the named repository. After interpolation, the path is validated against the directory list. --export-all Allow pulling from all directories that look like Git repositories (have the objects and refs subdirectories), even if they do not have the git-daemon-export-ok file. --inetd Have the server run as an inetd service. Implies --syslog (may be overridden with --log-destination=). Incompatible with --detach, --port, --listen, --user and --group options. --listen=<host_or_ipaddr> Listen on a specific IP address or hostname. IP addresses can be either an IPv4 address or an IPv6 address if supported. If IPv6 is not supported, then --listen=hostname is also not supported and --listen must be given an IPv4 address. Can be given more than once. Incompatible with --inetd option. --port=<n> Listen on an alternative port. Incompatible with --inetd option. --init-timeout=<n> Timeout (in seconds) between the moment the connection is established and the client request is received (typically a rather low value, since that should be basically immediate). --timeout=<n> Timeout (in seconds) for specific client sub-requests. This includes the time it takes for the server to process the sub-request and the time spent waiting for the next client’s request. --max-connections=<n> Maximum number of concurrent clients, defaults to 32. Set it to zero for no limit. --syslog Short for --log-destination=syslog. --log-destination=<destination> Send log messages to the specified destination. 
Note that this option does not imply --verbose, thus by default only error conditions will be logged. The <destination> must be one of: stderr Write to standard error. Note that if --detach is specified, the process disconnects from the real standard error, making this destination effectively equivalent to none. syslog Write to syslog, using the git-daemon identifier. none Disable all logging. The default destination is syslog if --inetd or --detach is specified, otherwise stderr. --user-path, --user-path=<path> Allow ~user notation to be used in requests. When specified with no parameter, a request to git://host/~alice/foo is taken as a request to access foo repository in the home directory of user alice. If --user-path=path is specified, the same request is taken as a request to access path/foo repository in the home directory of user alice. --verbose Log details about the incoming connections and requested files. --reuseaddr Use SO_REUSEADDR when binding the listening socket. This allows the server to restart without waiting for old connections to time out. --detach Detach from the shell. Implies --syslog. --pid-file=<file> Save the process id in file. Ignored when the daemon is run under --inetd. --user=<user>, --group=<group> Change daemon’s uid and gid before entering the service loop. When only --user is given without --group, the primary group ID for the user is used. The values of the option are given to getpwnam(3) and getgrnam(3) and numeric IDs are not supported. Giving these options is an error when used with --inetd; use the facility of inet daemon to achieve the same before spawning git daemon if needed. Like many programs that switch user id, the daemon does not reset environment variables such as $HOME when it runs git programs, e.g. upload-pack and receive-pack. When using this option, you may also want to set and export HOME to point at the home directory of <user> before starting the daemon, and make sure any Git configuration files in that directory are readable by <user>. --enable=<service>, --disable=<service> Enable/disable the service site-wide per default. Note that a service disabled site-wide can still be enabled per repository if it is marked overridable and the repository enables the service with a configuration item. --allow-override=<service>, --forbid-override=<service> Allow/forbid overriding the site-wide default with per repository configuration. By default, all the services may be overridden. --[no-]informative-errors When informative errors are turned on, git-daemon will report more verbose errors to the client, differentiating conditions like "no such repository" from "repository not exported". This is more convenient for clients, but may leak information about the existence of unexported repositories. When informative errors are not enabled, all errors report "access denied" to the client. The default is --no-informative-errors. --access-hook=<path> Every time a client connects, first run an external command specified by the <path> with service name (e.g. "upload-pack"), path to the repository, hostname (%H), canonical hostname (%CH), IP address (%IP), and TCP port (%P) as its command-line arguments. The external command can decide to decline the service by exiting with a non-zero status (or to allow it by exiting with a zero status). It can also look at the $REMOTE_ADDR and $REMOTE_PORT environment variables to learn about the requestor when making this decision.
The external command can optionally write a single line to its standard output to be sent to the requestor as an error message when it declines the service. <directory> The remaining arguments provide a list of directories. If any directories are specified, then the git-daemon process will serve a requested directory only if it is contained in one of these directories. If --strict-paths is specified, then the requested directory must match one of these directories exactly.
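A minimal read-only export sketch (the paths and hostname are illustrative): mark a repository as exportable, start the daemon detached with its base path, and clone from a client:

    touch /srv/git/project.git/git-daemon-export-ok
    git daemon --base-path=/srv/git --reuseaddr --detach --pid-file=/tmp/git-daemon.pid
    git clone git://example.com/project.git

Since --detach implies --syslog, errors are reported to the system log rather than to standard error.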
# git daemon > A really simple server for Git repositories. More information: https://git- > scm.com/docs/git-daemon. * Launch a Git daemon with a whitelisted set of directories: `git daemon --export-all {{path/to/directory1}} {{path/to/directory2}}` * Launch a Git daemon with a specific base directory and allow pulling from all sub-directories that look like Git repositories: `git daemon --base-path={{path/to/directory}} --export-all --reuseaddr` * Launch a Git daemon for the specified directory, verbosely printing log messages and allowing Git clients to write to it: `git daemon {{path/to/directory}} --enable=receive-pack --informative-errors --verbose`